New Nas - wont spindown

Discussion about hard drive spin down (standby) feature of NAS.
Bart001
New here
Posts: 5
Joined: Mon Mar 23, 2015 7:15 pm

New Nas - wont spindown

Post by Bart001 » Sun Jun 28, 2020 6:01 am

Brand new TS-253D with 2 brand new Seagate 4TB drives in RAID 1. Fully updated. The HDDs won't stop churning away over the first 3 days of ownership.

Ran the tool and got back the output below.
Note: after about the first 10 tests, I changed the spin-down time to 5 minutes AND I went into Multimedia and unchecked the thumbnail option, as I thought it might be churning away at making thumbnails of the album art for my ~1900 music albums on there. Any ideas??

===== Welcome to use blkdevMonitor_v2 on Sat Jun 27 14:59:56 EDT 2020 =====
Stop klogd.sh daemon... Done
Turn off/on VM block_dump & Clean dmesg
Countdown: 3 2 1
Start...
============= 0/100 test, Sat Jun 27 15:00:03 EDT 2020 ===============
<7>[22600.482431] smbstatus(24482): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 1/100 test, Sat Jun 27 15:01:57 EDT 2020 ===============
<7>[22630.726257] smbstatus(4311): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0
<7>[22630.727280] smbstatus(4311): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 2/100 test, Sat Jun 27 15:02:27 EDT 2020 ===============
<7>[22721.525385] jbd2/md9-8(2243): WRITE block 638480 on unknown-block(9,9) (8 sectors)

============= 3/100 test, Sat Jun 27 15:04:00 EDT 2020 ===============
<7>[22782.608609] smbstatus(17664): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 4/100 test, Sat Jun 27 15:04:59 EDT 2020 ===============
<7>[23058.603765] jbd2/md9-8(2243): WRITE block 639168 on unknown-block(9,9) (8 sectors)
<7>[23058.603842] jbd2/md9-8(2243): WRITE block 36040 on unknown-block(9,9) (8 sectors)
<7>[23058.603849] jbd2/md9-8(2243): WRITE block 36048 on unknown-block(9,9) (8 sectors)
<7>[23058.603854] jbd2/md9-8(2243): WRITE block 36056 on unknown-block(9,9) (8 sectors)
<7>[23058.603860] jbd2/md9-8(2243): WRITE block 36064 on unknown-block(9,9) (8 sectors)
<7>[23058.603866] jbd2/md9-8(2243): WRITE block 36072 on unknown-block(9,9) (8 sectors)

============= 5/100 test, Sat Jun 27 15:09:35 EDT 2020 ===============
<7>[23150.406821] smbstatus(27284): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 6/100 test, Sat Jun 27 15:11:07 EDT 2020 ===============
<7>[23425.509984] smbstatus(14715): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 7/100 test, Sat Jun 27 15:15:43 EDT 2020 ===============
<7>[23489.726223] jbd2/md9-8(2243): WRITE block 889528 on unknown-block(9,9) (8 sectors)

============= 8/100 test, Sat Jun 27 15:16:46 EDT 2020 ===============
<7>[23624.679720] jbd2/md9-8(2243): WRITE block 10808 on unknown-block(9,9) (8 sectors)

============= 9/100 test, Sat Jun 27 15:19:02 EDT 2020 ===============
<7>[23699.855277] smbstatus(8039): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 10/100 test, Sat Jun 27 15:20:16 EDT 2020 ===============
<7>[23761.405543] smbstatus(25953): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 11/100 test, Sat Jun 27 15:21:18 EDT 2020 ===============
<7>[23787.584026] jbd2/md9-8(2243): WRITE block 795760 on unknown-block(9,9) (8 sectors)

============= 12/100 test, Sat Jun 27 15:21:44 EDT 2020 ===============
<7>[23789.715303] disk_manage.cgi(2754): dirtied inode 7146 (qpkgStatus.con~) on md9

============= 13/100 test, Sat Jun 27 15:21:46 EDT 2020 ===============
<7>[24158.947050] jbd2/md9-8(2243): WRITE block 19280 on unknown-block(9,9) (8 sectors)

============= 14/100 test, Sat Jun 27 15:27:55 EDT 2020 ===============
<7>[24236.233074] jbd2/md9-8(2243): WRITE block 20032 on unknown-block(9,9) (8 sectors)
<7>[24236.233081] jbd2/md9-8(2243): WRITE block 20040 on unknown-block(9,9) (8 sectors)
<7>[24236.233087] jbd2/md9-8(2243): WRITE block 20048 on unknown-block(9,9) (8 sectors)
<7>[24236.233093] jbd2/md9-8(2243): WRITE block 20056 on unknown-block(9,9) (8 sectors)
<7>[24236.233099] jbd2/md9-8(2243): WRITE block 20064 on unknown-block(9,9) (8 sectors)

============= 15/100 test, Sat Jun 27 15:29:12 EDT 2020 ===============
<7>[24427.219058] jbd2/dm-0-8(3126): WRITE block 755259168 on unknown-block(253,0) (8 sectors)

============= 16/100 test, Sat Jun 27 15:32:24 EDT 2020 ===============
<7>[24585.484299] smbstatus(21624): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 17/100 test, Sat Jun 27 15:35:02 EDT 2020 ===============
<7>[24601.682043] md9_raid1(2230): WRITE block 1060216 on unknown-block(8,16) (1 sectors)

============= 18/100 test, Sat Jun 27 15:35:18 EDT 2020 ===============
<7>[24739.514353] smbstatus(1225): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 19/100 test, Sat Jun 27 15:37:36 EDT 2020 ===============
<7>[25226.372325] rsyslogd(30782): dirtied inode 13286 (kmsg) on md9
<7>[25226.372368] jbd2/md9-8(2243): WRITE block 995760 on unknown-block(9,9) (8 sectors)

============= 20/100 test, Sat Jun 27 15:45:43 EDT 2020 ===============
<7>[25439.509647] jbd2/md9-8(2243): WRITE block 31408 on unknown-block(9,9) (8 sectors)

============= 21/100 test, Sat Jun 27 15:49:16 EDT 2020 ===============
<7>[25447.122964] jbd2/dm-0-8(3126): WRITE block 755259768 on unknown-block(253,0) (8 sectors)

============= 22/100 test, Sat Jun 27 15:49:24 EDT 2020 ===============
<7>[25562.539198] smbstatus(31972): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 23/100 test, Sat Jun 27 15:51:19 EDT 2020 ===============
<7>[25632.978950] jbd2/dm-0-8(3126): WRITE block 755259

============= 24/100 test, Sat Jun 27 15:52:29 EDT 2020 ===============
<7>[25868.425210] smbstatus(13046): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 25/100 test, Sat Jun 27 15:56:25 EDT 2020 ===============
<7>[26223.824996] jbd2/md9-8(2243): WRITE block 791592 on unknown-block(9,9) (8 sectors)

============= 26/100 test, Sat Jun 27 16:02:20 EDT 2020 ===============
<7>[26447.589575] smbstatus(1457): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 27/100 test, Sat Jun 27 16:06:04 EDT 2020 ===============
<7>[26631.853853] smbstatus(1833): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 28/100 test, Sat Jun 27 16:09:08 EDT 2020 ===============
<7>[26662.588712] smbstatus(12291): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 29/100 test, Sat Jun 27 16:09:39 EDT 2020 ===============
<7>[26805.468084] md9_raid1(2230): WRITE block 1060232 on unknown-block(8,0) (1 sectors)
<7>[26805.468104] md9_raid1(2230): WRITE block 1060232 on unknown-block(8,16) (1 sectors)

============= 30/100 test, Sat Jun 27 16:12:03 EDT 2020 ===============
<7>[27336.400269] smbstatus(3092): dirtied inode 1572890 (smbXsrv_session_global.tdb) on dm-0

============= 31/100 test, Sat Jun 27 16:20:53 EDT 2020 ===============
<7>[27457.759781] md1_raid1(2581): WRITE block 7794127504 on unknown-block(8,0) (1 sectors)
<7>[27457.759812] md1_raid1(2581): WRITE block 7794127504 on unknown-block(8,16) (1 sectors)

============= 32/100 test, Sat Jun 27 16:22:54 EDT 2020 ===============
<7>[27865.910836] rsyslogd(30782): dirtied inode 13286 (kmsg) on md9

============= 33/100 test, Sat Jun 27 16:29:42 EDT 2020 ===============
<7>[27884.238798] setcfg(16062): dirtied inode 7012 (CACHEDEV1_DATA.lo~) on md9

============= 34/100 test, Sat Jun 27 16:30:01 EDT 2020 ===============
<7>[27887.389152] smbstatus(21425): dirtied inode 1572891 (smbXsrv_tcon_global.tdb) on dm-0

============= 35/100 test, Sat Jun 27 16:30:04 EDT 2020 ===============
<7>[29084.267728] rsyslogd(30782): dirtied inode 13286 (kmsg) on md9

============= 36/100 test, Sat Jun 27 16:50:01 EDT 2020 ===============
<7>[29116.007732] rsyslogd(30782): dirtied inode 13286 (kmsg) on md9

============= 37/100 test, Sat Jun 27 16:50:32 EDT 2020 ===============
<7>[29702.872685] rsyslogd(30782): dirtied inode 13286 (kmsg) on md9

============= 38/100 test, Sat Jun 27 17:00:20 EDT 2020 ===============
<7>[30938.839485] md9_raid1(2230): WRITE block 1060216 on unknown-block(8,0) (1 sectors)
<7>[30938.839511] md9_raid1(2230): WRITE block 1060216 on unknown-block(8,16) (1 sectors)

============= 39/100 test, Sat Jun 27 17:20:55 EDT 2020 ===============
<7>[31137.116370] smbd(13096): dirtied inode 1572899 (smbXsrv_open_global.tdb) on dm-0

============= 40/100 test, Sat Jun 27 17:24:13 EDT 2020 ===============
<7>[31149.266458] jbd2/md9-8(2243): WRITE block 793408 on unknown-bloc

============= 41/100 test, Sat Jun 27 17:24:25 EDT 2020 ===============
<7>[31313.667516] md9_raid1(2230): WRITE block 1060232 on unknown-block(8,0) (1 sectors)

============= 42/100 test, Sat Jun 27 17:27:10 EDT 2020 ===============
<7>[32684.264325] md1_raid1(2581): WRITE block 7794127504 on unknown-block(8,0) (1 sectors)
<7>[32684.264351] md1_raid1(2581): WRITE block 7794127504 on unknown-block(8,16) (1 sectors)

Bart001
New here
Posts: 5
Joined: Mon Mar 23, 2015 7:15 pm

Re: New Nas - wont spindown

Post by Bart001 » Sun Jun 28, 2020 10:21 pm

I opened a ticket with qnap customer service -- do they help with this issue?

Erland552
Starting out
Posts: 16
Joined: Tue Jun 25, 2019 9:48 pm

Re: New Nas - wont spindown

Post by Erland552 » Thu Jul 16, 2020 4:19 am

In my experience, no, they don't. They don't even read the text in your original submission, and they ask redundant questions they already have the answer to.

If you can return your QNAP NAS, do it and buy a Synology product.

oRThYpeC
First post
Posts: 1
Joined: Wed Jul 22, 2020 4:30 am

Re: New Nas - wont spindown

Post by oRThYpeC » Wed Jul 22, 2020 4:37 am

Bart001 wrote:
Sun Jun 28, 2020 10:21 pm
I opened a ticket with qnap customer service -- do they help with this issue?
Hi Bart001, did you manage to solve this? I have exactly the same issues with my TS-251D.

Toxic17
Ask me anything
Posts: 5432
Joined: Tue Jan 25, 2011 11:41 pm
Location: Planet Earth

Re: New Nas - wont spindown

Post by Toxic17 » Wed Jul 22, 2020 7:06 am

Please read the following article. Setting standby mode is not as simple as just on or off.

https://www.qnap.com/en/how-to/faq/arti ... ndby-mode/
Regards Simon

QTS 4.x User Guide

QNAP Club Repository
Submit a ticket • QNAP Helpdesk
QNAP Tutorials, User Manuals, FAQs, Downloads, Wiki
When you ask a question, please include the following


NAS: TS-473-32GB QM2-2P QXG-10G1T 4.4.2.1354 • TVS-463-16GB 4.4.2.1354 QM2-2S10G1TB • TS-459 Pro 2GB 4.2.6 • TS-121 4.3.3.1161 • APC Back-UPS ES 700G •
QPKG's: Plex 1.19.3 • Apache73 v2443.74070 • QSonarr 3.0.3.809 • QNBZGet 21.0 • phpMyAdmin 5.0.2 • Qmono 6.80.105 • McAfee 3.1.0 -6010 • HBS 3.0.200424 • LEgo v3.6.0
Network: VM Hub 3.0 <500/35> • UniFi USG Pro 4 • UniFi USW-16-150W • UniFi USW-8-60W • UniFi CloudKey Gen2+ • UniFi G3-Flex • UAP AC Pro • UAP AC Lite • SLM2008 • Dell 7050 MFF •

tumbi
New here
Posts: 8
Joined: Sun Aug 09, 2020 2:42 pm

Re: New Nas - wont spindown

Post by tumbi » Sat Aug 15, 2020 3:37 pm

I have a new TS-431-P2 (my first QNAP), currently running with three 1TB drives and a small SSD. Just waiting for some new Ironwolf drives to upgrade capacity.

It is running well but I too have been unable to get the drives to go into standby mode. The computer is in Sleep mode and there are no other users - just me. Nothing scheduled in the QNAP, no backups, not even the auto Time updates. I have switched off everything I can find, but still all the green lights stay on.

Can someone help me please?

I am relatively new to NAS and am a Windows user with little exposure to UNIX/Linux. Checking online, I find several articles offering help to fix the problem, but the advice seems to assume some proficiency with UNIX. For example, the article suggested by Toxic17 made several suggestions, but they are largely outside my experience level and not much help to me really.

It goes on to say: "If problems persist, you can connect to your NAS ... and issue a ps command. This will show all running processes."
Well, I did this ps command, but it produces PAGES of hieroglyphics that mean nothing to me!
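
For anyone in the same boat, those pages of ps output can be boiled down to a quick count of which processes are running. This is a generic sketch, not a QNAP-documented procedure: it assumes the command name is in column 5, which is typical for BusyBox ps but may differ on your firmware.

```shell
# Count how many copies of each process are running, busiest first
# (assumes field 5 is the command name, as on a typical BusyBox ps)
ps | awk 'NR > 1 { print $5 }' | sort | uniq -c | sort -rn | head -20
```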

One suggestion that I thought might be relevant:
Q'center Agent. Q'center Agent regularly collects disk health and monitoring information, which prevents drives from entering standby mode.

I am getting reports in my email like this: “[Storage & Snapshots] Uploaded disk analysis data. Disk: Host: Disk 1.” so it would appear my system might be running this Q’center Agent? How can I turn that off please? That might be all I need?

Appreciate any help.

tumbi
New here
Posts: 8
Joined: Sun Aug 09, 2020 2:42 pm

Re: New Nas - wont spindown

Post by tumbi » Mon Aug 17, 2020 11:16 am

***** SUCCESS *********

I did it! The NAS is now going into standby! At least the Status LED goes off (while the drive lights remain a steady Green).

I suppose you want to know how I did it? Well, to cut a long story short ... I am not sure!

But I did go through EVERY possible option I could find in the NAS Control Panel, and I think the last thing I changed was the Global Notification Settings in the Notification Center. I unchecked the headers for Email, SMS, Instant Messaging and Push Service, effectively turning off everything there.

Until I explore further, selectively turning various settings back ON again, I cannot be too sure just what has done the trick. But something DID the trick!

One thing I can tell you is I did everything through the online Web interface. There has been absolutely NO Telnet/SSH command-line activity with confusing (to me) UNIX commands. I am a UNIX/Linux novice and prefer to stay away from that sort of thing. So if I can do it, you can do it.

I will post further if I manage to identify the offending activity. And of course I welcome your contributions if you find it before I do.

Dynamac
First post
Posts: 1
Joined: Sun Aug 23, 2020 2:39 am

Re: New Nas - wont spindown

Post by Dynamac » Sun Aug 23, 2020 2:44 am

tumbi wrote:
Mon Aug 17, 2020 11:16 am
***** SUCCESS *********

I did it! The NAS is now going into standby! At least the Status LED goes off (while the drive lights remain a steady Green).

I suppose you want to know how I did it? Well, to cut a long story short ... I am not sure!
Thanks for the update and for giving hope there is a solution. Do you map drives in Windows for access in Windows Explorer? The FAQ suggests this may cause problems, but it's a pretty crucial function of a NAS for me. I find the NAS still churns when my laptop is asleep, though.

Anyway please keep us posted if you figure out what actually caused the issue for you.

Gwild2020
First post
Posts: 1
Joined: Fri Sep 04, 2020 9:30 pm

Re: New Nas - wont spindown

Post by Gwild2020 » Fri Sep 04, 2020 9:37 pm

I suspect - and this is just an educated hunch - that RAIDs require initialization, similar to a single-disk format, but one that writes info to all of the drives to ease housekeeping and parity checking. Most controllers - and I assume QNAP does this too - offer "Quick Init", which inits the first few sectors to allow near-instant use of the RAID but does the full init in the background.

Think Windows disk format: you can select "Quick Format" which only writes the boot sectors and allocation tables and the first few sectors, or you can require it do a full format before the drive is usable. For sanity and time savings, most of us opt for Quick Format.

PS: RAIDs also do background parity checks or data verification on some schedule, so they will churn at odd times. 20TB at 300MB/s ... do the math: it takes a while to check large arrays.
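
The "do the math" above works out like this, using the post's own figures (20 TB read at 300 MB/s):

```shell
# Time to read an entire 20 TB array at 300 MB/s for a parity check
awk 'BEGIN {
  seconds = 20e12 / 300e6              # bytes / (bytes per second)
  printf "%.0f seconds, about %.1f hours\n", seconds, seconds / 3600
}'
```

So a single scheduled scrub keeps the drives awake for the better part of a day.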

storageneeded
Starting out
Posts: 39
Joined: Sun Jan 01, 2012 2:40 am

Re: New Nas - wont spindown

Post by storageneeded » Fri Oct 30, 2020 3:53 am

This has been an ongoing issue with Qnap for many years. They refuse to acknowledge the problem is using the NAS array for frequent writes by the QTS operating system. Drives can't remain in standby if you're writing to them multiple times each hour. This is something that happens right out of the box with any Qnap NAS. You plug your NAS in, set it up, set the drives to sleep after say a few hours, and find out the drives don't even stay asleep for 10 minutes before some Qnap process forces them to spin back up. This happens even with the network connection unplugged so you can't blame it on other network activity. It's just a complete fail by Qnap and sadly one they won't even acknowledge. If the feature can't possibly work they need to just remove it otherwise they need to finally fix it and say store the writes in a RAM disk instead.
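
Roughly the same evidence Bart001's tool gathered can be collected by hand to see which QTS processes are doing the writing. This is only a sketch of the underlying mechanism: the vm.block_dump knob is a standard Linux feature (removed in kernels 5.13 and later), it needs root, and the awk field assumes dmesg lines shaped like the log above.

```shell
# Log every block write and dirtied inode to the kernel log for a minute,
# then tally which processes did the writing
echo 1 > /proc/sys/vm/block_dump
sleep 60
dmesg | grep -E 'WRITE|dirtied' \
  | awk '{ sub(/\(.*/, "", $2); print $2 }' \
  | sort | uniq -c | sort -rn
echo 0 > /proc/sys/vm/block_dump
```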

Moogle Stiltzkin
Ask me anything
Posts: 9289
Joined: Thu Dec 04, 2008 12:21 am
Location: Around the world....

Re: New Nas - wont spindown

Post by Moogle Stiltzkin » Fri Oct 30, 2020 1:08 pm

storageneeded wrote:
Fri Oct 30, 2020 3:53 am
This has been an ongoing issue with Qnap for many years. They refuse to acknowledge the problem is using the NAS array for frequent writes by the QTS operating system. Drives can't remain in standby if you're writing to them multiple times each hour. This is something that happens right out of the box with any Qnap NAS. You plug your NAS in, set it up, set the drives to sleep after say a few hours, and find out the drives don't even stay asleep for 10 minutes before some Qnap process forces them to spin back up. This happens even with the network connection unplugged so you can't blame it on other network activity. It's just a complete fail by Qnap and sadly one they won't even acknowledge. If the feature can't possibly work they need to just remove it otherwise they need to finally fix it and say store the writes in a RAM disk instead.
just wondering but how do other vendors deal with this issue? what is their solution to get this to work? :' then maybe we can also forward that to qnap as a feature request hopefully.

personally i never got the spindown to work well for me, so i just did not bother doing that. i just disable the spindown job.
NAS
[Main Server] QNAP TS-877 w. 4tb [ 3x HGST Deskstar NAS (HDN724040ALE640) & 1x WD RED NAS ] EXT4 Raid5 & 2 x m.2 SATA Samsung 850 Evo raid1 +16gb ddr4 Crucial+ QWA-AC2600 wireless+QXP PCIE
[Backup] QNAP TS-653A w. 5x 2TB Samsung F3 (HD203WI) EXT4 Raid5
[Backup] QNAP TL-D400S 2x 4TB WD Red Nas (WD40EFRX) single disks.
[^] QNAP TS-659 Pro II
[^] QNAP TS-509 Pro w. 4x 1TB WD RE3 (WD1002FBYS) EXT4 Raid5
[^] QNAP TS-228
[^] QNAP TS-128
[Mobile NAS] TBS-453DX w. 2x Crucial MX500 500gb EXT4 raid1

Network
Asus AC68U Router|100dl/50ul MBPS FTTH Internet | Win10, WC PC-Intel i7 920 Ivy bridge desktop (1x 512gb Samsung 850 Pro SSD + 1x 4tb HGST Ultrastar 7K4000)


Guides & articles
[Review] Moogle's QNAP experience
[Review] Moogle's TS-877 review
https://www.patreon.com/mooglestiltzkin

storageneeded
Starting out
Posts: 39
Joined: Sun Jan 01, 2012 2:40 am

Re: New Nas - wont spindown

Post by storageneeded » Sat Oct 31, 2020 12:14 am

Moogle Stiltzkin wrote:
Fri Oct 30, 2020 1:08 pm

just wondering but how do other vendors deal with this issue? what is their solution to get this to work? :' then maybe we can also forward that to qnap as a feature request hopefully.

personally i never got the spindown to work well for me, so i just did not bother doing that. i just disable the spindown job.
AFAIK other manufacturers use a RAM disk, or other storage if the hardware has it, instead of using the main storage array. You can use a RAM disk for temporary operating-system activity and, the next time the storage array is spun back up, just transfer the updates to the array.
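
What a RAM-backed scratch area looks like on a generic Linux box, as an illustration of the idea only (QTS does not expose this as a user setting, and the mount point here is made up for the example):

```shell
# Create a 64 MB RAM disk; anything written under it lives in memory
# and never touches the spinning array
mkdir -p /mnt/ramscratch
mount -t tmpfs -o size=64m tmpfs /mnt/ramscratch
# The OS could stage its frequent small writes here and flush them to
# the array only when the disks are already spun up for real work
```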

5 watts per drive doesn't sound like much but, if you do the math, this non-working feature is huge. A typical large capacity 3.5 in drive uses about 5.5 watts just sitting there idle doing nothing. In many applications a NAS is only used periodically, like in the evening for streaming media or even less often for running backups, but let's be generous and say it averages out to 8 hours of usage a day. So that's 16 hours a day every drive in every Qnap NAS could be drawing 0.5 watts instead of 5.5 watts.

The current NAS market is about $20 billion per year and that might translate to 20 to 40 million units total. Let's again be generous and say only 20 million units. And let's further be generous and say Qnap has only 10% of the market which would be 2 million Qnap NAS units sold per year.

Let's say at least 8 million of those Qnap NAS units are in operation (4 years worth of sales). With an average of 4 bays, that's 16 million hard drives using 5 watts times 16 hours per day they don't need to.

That's a staggering 1.28 million kWh per day of wasted energy, or 467 MILLION kWh per year. The average American home uses about 10,000 kWh per year, so that's enough energy to power 46,700 homes. How is that something not worth fixing?

Here in the US we have relatively cheap electricity, around $0.10 per kWh, so that's around $46 MILLION worth of wasted electricity per year because Qnap is unwilling to fix this problem, one that has persisted for the last several years across lots of QTS versions and new hardware models. They just don't seem to care.
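
For anyone who wants to check the arithmetic, here it is using the post's own assumptions (16 million drives, 5 W saved per drive, 16 idle hours a day, $0.10 per kWh):

```shell
# Recompute the wasted-energy estimate; all inputs are the post's assumptions
awk 'BEGIN {
  drives = 16000000; watts = 5; hours = 16
  kwh_day = drives * watts * hours / 1000      # kWh wasted per day
  kwh_yr  = kwh_day * 365
  printf "%.0f kWh/day, %.0f kWh/yr, %.0f homes, $%.1f million/yr\n",
         kwh_day, kwh_yr, kwh_yr / 10000, kwh_yr * 0.10 / 1e6
}'
```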

If I got the math wrong, QNAP, please let us all know how much energy a year you're wasting because you can't make a simple feature work right.
