Help with understanding the log from blkdevMonitor_v2

Discussion about hard drive spin down (standby) feature of NAS.
Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Sun Dec 29, 2019 5:21 am

Hi

I just installed my QNAP on an SSD and afterwards added a 2nd HDD to store data on.

But without ever adding anything further, I noticed that there is access on said HDD.

I have made a log dump using blkdevMonitor_v2, but I do not understand it, and I cannot see any access to my secondary HDD in the log.
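For context, blkdevMonitor_v2 essentially turns on the kernel's block_dump logging and then repeatedly dumps the kernel log; that is where the <7> lines below come from, and it is what the "Turn off block_dump" / "Start klogd.sh daemon" lines at the very end refer to (the real script also stops and restarts klogd around the test). A rough sketch of that technique, not the actual script; the loop count and sleep interval are only illustrative:

Code: Select all

#!/bin/sh
# Rough sketch of the block_dump technique blkdevMonitor_v2 relies on (not the real script).
# block_dump makes the kernel log every block read/write and every dirtied inode.

echo 1 > /proc/sys/vm/block_dump      # enable block I/O logging

i=1
while [ "$i" -le 100 ]; do            # 100 passes, matching the "96/100 test" style headers
    echo "============= $i/100 test, $(date) ==============="
    dmesg -c                          # print and clear the kernel ring buffer
    sleep 10                          # illustrative interval
    i=$((i + 1))
done

echo 0 > /proc/sys/vm/block_dump      # turn block I/O logging back off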

============= 96/100 test, Sat Dec 28 22:15:25 CET 2019 ===============
<7>[289173.891141] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289174.293128] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289173.855030] smbstatus(5363): dirtied inode 197656634 (5363) on dm-12
<7>[289173.855257] smbstatus(5363): dirtied inode 197656588 (names.tdb) on dm-12
<7>[289173.855270] smbstatus(5363): dirtied inode 197656588 (names.tdb) on dm-12
<7>[289173.879851] smbstatus(5363): dirtied inode 197656602 (smbXsrv_session_global.tdb) on dm-12
<7>[289173.879866] smbstatus(5363): dirtied inode 197656602 (smbXsrv_session_global.tdb) on dm-12
<7>[289173.880285] smbstatus(5363): dirtied inode 197656603 (smbXsrv_tcon_global.tdb) on dm-12
<7>[289173.880296] smbstatus(5363): dirtied inode 197656603 (smbXsrv_tcon_global.tdb) on dm-12

============= 97/100 test, Sat Dec 28 22:15:37 CET 2019 ===============
<7>[289180.128182] jbd2/md13-8(3951): WRITE block 532272 on unknown-block(9,13) (8 sectors)
<7>[289180.132207] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289180.132214] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289180.169523] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289180.169561] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289180.186084] jbd2/md13-8(3951): WRITE block 532280 on unknown-block(9,13) (8 sectors)
<7>[289180.186122] jbd2/md13-8(3951): WRITE block 532288 on unknown-block(9,13) (8 sectors)
<7>[289180.186131] jbd2/md13-8(3951): WRITE block 532296 on unknown-block(9,13) (8 sectors)
<7>[289180.186140] jbd2/md13-8(3951): WRITE block 532304 on unknown-block(9,13) (8 sectors)
<7>[289180.186152] jbd2/md13-8(3951): WRITE block 532312 on unknown-block(9,13) (8 sectors)
<7>[289180.186605] jbd2/md13-8(3951): WRITE block 532320 on unknown-block(9,13) (8 sectors)
<7>[289180.524167] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289180.524202] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289180.552821] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289180.552851] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289185.254114] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289185.254149] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289185.293631] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289185.293668] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289185.548164] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289185.548203] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289185.560262] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289185.560293] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289180.128182] jbd2/md13-8(3951): WRITE block 532272 on unknown-block(9,13) (8 sectors)
<7>[289180.132161] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289180.132207] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289180.132214] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289180.169523] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289180.169561] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289180.186084] jbd2/md13-8(3951): WRITE block 532280 on unknown-block(9,13) (8 sectors)
<7>[289180.186122] jbd2/md13-8(3951): WRITE block 532288 on unknown-block(9,13) (8 sectors)
<7>[289180.186131] jbd2/md13-8(3951): WRITE block 532296 on unknown-block(9,13) (8 sectors)
<7>[289180.186140] jbd2/md13-8(3951): WRITE block 532304 on unknown-block(9,13) (8 sectors)
<7>[289180.186152] jbd2/md13-8(3951): WRITE block 532312 on unknown-block(9,13) (8 sectors)
<7>[289180.186605] jbd2/md13-8(3951): WRITE block 532320 on unknown-block(9,13) (8 sectors)
<7>[289180.524167] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289180.524202] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289180.527156] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289180.552821] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289180.552851] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289185.254114] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289185.254149] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289185.293631] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289185.293668] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289185.548164] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289185.548203] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289185.560262] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289185.560293] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289186.212789] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289186.557122] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289180.128192] jbd2/dm-12-8(3416): WRITE block 882512304 on unknown-block(253,12) (8 sectors)
<7>[289180.133369] jbd2/dm-12-8(3416): WRITE block 882512312 on unknown-block(253,12) (8 sectors)
<7>[289180.133400] jbd2/dm-12-8(3416): WRITE block 882512320 on unknown-block(253,12) (8 sectors)
<7>[289180.133408] jbd2/dm-12-8(3416): WRITE block 882512328 on unknown-block(253,12) (8 sectors)
<7>[289180.133419] jbd2/dm-12-8(3416): WRITE block 882512336 on unknown-block(253,12) (8 sectors)
<7>[289180.133425] jbd2/dm-12-8(3416): WRITE block 882512344 on unknown-block(253,12) (8 sectors)
<7>[289180.133432] jbd2/dm-12-8(3416): WRITE block 882512352 on unknown-block(253,12) (8 sectors)
<7>[289180.133438] jbd2/dm-12-8(3416): WRITE block 882512360 on unknown-block(253,12) (8 sectors)
<7>[289180.133445] jbd2/dm-12-8(3416): WRITE block 882512368 on unknown-block(253,12) (8 sectors)
<7>[289180.133451] jbd2/dm-12-8(3416): WRITE block 882512376 on unknown-block(253,12) (8 sectors)
<7>[289180.133457] jbd2/dm-12-8(3416): WRITE block 882512384 on unknown-block(253,12) (8 sectors)
<7>[289180.133463] jbd2/dm-12-8(3416): WRITE block 882512392 on unknown-block(253,12) (8 sectors)
<7>[289180.134522] jbd2/dm-12-8(3416): WRITE block 882512400 on unknown-block(253,12) (8 sectors)

============= 98/100 test, Sat Dec 28 22:15:48 CET 2019 ===============
<7>[289190.572164] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289191.908159] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289192.181145] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289202.598166] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289202.838123] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289191.905189] jbd2/dm-12-8(3416): WRITE block 882512408 on unknown-block(253,12) (8 sectors)
<7>[289191.909378] jbd2/dm-12-8(3416): WRITE block 882512416 on unknown-block(253,12) (8 sectors)
<7>[289191.909410] jbd2/dm-12-8(3416): WRITE block 882512424 on unknown-block(253,12) (8 sectors)
<7>[289191.909417] jbd2/dm-12-8(3416): WRITE block 882512432 on unknown-block(253,12) (8 sectors)
<7>[289191.909563] jbd2/dm-12-8(3416): WRITE block 882512440 on unknown-block(253,12) (8 sectors)

============= 99/100 test, Sat Dec 28 22:16:01 CET 2019 ===============
<7>[289209.824166] jbd2/md13-8(3951): WRITE block 532328 on unknown-block(9,13) (8 sectors)
<7>[289209.828114] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289209.828150] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289209.865124] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289209.865148] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289209.881802] jbd2/md13-8(3951): WRITE block 532336 on unknown-block(9,13) (8 sectors)
<7>[289209.881815] jbd2/md13-8(3951): WRITE block 532344 on unknown-block(9,13) (8 sectors)
<7>[289209.881824] jbd2/md13-8(3951): WRITE block 532352 on unknown-block(9,13) (8 sectors)
<7>[289209.881833] jbd2/md13-8(3951): WRITE block 532360 on unknown-block(9,13) (8 sectors)
<7>[289209.881842] jbd2/md13-8(3951): WRITE block 532368 on unknown-block(9,13) (8 sectors)
<7>[289209.882246] jbd2/md13-8(3951): WRITE block 532376 on unknown-block(9,13) (8 sectors)
<7>[289210.278127] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289210.278163] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289210.315057] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289210.315083] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289204.019597] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289204.244141] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289209.824166] jbd2/md13-8(3951): WRITE block 532328 on unknown-block(9,13) (8 sectors)
<7>[289209.828114] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289209.828116] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289209.828150] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289209.865124] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289209.865148] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289209.881802] jbd2/md13-8(3951): WRITE block 532336 on unknown-block(9,13) (8 sectors)
<7>[289209.881815] jbd2/md13-8(3951): WRITE block 532344 on unknown-block(9,13) (8 sectors)
<7>[289209.881824] jbd2/md13-8(3951): WRITE block 532352 on unknown-block(9,13) (8 sectors)
<7>[289209.881833] jbd2/md13-8(3951): WRITE block 532360 on unknown-block(9,13) (8 sectors)
<7>[289209.881842] jbd2/md13-8(3951): WRITE block 532368 on unknown-block(9,13) (8 sectors)
<7>[289209.882246] jbd2/md13-8(3951): WRITE block 532376 on unknown-block(9,13) (8 sectors)
<7>[289210.078119] md1_raid1(2491): WRITE block 1980493176 on unknown-block(8,0) (1 sectors)
<7>[289210.278127] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)
<7>[289210.278163] md13_raid1(2265): WRITE block 1060256 on unknown-block(8,16) (1 sectors)
<7>[289210.315057] md13_raid1(2265): WRITE block 1060264 on unknown-block(8,0) (1 sectors)
<7>[289210.315083] md13_raid1(2265): WRITE block 1060272 on unknown-block(8,16) (1 sectors)
<7>[289203.988400] smbstatus(9616): dirtied inode 197656635 (9616) on dm-12
<7>[289203.988616] smbstatus(9616): dirtied inode 197656588 (names.tdb) on dm-12
<7>[289203.988628] smbstatus(9616): dirtied inode 197656588 (names.tdb) on dm-12
<7>[289204.011694] smbstatus(9616): dirtied inode 197656602 (smbXsrv_session_global.tdb) on dm-12
<7>[289204.011708] smbstatus(9616): dirtied inode 197656602 (smbXsrv_session_global.tdb) on dm-12
<7>[289204.012063] smbstatus(9616): dirtied inode 197656603 (smbXsrv_tcon_global.tdb) on dm-12
<7>[289204.012073] smbstatus(9616): dirtied inode 197656603 (smbXsrv_tcon_global.tdb) on dm-12
<7>[289209.824174] jbd2/dm-12-8(3416): WRITE block 882512448 on unknown-block(253,12) (8 sectors)
<7>[289209.830181] jbd2/dm-12-8(3416): WRITE block 882512456 on unknown-block(253,12) (8 sectors)
<7>[289209.830191] jbd2/dm-12-8(3416): WRITE block 882512464 on unknown-block(253,12) (8 sectors)
<7>[289209.830200] jbd2/dm-12-8(3416): WRITE block 882512472 on unknown-block(253,12) (8 sectors)
<7>[289209.830207] jbd2/dm-12-8(3416): WRITE block 882512480 on unknown-block(253,12) (8 sectors)
<7>[289209.830215] jbd2/dm-12-8(3416): WRITE block 882512488 on unknown-block(253,12) (8 sectors)
<7>[289209.830223] jbd2/dm-12-8(3416): WRITE block 882512496 on unknown-block(253,12) (8 sectors)
<7>[289209.830231] jbd2/dm-12-8(3416): WRITE block 882512504 on unknown-block(253,12) (8 sectors)
<7>[289209.830238] jbd2/dm-12-8(3416): WRITE block 882512512 on unknown-block(253,12) (8 sectors)
<7>[289209.830246] jbd2/dm-12-8(3416): WRITE block 882512520 on unknown-block(253,12) (8 sectors)
<7>[289209.831072] jbd2/dm-12-8(3416): WRITE block 882512528 on unknown-block(253,12) (8 sectors)

Turn off block_dump
Start klogd.sh daemon


End of standby test!


Regards

dolbyman
Guru
Posts: 20012
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Help with understanding the log from blkdevMonitor_v2

Post by dolbyman » Sun Dec 29, 2019 5:39 am

Part of the system is on each drive in the NAS, so there is no individual disk standby either.

Make sure the system is always on RAID 1 or higher (no single disks).
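If you want to check whether a given drive has actually spun down during such a test, hdparm can report the power state (assuming hdparm is present on your firmware and the drives are sda/sdb as below):

Code: Select all

# Report the current power state of each drive
# (prints "active/idle" when spun up, "standby" when spun down)
hdparm -C /dev/sda
hdparm -C /dev/sdb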

Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Re: Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Sun Dec 29, 2019 6:49 am

Why would the system be on both disks? They are both single disks, and the HDD was added after the system was installed. That makes no sense to me.

dolbyman
Guru
Posts: 20012
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Help with understanding the log from blkdevMonitor_v2

Post by dolbyman » Sun Dec 29, 2019 7:57 am

That would be a question for QNAP...

But if you do a "cat /proc/mdstat" via SSH, you will see the hidden system volumes.

P3R
Guru
Posts: 12338
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: Help with understanding the log from blkdevMonitor_v2

Post by P3R » Sun Dec 29, 2019 9:27 am

Ephreal wrote:
Sun Dec 29, 2019 6:49 am
Why would the system be on both disks?
Because redundant disks are much, much better for system reliability than single disks.
They are both single disks, and the HDD was added after the system was installed.
As soon as additional disks are inserted into a QNAP, those disks are repartitioned with small hidden system volumes in RAID 1 across all disks, regardless of how you then choose to configure the visible user data partitions on those same disks.
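If you want to see what that repartitioning looks like on your own unit, you can inspect the disk over SSH. A possible check, assuming the data drive really is /dev/sdb and that parted and mdadm are available on the firmware:

Code: Select all

# List the partitions QNAP created on the second drive
parted /dev/sdb print

# Inspect the RAID metadata of the small system partitions
# (on a typical QTS layout, sdb1 and sdb4 belong to the hidden md9/md13 arrays)
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdb4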
RAID has never been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data!

A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on separate media protects your data far better than any RAID volume without backup.

All data storage consists of both the primary storage and the backups. It's your money and your data; spend the storage budget wisely or pay with your data!

Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Re: Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Sun Dec 29, 2019 6:23 pm

P3R wrote:
Sun Dec 29, 2019 9:27 am
Because redundant disks are much, much better for system reliability than single disks.
What redundancy? If I pull out my SSD, all my apps are gone. And I guess if the internal flash fails, the QNAP will never boot.
P3R wrote:
Sun Dec 29, 2019 9:27 am
As soon as additional disks are inserted into a Qnap those disks will be repartioned with small hidden system volumes in RAID 1 across all disks regardless of how you then choose to configure the visible user data partitions on those same disks.
This sounds like a stupid approach if the disks are not in the same RAID group and are standalone, since unnecessary usage of a mechanical disk will wear it out eventually.

So can I disable this bug? It serves me no purpose at all, and I really like to be in control of what is stored on my HDDs.

Also, on another note, the HDD was already partitioned when I inserted it. Does that mean it resized my partition? Sounds like a potentially dangerous process for data loss.

Regards

P3R
Guru
Posts: 12338
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: Help with understanding the log from blkdevMonitor_v2

Post by P3R » Sun Dec 29, 2019 7:22 pm

Ephreal wrote:
Sun Dec 29, 2019 6:23 pm
What redundancy? If I pull out my SSD, all my apps are gone.
Yes. That's because user-installable apps are installed on the visible partitions, which you chose not to put in a redundant configuration.

The QNAP system design is that there are other, hidden system partitions that run RAID 1 across all disks. dolbyman already explained how you can see those partitions if you want.
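For reference, the arrays themselves can also be queried directly. A sketch, assuming md9 and md13 are the hidden system arrays (they are on a typical QTS install, and they show up in the /proc/mdstat output posted later in this thread):

Code: Select all

# Show which physical partitions are members of the hidden system arrays
mdadm --detail /dev/md9
mdadm --detail /dev/md13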
Since unnecessary usage of a mechanical disk will wear it out eventually.
Mechanical disks are designed to handle usage, but of course everything wears out eventually. SSDs wear out as well; cheap SSDs maybe even quicker than rotating disks.
So can I disable this bug?
  1. It's not a bug, it's by design.
  2. No, you can't change the system design.
  3. If you wanted a general computer with absolute configuration freedom, you shouldn't have bought an appliance-like NAS.
  4. If you want to achieve more configuration freedom on the current hardware, you need to install another operating system (Debian seems to be the most popular) and support it yourself. If you want to go that route, there is a dedicated area in the forum for user discussions about it here.

Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Re: Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Sun Dec 29, 2019 10:56 pm

P3R wrote:
Sun Dec 29, 2019 7:22 pm
If you wanted a general computer with absolute configuration freedom, you shouldn't have bought an appliance-like NAS.
I can see that now.
P3R wrote:
Sun Dec 29, 2019 7:22 pm
If you want to achieve more configuration freedom on the current hardware, you need to install another operating system (Debian seems to be the most popular) and support it yourself. If you want to go that route, there is a dedicated area in the forum for user discussions about it here.
I will look into this, thank you.

Anyway, back to topic:

What do
<7>[289173.880285] smbstatus(5363): dirtied inode 197656603 (smbXsrv_tcon_global.tdb) on dm-12
and
<7>[289180.132207] md13_raid1(2265): WRITE block 1060248 on unknown-block(8,0) (1 sectors)

mean, and are they errors?

Regards

Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Re: Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Sun Dec 29, 2019 10:59 pm

dolbyman wrote:
Sun Dec 29, 2019 7:57 am
that would be a question for qnap..

but if you do a "cat /proc/mdstat" via ssh you will see the hidden system volumes
I get the following list.
Though I can't seem to find which entry is my 2nd drive.

Code: Select all

[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid1 sdb3[0]
      7804071616 blocks super 1.0 [1/1] [U]

md1 : active raid1 sda3[0]
      891221504 blocks super 1.0 [1/1] [U]

md322 : active raid1 sdb5[0]
      7235136 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md256 : active raid1 sdb2[0]
      530112 blocks super 1.0 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md321 : active raid1 sda5[0]
      8283712 blocks super 1.0 [2/1] [U_]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[0] sdb4[32]
      458880 blocks super 1.0 [32/2] [UU______________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[0] sdb1[32]
      530048 blocks super 1.0 [32/2] [UU______________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

dolbyman
Guru
Posts: 20012
Joined: Sat Feb 12, 2011 2:11 am
Location: Vancouver BC , Canada

Re: Help with understanding the log from blkdevMonitor_v2

Post by dolbyman » Sun Dec 29, 2019 11:42 pm

As you can see, md13 and md9 have two members.

And in the above monitor log, md13 is written to often, so both disks wake up for that.
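As a side note on reading the log: the unknown-block(major,minor) pairs map directly to devices. Major 8 is the SCSI/SATA disk driver, so (8,0) is sda and (8,16) is sdb; major 9 is md, so (9,13) is md13; and (253,12) corresponds to dm-12, the volume the smbstatus lines mention. You can confirm the numbers on the NAS itself:

Code: Select all

# ls -l on block devices prints the major and minor numbers where the file size would normally be,
# e.g. "8, 0" for /dev/sda (= unknown-block(8,0)) and "9, 13" for /dev/md13 (= unknown-block(9,13)).
# The dm-12 node may live under /dev/mapper on QTS.
ls -l /dev/sda /dev/sdb /dev/md13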

Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Re: Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Mon Dec 30, 2019 12:13 am

But they are unknown blocks.

Also, is it possible to figure out what it is that's being written, so I could perhaps stop it if it's a service?

P3R
Guru
Posts: 12338
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: Help with understanding the log from blkdevMonitor_v2

Post by P3R » Mon Dec 30, 2019 12:16 am

Ephreal wrote:
Sun Dec 29, 2019 10:59 pm
Though I can't seem to find which entry is my 2nd drive.
I'm not sure which drive you define as the second, but from that output I would say it's sdb.

Ephreal
Starting out
Posts: 25
Joined: Fri Jun 09, 2017 11:53 pm

Re: Help with understanding the log from blkdevMonitor_v2

Post by Ephreal » Mon Dec 30, 2019 12:37 am

P3R wrote:
Mon Dec 30, 2019 12:16 am
Ephreal wrote:
Sun Dec 29, 2019 10:59 pm
Though I can't seem to find which entry is my 2nd drive.
I'm not sure which drive you define as the second, but from that output I would say it's sdb.
I think that is correct; I know my HDD is sdb2 for the visible partition.
Is it possible to delete the hidden partition on my 2nd HDD?

P3R
Guru
Posts: 12338
Joined: Sat Dec 29, 2007 1:39 am
Location: Stockholm, Sweden (UTC+01:00)

Re: Help with understanding the log from blkdevMonitor_v2

Post by P3R » Mon Dec 30, 2019 12:55 am

Ephreal wrote:
Mon Dec 30, 2019 12:37 am
Is it possible to delete the hidden partition on my 2nd HDD?
Anything is possible, but I'd say that it isn't if you expect a working system.

I already told you that you have to accept the basic QNAP system design or switch to another operating system.
