Sorgenfri — Posted March 13, 2016 (author)

Hi! I have a bit of a puzzle: a RAID volume has been unmounted on our dear NAS. It happened during a copy operation, so there are some files it would be rather inconvenient to lose. The NAS otherwise works fine... It is set up as RAID 6 with eight disks, where the eighth is a hot spare. What I'm wondering is whether I can run the commands below against all eight disks, or only against the seven disks that are actually part of the array. I'm also wondering whether I should run e2fsck_64 or plain e2fsck.

swapoff /dev/md8
mdadm -S /dev/md8
mkswap /dev/sda2
mkswap /dev/sdb2
mkswap /dev/sdc2
mkswap /dev/sdd2
mkswap /dev/sde2
mkswap /dev/sdf2
mkswap /dev/sdg2
mkswap /dev/sdh2
swapon /dev/sda2
swapon /dev/sdb2
swapon /dev/sdc2
swapon /dev/sdd2
swapon /dev/sde2
swapon /dev/sdf2
swapon /dev/sdg2
swapon /dev/sdh2
e2fsck_64 -f /dev/md0
mount /dev/md0 /share/MD0_DATA/ -t ext4
reboot

Following the solution here: http://forum.qnap.com/viewtopic.php?p=224731

I have attached the log, which I believe hints that I'm on the right track. This is mostly Greek to me, so if anyone could give a quick "go" on this, I (and my colleagues) would be eternally grateful. If more information about the system is needed, I'm happy to provide it!
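For what it's worth, a way to sanity-check this before running anything: the per-disk mkswap/swapon lines only touch the sdX2 swap partitions, while the filesystem check itself runs once against the whole array device /dev/md0, not against individual disks. A minimal sketch (the dry-run step and the "which" check are my own additions, not from the QNAP thread):

# Confirm which member partitions md0 is actually built from
cat /proc/mdstat
mdadm --detail /dev/md0

# Check whether the 64-bit fsck binary exists on this firmware at all
which e2fsck_64 || which e2fsck

# Read-only dry run first: -n answers "no" to every repair question,
# so nothing on the volume is modified yet
e2fsck_64 -n -f /dev/md0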
Sorgenfri — Posted March 13, 2016 (author)

Here is the log in case it didn't come through:

mdadm --detail /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Fri Aug 5 11:29:51 2011
     Raid Level : raid6
     Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 7
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Mar 13 14:16:23 2016
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

     Chunk Size : 64K

           Name : 0
           UUID : a0409715:3c6ac278:6b5cd07e:40c60991
         Events : 13456938

    Number   Major   Minor   RaidDevice   State
       0       8        3        0        active sync   /dev/sda3
       1       8       19        1        active sync   /dev/sdb3
       2       8       35        2        active sync   /dev/sdc3
       3       8       51        3        active sync   /dev/sdd3
       4       8       67        4        active sync   /dev/sde3
       5       8       83        5        active sync   /dev/sdf3
       6       8       99        6        active sync   /dev/sdg3
       7       8      115        -        spare         /dev/sdh3

[~] # mount /dev/md0 /share/MD0_DATA/
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

[~] # dmesg
[ 123.164657] ufsd: driver (lke_9.2.0 QNAP, build_host("BuildServer37"), acl, ioctl, bdi, sd2(0), fua, bz, rsrc) loaded at ffffffffa0194000
[ 123.164662] NTFS support included
[ 123.164665] Hfs+/HfsJ support included
[ 123.164667] optimized: speed
[ 123.164668] Build_for__QNAP_Atom_x86_64_k3.4.6_2014-09-17_lke_9.2.0_r245986_b9
[ 123.222145] fnotify: Load file notify kernel module.
[ 124.296462] usbcore: registered new interface driver snd-usb-audio
[ 124.305527] usbcore: registered new interface driver snd-usb-caiaq
[ 124.317161] Linux video capture interface: v2.00
[ 124.340393] usbcore: registered new interface driver uvcvideo
[ 124.346324] USB Video Class driver (1.1.1)
[ 124.502088] 8021q: 802.1Q VLAN Support v1.8
[ 125.910956] 8021q: adding VLAN 0 to HW filter on device eth0
[ 125.997994] 8021q: adding VLAN 0 to HW filter on device eth1
[ 127.564055] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
[ 127.568458] e1000e 0000:03:00.0: eth0: 10/100 speed: disabling TSO
[ 128.889040] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 137.467001] kjournald starting. Commit interval 5 seconds
[ 137.474246] EXT3-fs (md9): using internal journal
[ 137.478213] EXT3-fs (md9): mounted filesystem with ordered data mode
[ 142.024063] md: bind<sda2>
[ 142.029409] md/raid1:md8: active with 1 out of 1 mirrors
[ 142.033390] md8: detected capacity change from 0 to 542851072
[ 143.042915] md8: unknown partition table
[ 145.088406] Adding 530124k swap on /dev/md8. Priority:-1 extents:1 across:530124k
[ 149.298039] md: bind<sdb2>
[ 149.329519] RAID1 conf printout:
[ 149.329529]  --- wd:1 rd:2
[ 149.329537]  disk 0, wo:0, o:1, dev:sda2
[ 149.329544]  disk 1, wo:1, o:1, dev:sdb2
[ 149.329669] md: recovery of RAID array md8
[ 149.333677] md: minimum _guaranteed_ speed: 5000 KB/sec/disk.
[ 149.337808] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 149.341995] md: using 128k window, over a total of 530128k.
[ 151.447496] md: bind<sdc2>
[ 153.618603] md: bind<sdd2>
[ 155.795127] md: bind<sde2>
[ 158.035808] md: bind<sdf2>
[ 159.365966] md: md8: recovery done.
[ 159.421233] RAID1 conf printout:
[ 159.421244]  --- wd:2 rd:2
[ 159.421252]  disk 0, wo:0, o:1, dev:sda2
[ 159.421259]  disk 1, wo:0, o:1, dev:sdb2
[ 159.434063] RAID1 conf printout:
[ 159.434072]  --- wd:2 rd:2
[ 159.434078]  disk 0, wo:0, o:1, dev:sda2
[ 159.434083]  disk 1, wo:0, o:1, dev:sdb2
[ 159.434087] RAID1 conf printout:
[ 159.434091]  --- wd:2 rd:2
[ 159.434095]  disk 0, wo:0, o:1, dev:sda2
[ 159.434100]  disk 1, wo:0, o:1, dev:sdb2
[ 159.434104] RAID1 conf printout:
[ 159.434107]  --- wd:2 rd:2
[ 159.434112]  disk 0, wo:0, o:1, dev:sda2
[ 159.434117]  disk 1, wo:0, o:1, dev:sdb2
[ 159.434120] RAID1 conf printout:
[ 159.434124]  --- wd:2 rd:2
[ 159.434128]  disk 0, wo:0, o:1, dev:sda2
[ 159.434133]  disk 1, wo:0, o:1, dev:sdb2
[ 160.155538] md: bind<sdg2>
[ 160.182950] RAID1 conf printout:
[ 160.182957]  --- wd:2 rd:2
[ 160.182963]  disk 0, wo:0, o:1, dev:sda2
[ 160.182968]  disk 1, wo:0, o:1, dev:sdb2
[ 160.182972] RAID1 conf printout:
[ 160.182976]  --- wd:2 rd:2
[ 160.182980]  disk 0, wo:0, o:1, dev:sda2
[ 160.182985]  disk 1, wo:0, o:1, dev:sdb2
[ 160.182989] RAID1 conf printout:
[ 160.182993]  --- wd:2 rd:2
[ 160.182998]  disk 0, wo:0, o:1, dev:sda2
[ 160.183019]  disk 1, wo:0, o:1, dev:sdb2
[ 160.183023] RAID1 conf printout:
[ 160.183026]  --- wd:2 rd:2
[ 160.183031]  disk 0, wo:0, o:1, dev:sda2
[ 160.183036]  disk 1, wo:0, o:1, dev:sdb2
[ 160.183040] RAID1 conf printout:
[ 160.183043]  --- wd:2 rd:2
[ 160.183048]  disk 0, wo:0, o:1, dev:sda2
[ 160.183053]  disk 1, wo:0, o:1, dev:sdb2
[ 162.146493] md: md0 stopped.
[ 162.162975] md: md0 stopped.
[ 162.232608] md: bind<sdh2>
[ 162.269275] RAID1 conf printout:
[ 162.269285]  --- wd:2 rd:2
[ 162.269292]  disk 0, wo:0, o:1, dev:sda2
[ 162.269300]  disk 1, wo:0, o:1, dev:sdb2
[ 162.269305] RAID1 conf printout:
[ 162.269310]  --- wd:2 rd:2
[ 162.269316]  disk 0, wo:0, o:1, dev:sda2
[ 162.269323]  disk 1, wo:0, o:1, dev:sdb2
[ 162.269328] RAID1 conf printout:
[ 162.269332]  --- wd:2 rd:2
[ 162.269338]  disk 0, wo:0, o:1, dev:sda2
[ 162.269344]  disk 1, wo:0, o:1, dev:sdb2
[ 162.269349] RAID1 conf printout:
[ 162.269354]  --- wd:2 rd:2
[ 162.269360]  disk 0, wo:0, o:1, dev:sda2
[ 162.269367]  disk 1, wo:0, o:1, dev:sdb2
[ 162.269372] RAID1 conf printout:
[ 162.269376]  --- wd:2 rd:2
[ 162.269382]  disk 0, wo:0, o:1, dev:sda2
[ 162.269389]  disk 1, wo:0, o:1, dev:sdb2
[ 162.269394] RAID1 conf printout:
[ 162.269398]  --- wd:2 rd:2
[ 162.269407]  disk 0, wo:0, o:1, dev:sda2
[ 162.269413]  disk 1, wo:0, o:1, dev:sdb2
[ 162.335299] md: bind<sdb3>
[ 162.339123] md: bind<sdc3>
[ 162.342855] md: bind<sdd3>
[ 162.346507] md: bind<sde3>
[ 162.350055] md: bind<sdf3>
[ 162.353697] md: bind<sdg3>
[ 162.357088] md: bind<sdh3>
[ 162.360370] md: bind<sda3>
[ 162.364535] md/raid:md0: device sda3 operational as raid disk 0
[ 162.367444] md/raid:md0: device sdg3 operational as raid disk 6
[ 162.370302] md/raid:md0: device sdf3 operational as raid disk 5
[ 162.373132] md/raid:md0: device sde3 operational as raid disk 4
[ 162.375868] md/raid:md0: device sdd3 operational as raid disk 3
[ 162.378520] md/raid:md0: device sdc3 operational as raid disk 2
[ 162.381119] md/raid:md0: device sdb3 operational as raid disk 1
[ 162.402093] md/raid:md0: allocated 119488kB
[ 162.404744] md/raid:md0: raid level 6 active with 7 out of 7 devices, algorithm 2
[ 162.407341] RAID conf printout:
[ 162.407346]  --- level:6 rd:7 wd:7
[ 162.407352]  disk 0, o:1, dev:sda3
[ 162.407357]  disk 1, o:1, dev:sdb3
[ 162.407363]  disk 2, o:1, dev:sdc3
[ 162.407367]  disk 3, o:1, dev:sdd3
[ 162.407372]  disk 4, o:1, dev:sde3
[ 162.407377]  disk 5, o:1, dev:sdf3
[ 162.407382]  disk 6, o:1, dev:sdg3
[ 162.407461] md0: detected capacity change from 0 to 14994931384320
[ 162.410309] RAID conf printout:
[ 162.410317]  --- level:6 rd:7 wd:7
[ 162.410323]  disk 0, o:1, dev:sda3
[ 162.410329]  disk 1, o:1, dev:sdb3
[ 162.410336]  disk 2, o:1, dev:sdc3
[ 162.410343]  disk 3, o:1, dev:sdd3
[ 162.410349]  disk 4, o:1, dev:sde3
[ 162.410356]  disk 5, o:1, dev:sdf3
[ 162.410362]  disk 6, o:1, dev:sdg3
[ 163.584402] md0: unknown partition table
[ 165.688169] EXT4-fs (md0): Mount option "noacl" will be removed by 3.5
[ 165.688174] Contact [email protected] if you think we should keep it.
[ 165.855566] EXT4-fs (md0): ext4_check_descriptors: Checksum for group 6400 failed (9196!=65175)
[ 165.858397] EXT4-fs (md0): group descriptors corrupted!
[ 192.074049] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
[ 192.076980] e1000e 0000:03:00.0: eth0: 10/100 speed: disabling TSO
[ 194.190062] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[ 216.859851] warning: `proftpd' uses 32-bit capabilities (legacy support in use)
[ 237.898791] rule type=2, num=0
[ 238.025212] Loading iSCSI transport class v2.0-871.
[ 238.057555] iscsi: registered transport (tcp)
[ 238.079615] iscsid (8057): /proc/8057/oom_adj is deprecated, please use /proc/8057/oom_score_adj instead.
[ 305.033435] sysRequest.cgi[12329]: segfault at 18 ip 00000000f6f77c2e sp 00000000fff90a04 error 6 in libc-2.6.1.so[f6eee000+12d000]
[ 513.205896] sysRequest.cgi[14629]: segfault at 18 ip 00000000f7007c2e sp 00000000ffa894c4 error 6 in libc-2.6.1.so[f6f7e000+12d000]
[ 1169.749862] EXT3-fs (md0): error: couldn't mount because of unsupported optional features (240)
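The array itself assembles cleanly (raid level 6 active with 7 out of 7 devices); the lines that point at the actual problem are the ext4_check_descriptors checksum failure and "group descriptors corrupted!", which is filesystem metadata damage rather than a RAID fault. A rough sketch of how one might locate a backup superblock to repair from, assuming the block size is 4k (that assumption must match the original filesystem, and nothing below should be run before a backup):

# -n means "don't actually create anything": mke2fs only reports the layout,
# including where the backup superblocks for this geometry would sit
mke2fs -n -b 4096 /dev/md0

# If the primary superblock/group descriptors are bad, e2fsck can be pointed
# at one of the reported backups, e.g. 32768 (example value, not verified)
e2fsck_64 -f -b 32768 /dev/md0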
AT1S — Posted April 8, 2016

You could try reading a bit here: http://qnapsupport.net/raid-system-errors-how-to-fix/
Sorgenfri — Posted April 8, 2016 (author)

Thanks! I contacted QNAP and they connected remotely and restored read access. Once I had taken a backup, they fixed it in a second remote session. Outstanding customer support, if you ask me!
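For anyone landing here later, the order QNAP support used (read access first, then backup, then repair) can be approximated from the command line. A minimal sketch under the assumptions from this thread; the backup destination path is made up and would have to point at a separate disk or share, and if the read-only mount still trips over the corrupted group descriptors, that is the step where QNAP's remote fix came in:

# Reassemble the RAID 6 array from its seven member partitions
mdadm --assemble --run /dev/md0 /dev/sd[abcdefg]3

# Mount read-only so no repair is attempted while copying data off
mount -t ext4 -o ro /dev/md0 /share/MD0_DATA/

# Copy everything irreplaceable to a separate location before any e2fsck run
rsync -a /share/MD0_DATA/ /share/external_backup/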
AT1S — Posted April 8, 2016

(Quoting Sorgenfri: "Thanks! I contacted QNAP and they connected remotely and restored read access. Once I had taken a backup, they fixed it in a second remote session. Outstanding customer support, if you ask me!")

Great!