
BSD/Unix: File server with ZFS - what should you consider?


tingo


It still shows up as online, though.

If the disk were actually dead, ZFS should have marked it as failed. But yes, you should order a new disk if you don't have one in reserve, because this one is going to fail fairly soon.

Now ad12 is getting close:

Sep 12 11:57:21 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=734799685120 size=98304 error=5
Sep 12 11:57:21 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=734799685120 size=32768 error=5
Sep 12 11:57:21 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=734799717888 size=32768 error=5
Sep 12 11:57:21 kg-f2 root: ZFS: vdev I/O failure, zpool=storage path=/dev/ad12 offset=734799750656 size=32768 error=5
Sep 12 12:20:52 kg-f2 smartd[860]: Device: /dev/ad12, FAILED SMART self-check. BACK UP DATA NOW!
Sep 12 12:20:53 kg-f2 smartd[860]: Device: /dev/ad12, 47 Currently unreadable (pending) sectors
Sep 12 12:20:53 kg-f2 smartd[860]: Device: /dev/ad12, Failed SMART usage Attribute: 1 Raw_Read_Error_Rate.
Sep 12 12:50:49 kg-f2 smartd[860]: Device: /dev/ad12, FAILED SMART self-check. BACK UP DATA NOW!
Sep 12 12:50:50 kg-f2 smartd[860]: Device: /dev/ad12, 47 Currently unreadable (pending) sectors
Sep 12 12:50:50 kg-f2 smartd[860]: Device: /dev/ad12, Failed SMART usage Attribute: 1 Raw_Read_Error_Rate.
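
For reference, smartd only logs a summary; the full SMART attribute table can be read straight off the disk with smartctl from the same smartmontools package (command only, output omitted here):

root@kg-f2# smartctl -a /dev/ad12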

Even though ZFS still says it is online:

root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub in progress for 15h1m, 90.26% done, 1h37m to go
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE   3.93K     0     0  89.0M repaired
    ad14    ONLINE       0     0     0
    ada0    ONLINE       8     0     0  97.5K repaired

errors: No known data errors

We'll see if it holds out until the scrub finishes.

  • 1 month later...

Today I upgraded the file server to FreeBSD 8.1-stable. On the ZFS side that gives zpool version 15 and zfs version 4. The upgrade went without a hitch.

tingo@kg-f2$ zpool upgrade
This system is currently running ZFS pool version 15.

All pools are formatted using this version.
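
The filesystem version can be checked in the same way with zfs upgrade (run without arguments it just reports the ZFS filesystem version the datasets use):

tingo@kg-f2$ zfs upgrade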

 

(edited because of a typo - zpool version 15, not 5)

Edited by tingo
  • 3 months later...

Today it was finally time to replace ad12 in my file server. By then the latest scrub had been running for over 142 hours (normally a scrub finishes in under 4 hours), so I stopped it:

root@kg-f2# zpool scrub -s storage
root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
   attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
   using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub stopped after 142h24m with 0 errors on Sat Feb 12 16:08:26 2011
config:

   NAME        STATE     READ WRITE CKSUM
   storage     ONLINE       0     0     0
     raidz1    ONLINE       0     0     0
       ad8     ONLINE       0     0     0
       ad10    ONLINE       0     0     0
       ad12    ONLINE       0     0    73  4.54G repaired
       ad14    ONLINE       0     0     0
       ada0    ONLINE       0     0     0

errors: No known data errors

Then I took the disk offline:

root@kg-f2# zpool offline storage ad12
root@kg-f2# zpool status storage
 pool: storage
state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
   attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
   using 'zpool clear' or replace the device with 'zpool replace'.
  see: http://www.sun.com/msg/ZFS-8000-9P
scrub: scrub stopped after 142h24m with 0 errors on Sat Feb 12 16:08:26 2011
config:

   NAME        STATE     READ WRITE CKSUM
   storage     DEGRADED     0     0     0
     raidz1    DEGRADED     0     0     0
       ad8     ONLINE       0     0     0
       ad10    ONLINE       0     0     0
       ad12    OFFLINE      0     0    73  4.54G repaired
       ad14    ONLINE       0     0     0
       ada0    ONLINE       0     0     0

errors: No known data errors

and swapped it physically. Since the disks in this pool sit in a hotplug "cage", I assumed it would show up "automagically" after the swap, but it did not:

root@kg-f2# atacontrol list
ATA channel 0:
   Master:      no device present
   Slave:       no device present
ATA channel 2:
   Master:  ad4 <SAMSUNG HD252HJ/1AC01118> SATA revision 2.x
   Slave:       no device present
ATA channel 3:
   Master:  ad6 <SAMSUNG HD252HJ/1AC01118> SATA revision 2.x
   Slave:       no device present
ATA channel 4:
   Master:  ad8 <SAMSUNG HD103SJ/1AJ100E4> SATA revision 2.x
   Slave:       no device present
ATA channel 5:
   Master: ad10 <SAMSUNG HD103SJ/1AJ100E4> SATA revision 2.x
   Slave:       no device present
ATA channel 6:
   Master:      no device present
   Slave:       no device present
ATA channel 7:
   Master: ad14 <SAMSUNG HD103SJ/1AJ100E4> SATA revision 2.x
   Slave:       no device present
root@kg-f2#

and a little gentle persuasion didn't work either:

root@kg-f2# atacontrol attach ata6
Master:      no device present
Slave:       no device present
root@kg-f2# atacontrol reinit ata6
Master:      no device present
Slave:       no device present

So it came down to a reboot of the machine. After that, ad12 was back in place.

Then it was just a matter of replacing it:

root@kg-f2# zpool replace storage ad12 ad12

In a while I'll know whether this was a success or not:

root@kg-f2# zpool status storage
 pool: storage
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h13m, 9.70% done, 2h4m to go
config:

NAME            STATE     READ WRITE CKSUM
storage         DEGRADED     0     0     0
  raidz1        DEGRADED     0     0     0
    ad8         ONLINE       0     0     0
    ad10        ONLINE       0     0     0
    replacing   DEGRADED     0     0     0
      ad12/old  OFFLINE      0     0     0
      ad12      ONLINE       0     0     0  76.0G resilvered
    ad14        ONLINE       0     0     0
    ada0        ONLINE       0     0     0

errors: No known data errors

That was that.


Yes, it went fine. :-)

root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
scrub: resilver completed after 2h29m with 0 errors on Sat Feb 12 19:10:53 2011
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE       0     0     0  783G resilvered
    ad14    ONLINE       0     0     0
    ada0    ONLINE       0     0     0

errors: No known data errors

(phew)


Ran a scrub this afternoon, just to be safe:

root@kg-f2# zpool status storage
 pool: storage
state: ONLINE
scrub: scrub completed after 2h51m with 0 errors on Tue Feb 15 20:00:19 2011
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE       0     0     0
    ad14    ONLINE       0     0     0
    ada0    ONLINE       0     0     0  1.50K repaired

errors: No known data errors

I like ZFS :-)


Borrowing this thread for a small question:

Do I need a RAID controller that is on the FreeBSD hardware list, or can I use the onboard controller on the motherboard, if I'm going to run ZFS on the server?

Edited by cdonc

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
 1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       4
 2 Throughput_Performance  0x0026   252   252   000    Old_age   Always       -       0
 3 Spin_Up_Time            0x0023   073   073   025    Pre-fail  Always       -       8209
 4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       45
 5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
 7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
 8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
 9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       5579
10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       46
191 G-Sense_Error_Rate      0x0022   252   252   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   057   053   000    Old_age   Always       -       43 (Lifetime Min/Max 24/47)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       4
198 Offline_Uncorrectable   0x0030   252   252   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       0
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       46

 

I see it's been a little while since anything happened in this thread, but I'll venture a small reply anyway.

 

The disk with the read problems had 4 sectors waiting to be remapped; that is what attribute 197 shows. I don't know how you deal with that on FreeBSD with ZFS, but I followed this guide when I had sectors that needed remapping on Linux with ext4.
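
If you just want to keep an eye on that counter, the attribute can be polled directly (a rough sketch: smartmontools assumed, and adjust the device name to your own disk):

# smartctl -A /dev/ada0 | grep -i pending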


By the way, here is the explanation for why hot-swap didn't work on my server (from the FreeBSD forum, the thread "hotswapping sata drives"):

Hot-swapping SATA disks is only possible with AHCI compliant controllers, otherwise you must power-down and cold-swap. Most of the modern onboard Intel SATA controllers are AHCI compatible, although it needs to be supported in the BIOS of the machine. Some SIS controllers also support it.

So you need ahci and/or siis, and the disks have to show up as adaX devices.

Hmm, I wonder if I dare load ahci on my server now?


It should work fine, but it may depend on whether you have root on ZFS or not(?).

It worked fine for me, with root on UFS.

 

I added the following to /boot/loader.conf to get the "new" AHCI driver to load.

ahci_load="YES"

 

Then I did a zpool export.

 

Rebooted, enabled AHCI in the BIOS, booted FreeBSD, and did a zpool import.
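
In short, roughly this sequence (from memory, so treat it as a sketch; the pool name is of course your own):

# zpool export storage
(reboot, enable AHCI in the BIOS)
# zpool import storage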

 

Looks like this for me now:

$ zpool status
 pool: storage
state: ONLINE
scrub: scrub completed after 2h7m with 0 errors on Thu Feb  3 15:07:10 2011
config:

       NAME        STATE     READ WRITE CKSUM
       storage     ONLINE       0     0     0
         raidz1    ONLINE       0     0     0
           ada1    ONLINE       0     0     0
           ada2    ONLINE       0     0     0
           ada3    ONLINE       0     0     0
           ada4    ONLINE       0     0     0

errors: No known data errors

(I'm on version 8.1 and did this quite a few months ago, so I may not remember every detail here)

 

Edit: One more thing: after the change, the first device (which in my case was the disk I booted from) gets the device name ada0, so I edited fstab to reflect that before rebooting.
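
Concretely that meant changing the boot disk's entries in /etc/fstab from the old adN name to the new adaN name, something like this (illustrative only; the actual device and partition names depend on your own layout):

before:  /dev/ad4s1a    /    ufs    rw    1    1
after:   /dev/ada0s1a   /    ufs    rw    1    1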

Edited by Sokkalf™
  • 1 month later...

Today I learned about zpool history. Example:

root@kg-f2# zpool history storage
History for 'storage':
2009-12-18.14:56:21 zpool create storage raidz ad8 ad10 ad12 ad14 ada0
2009-12-18.15:04:59 zpool scrub storage
2010-01-05.17:39:24 zpool scrub storage
2010-01-12.08:58:18 zpool scrub storage
2010-01-30.15:22:26 zpool upgrade storage
2010-01-30.15:44:07 zpool scrub storage
2010-02-09.18:23:07 zpool scrub storage
2010-02-11.19:15:48 zpool scrub storage
2010-02-17.09:33:16 zpool scrub storage
2010-02-24.01:03:29 zpool scrub storage
2010-02-26.16:04:30 zpool scrub storage
2010-03-05.09:26:14 zpool scrub storage
2010-03-05.18:36:18 zpool scrub storage
2010-03-07.12:28:31 zpool scrub storage
2010-03-12.00:42:57 zpool scrub storage
2010-03-13.13:49:50 zpool scrub storage
2010-03-13.17:35:57 zpool scrub storage
2010-03-13.20:50:55 zpool scrub storage
2010-03-14.02:15:50 zpool scrub storage
2010-03-15.22:50:05 zpool scrub storage
2010-03-18.18:53:29 zpool scrub storage
2010-03-26.19:24:41 zpool scrub storage
2010-03-30.12:18:13 zpool scrub storage
2010-04-06.18:49:54 zpool scrub storage
2010-04-11.17:24:23 zpool scrub storage
2010-04-18.19:19:39 zpool scrub storage
2010-05-03.19:25:47 zpool scrub storage
2010-05-09.14:31:08 zpool scrub storage
2010-05-16.15:33:41 zpool scrub storage
2010-05-23.11:13:41 zpool scrub storage
2010-05-28.03:33:17 zpool scrub storage
2010-05-28.13:01:29 zpool scrub storage
2010-05-30.11:14:40 zpool scrub storage
2010-06-06.12:45:46 zpool scrub storage
2010-06-12.09:39:08 zpool scrub storage
2010-06-20.20:37:18 zpool scrub storage
2010-06-28.02:15:27 zpool scrub storage
2010-07-05.08:45:09 zpool scrub storage
2010-07-11.23:45:44 zpool scrub storage
2010-07-17.10:31:27 zpool scrub storage
2010-07-18.11:13:47 zpool scrub storage
2010-07-21.10:25:01 zpool scrub storage
2010-08-04.20:15:59 zpool scrub storage
2010-08-07.20:30:32 zpool clear storage ad12
2010-08-07.20:31:16 zpool scrub storage
2010-08-14.10:47:05 zpool scrub storage
2010-08-20.07:26:20 zpool scrub storage
2010-08-26.10:32:59 zpool scrub storage
2010-08-29.13:27:31 zpool scrub storage
2010-09-02.23:37:19 zpool scrub storage
2010-09-11.22:18:13 zpool scrub storage
2010-09-15.19:40:29 zpool scrub storage
2010-10-03.13:13:36 zpool scrub storage
2010-10-25.17:31:39 zpool scrub storage
2010-10-29.12:48:49 zpool upgrade storage
2010-10-29.13:18:10 zpool scrub storage
2010-11-21.10:55:34 zpool scrub storage
2010-12-07.17:19:17 zpool scrub storage
2010-12-28.13:32:06 zpool scrub storage
2011-01-03.16:50:46 zpool scrub storage
2011-01-16.20:13:35 zpool scrub storage
2011-01-30.20:34:42 zpool scrub storage
2011-02-06.17:44:25 zpool scrub storage
2011-02-12.16:08:26 zpool scrub -s storage
2011-02-12.16:14:15 zpool offline storage ad12
2011-02-12.16:41:55 zpool replace storage ad12 ad12
2011-02-15.17:09:17 zpool scrub storage
2011-02-21.17:17:58 zpool scrub storage
2011-03-04.19:08:44 zpool scrub storage
2011-03-08.08:20:42 zpool clear storage ada0
2011-03-15.18:49:24 zpool scrub storage
2011-04-03.11:40:57 zpool scrub storage
2011-04-16.10:48:16 zpool scrub storage

Veeery nice!
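
It can show even more: -l adds which user ran each command and from which host, and -i includes the internally logged events as well (both are standard zpool history flags):

root@kg-f2# zpool history -l storage
root@kg-f2# zpool history -i storage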

  • 1 month later...

Update: my server got a panic

 

panic: kmem_malloc(131072): kmem_map too small: 1324613632 total allocated
cpuid = 1
KDB: stack backtrace:
#0 0xffffffff805df92e at kdb_backtrace+0x5e
#1 0xffffffff805ada77 at panic+0x187
#2 0xffffffff80800190 at kmem_alloc+0
#3 0xffffffff807f7e0a at uma_large_malloc+0x4a
#4 0xffffffff8059aee7 at malloc+0xd7
#5 0xffffffff80ed6763 at vdev_queue_io_to_issue+0x1c3
#6 0xffffffff80ed68e9 at vdev_queue_io_done+0x99
#7 0xffffffff80ee6c9f at zio_vdev_io_done+0x7f
#8 0xffffffff80ee7237 at zio_execute+0x77
#9 0xffffffff80e872f3 at taskq_run_safe+0x13
#10 0xffffffff805ea984 at taskqueue_run+0xa4
#11 0xffffffff805eabf6 at taskqueue_thread_loop+0x46
#12 0xffffffff80584278 at fork_exit+0x118
#13 0xffffffff8087f2fe at fork_trampoline+0xe
Uptime: 109d19h47m1s
Cannot dump. Device not defined or unavailable.
Automatic reboot in 15 seconds - press a key on the console to abort
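
(The "Cannot dump" line just means no dump device was configured; if you want a crash dump the next time this happens, FreeBSD can be told to use the swap device with one line in /etc/rc.conf, a standard knob and nothing specific to this machine:)

dumpdev="AUTO"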

 

According to knowledgeable people on the freebsd-stable mailing list, this happened because the machine ran out of kernel memory (kmem), since I hadn't tuned ZFS.

They also said that ZFS tuning is much simpler in FreeBSD 8.2-stable (the server was running 8.1-stable), so it has now been upgraded:

root@kg-f2# uname -a
FreeBSD kg-f2.kg4.no 8.2-STABLE FreeBSD 8.2-STABLE #5: Fri Jun  3 17:20:39 CEST 2011
    root@kg-f2.kg4.no:/usr/obj/usr/src/sys/GENERIC  amd64

And I've done the following ZFS tuning: one line added to /boot/loader.conf:

vfs.zfs.arc_max="2048M"
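
After the next boot the limit (and the actual ARC size) can be verified with sysctl; just a sanity check, these are the standard FreeBSD OIDs:

root@kg-f2# sysctl vfs.zfs.arc_max
root@kg-f2# sysctl kstat.zfs.misc.arcstats.size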

The server has 4 GB of memory:

root@kg-f2# sysctl hw.physmem hw.usermem hw.realmem
hw.physmem: 4141666304
hw.usermem: 4019376128
hw.realmem: 4966055936

We'll see whether it is more stable now. I was happy with 109 days of uptime, but apparently that was just luck...

Edited by tingo
