Following a file system check, where a RAID volume is not remounted automatically after a reboot, is it necessary to run the file system check multiple times?
A message such as ‘file system not clean’ or ‘system not shut down correctly’ may appear in the event log. This often follows a power outage or surge, and it is not uncommon for one or more of the RAID volumes to fail to mount.
When a NAS is shut down unexpectedly, it should automatically complete a file system check on the next boot. However, if this does not resolve the problem, it will be necessary to run a diagnostic assessment manually. The QNAP NAS boxes run a custom Linux setup based on Ubuntu.
Firstly, run mdadm to assess the status of the RAID (a sample command is shown after this list). This will report the following:
- Version
- Creation Time
- Raid Level
- Array Size
- Used Dev Size
- Raid Devices
- Total Devices
- Preferred Minor
- Persistence
- Update Time
- State
- Active Devices
- Working Devices
- Failed Devices
- Spare Devices
- Layout
- Chunk Size
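As a sketch, this information can be read with mdadm’s --detail option; the device name /dev/md0 is only an example and may differ on a given unit:
# cat /proc/mdstat
# mdadm --detail /dev/md0
The first command gives a quick overview of all arrays and their member disks; the second prints the full report, including the fields listed above.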
Secondly, check the file system for errors with # e2fsck_64 -fp /dev/md0. Assuming the volumes still exist (confirm with the df -h command), run a manual file system check and repair. This may take a long time and it is important not to rush the process. Once it has completed, attempt to remount the device with # mount -t ext4 /dev/md1 /share/MD1_DATA. If this fails, a RAID reconstruction is the next option.
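A minimal sketch of the full sequence, assuming the device and share names used above (/dev/md0, /dev/md1 and /share/MD1_DATA will vary between units and firmware versions), and with the volume left unmounted while the check runs:
# df -h
# e2fsck_64 -fp /dev/md0
# mount -t ext4 /dev/md1 /share/MD1_DATA
Here df -h confirms which volumes are currently mounted, e2fsck_64 -fp forces a check and automatically repairs anything that can be fixed safely, and the mount command attempts to bring the data share back online.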
Further reading
Drobo and BeyondRAID data recovery
Data backup versus RAID storage
VMware data recovery