{"id":312,"date":"2018-01-23T22:23:43","date_gmt":"2018-01-24T03:23:43","guid":{"rendered":"https:\/\/www.unliterate.net\/?p=312"},"modified":"2018-01-25T10:45:57","modified_gmt":"2018-01-25T15:45:57","slug":"home-linux-file-server-with-software-raid-and-iscsi-23-10","status":"publish","type":"post","link":"https:\/\/www.unliterate.net\/index.php\/2018\/01\/23\/home-linux-file-server-with-software-raid-and-iscsi-23-10\/","title":{"rendered":"Home Linux File Server with Software RAID and iSCSI (2+3\/10)"},"content":{"rendered":"<style>pre { background-color: rgb(234, 234, 234); }<\/style>\n<p><a href=\"https:\/\/www.unliterate.net\/index.php\/2018\/01\/23\/home-linux-file-server-with-software-raid-and-iscsi-1-10\/\">In my previous post<\/a> I listed 10 steps to acquiring what I needed to do to feel comfortable in making a Linux File Server. This is Challenge 2 and 3 of 10: <strong>Breaking the Raid<\/strong>, and <strong>Add the Spare and Rebuild<\/strong>.<\/p>\n<p><!--more--><\/p>\n<p><strong>Breaking the RAID<\/strong><\/p>\n<p>In Virtuabox this is quite easy. I&#8217;ll shut down the VM and &#8220;detach&#8221; the third volume as if it is dead on boot. 
Before I do this, I&#8217;ll confirm that the array is healthy and that I can see its state.<\/p>\n<pre># mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 3\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Tue Jan 23 21:43:59 2018\r\n          State : clean\r\n Active Devices : 3\r\nWorking Devices : 3\r\n Failed Devices : 0\r\n  Spare Devices : 0\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 20\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       1       8       33        1      active sync   \/dev\/sdc1\r\n       3       8       49        2      active sync   \/dev\/sdd1\r\n<\/pre>\n<p>So we know we can see what we&#8217;ve got. Let&#8217;s break this! 
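<\/p>\n<p>(For a quicker glance than the full detail output, \/proc\/mdstat summarises every array on the box; on a healthy three-disk RAID5 it should show something like [3\/3] [UUU]:)<\/p>\n<pre># cat \/proc\/mdstat\r\nPersonalities : [raid6] [raid5] [raid4]\r\nmd0 : active raid5 sdd1[3] sdc1[1] sdb1[0]\r\n      520192 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3\/3] [UUU]\r\n<\/pre>\n<p>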
I shut down the VM, &#8220;remove&#8221; the second disk in the RAID, and boot it back up.<\/p>\n<pre># mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 2\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Tue Jan 23 21:56:28 2018\r\n          State : clean, degraded\r\n Active Devices : 2\r\nWorking Devices : 2\r\n Failed Devices : 0\r\n  Spare Devices : 0\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 22\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       2       0        0        2      removed\r\n       3       8       33        2      active sync   \/dev\/sdc1\r\n<\/pre>\n<p><strong>Adding and Rebuilding<\/strong><\/p>\n<p>Now I happily have a degraded RAID5. I can still access the data on \/data, which is the main intention. 
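<\/p>\n<p>(On a real server nobody runs --detail by hand every day, so you&#8217;d want to be told when an array degrades. mdadm has a monitor mode that can email on failure events; something along these lines, assuming local mail delivery works on the box:)<\/p>\n<pre># mdadm --monitor --scan --daemonise --mail root@localhost\r\n<\/pre>\n<p>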
Let&#8217;s see if we can add our \/dev\/sdd1 (the extra &#8220;drive&#8221;) back and rebuild our array.<\/p>\n<pre># mdadm --add \/dev\/md0 \/dev\/sdd1\r\nmdadm: added \/dev\/sdd1\r\n# mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 3\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Tue Jan 23 22:12:20 2018\r\n          State : clean\r\n Active Devices : 3\r\nWorking Devices : 3\r\n Failed Devices : 0\r\n  Spare Devices : 0\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 41\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       4       8       49        1      active sync   \/dev\/sdd1\r\n       3       8       33        2      active sync   \/dev\/sdc1\r\n<\/pre>\n<p><strong>RMA&#8217;d Drive, Adding a Hot Spare<\/strong><\/p>\n<p>Now I&#8217;m going to re-attach the previously removed drive. This should not break the array; it&#8217;s the same as adding a new drive to the system on the same controller. 
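<\/p>\n<p>(On volumes this small the resync should finish almost instantly, but on real multi-terabyte disks a rebuild can take many hours. Progress shows up in \/proc\/mdstat, and mdadm can also just block until the recovery completes:)<\/p>\n<pre># watch -n5 cat \/proc\/mdstat\r\n# mdadm --misc --wait \/dev\/md0\r\n<\/pre>\n<p>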
This is the scenario where the originally dead drive has come back from an RMA, been put back into the system, and is now going to become a hot spare.<\/p>\n<pre># mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 3\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Tue Jan 23 22:18:28 2018\r\n          State : clean\r\n Active Devices : 3\r\nWorking Devices : 3\r\n Failed Devices : 0\r\n  Spare Devices : 0\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 41\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       4       8       65        1      active sync   \/dev\/sde1\r\n       3       8       49        2      active sync   \/dev\/sdd1\r\n<\/pre>\n<p>Since it&#8217;s a fresh drive I&#8217;ll need to partition it before adding it to the array.<\/p>\n<pre># fdisk \/dev\/sdc\r\n... 
c u n p 1 enter enter t fd p w ...\r\n(c = switch off DOS-compatible mode, u = show units in sectors,\r\n n p 1 = new primary partition number 1,\r\n enter enter = accept the default first and last sectors,\r\n t fd = set the partition type to fd, Linux raid autodetect,\r\n p = print the table to verify, w = write and quit)\r\n# mdadm --add \/dev\/md0 \/dev\/sdc1\r\nmdadm: added \/dev\/sdc1\r\n# mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 4\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Tue Jan 23 22:22:07 2018\r\n          State : clean\r\n Active Devices : 3\r\nWorking Devices : 4\r\n Failed Devices : 0\r\n  Spare Devices : 1\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 42\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       4       8       65        1      active sync   \/dev\/sde1\r\n       3       8       49        2      active sync   \/dev\/sdd1\r\n\r\n       5       8       33        -      spare   \/dev\/sdc1\r\n<\/pre>\n<p>Seems this was easy enough \ud83d\ude42<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In my previous post I listed the 10 steps I needed to work through to feel comfortable building a Linux File Server. 
These are Challenges 2 and 3 of 10: Breaking the RAID, and Add the Spare and Rebuild.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[19,20,17],"tags":[],"class_list":["post-312","post","type-post","status-publish","format-standard","hentry","category-centos","category-geek-instructions","category-linux"],"_links":{"self":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts\/312","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/comments?post=312"}],"version-history":[{"count":5,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts\/312\/revisions"}],"predecessor-version":[{"id":329,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts\/312\/revisions\/329"}],"wp:attachment":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/media?parent=312"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/categories?post=312"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/tags?post=312"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}