{"id":332,"date":"2018-01-25T11:47:50","date_gmt":"2018-01-25T16:47:50","guid":{"rendered":"https:\/\/www.unliterate.net\/?p=332"},"modified":"2018-01-25T11:53:54","modified_gmt":"2018-01-25T16:53:54","slug":"home-linux-file-server-with-software-raid-and-iscsi-9-10","status":"publish","type":"post","link":"https:\/\/www.unliterate.net\/index.php\/2018\/01\/25\/home-linux-file-server-with-software-raid-and-iscsi-9-10\/","title":{"rendered":"Home Linux File Server with Software RAID and iSCSI (9\/10)"},"content":{"rendered":"<p>Continuation from <a href=\"https:\/\/www.unliterate.net\/index.php\/2018\/01\/23\/home-linux-file-server-with-software-raid-and-iscsi-1-10\/\">Home Linux File Server<\/a><\/p>\n<p>Challenge 9: <strong>Reinstall the core operating system, reconfigure it, and mount and use \/dev\/md0<\/strong>.<\/p>\n<p>The scenario is simple: We didn&#8217;t mirror our boot partition, so now we have to reinstall the operating system and make sure we preserve everything we need to maintain business continuity.<\/p>\n<p><!--more--><\/p>\n<p><strong>BACKUP SAYS THE LION!!!<\/strong><\/p>\n<p>Before addressing the &#8220;everything is not working&#8221; scenario, we need to back up two items:<\/p>\n<p><code>\/etc\/mdadm.conf<\/code>: This is where our \/dev\/md configurations are stored.<\/p>\n<pre>[root@eye-scrunchie ~]# ll \/etc\/mdadm.conf\r\n-rw-r--r-- 1 root root 92 Jan 23 15:35 \/etc\/mdadm.conf\r\n[root@eye-scrunchie ~]# cat \/etc\/mdadm.conf\r\nARRAY \/dev\/md\/0  metadata=1.2 UUID=e47e9e3a:8b2d2d70:430fa6dc:babf2503 name=eye-scrunchie:0<\/pre>\n<p><code>\/etc\/tgt\/targets.conf<\/code>: This is where our iSCSI configuration is.<\/p>\n<pre>[root@eye-scrunchie ~]# ll \/etc\/tgt\/targets.conf\r\n-rw------- 1 root root 7077 Jan 23 22:57 \/etc\/tgt\/targets.conf\r\n[root@eye-scrunchie ~]# cat \/etc\/tgt\/targets.conf\r\n# This is a sample config file for tgt-admin.\r\n# By default, tgt-admin looks for its config file in \/etc\/tgt\/targets.conf\r\n#\r\n# The \"#\" 
symbol disables the processing of a line.\r\n\r\n\r\n# This one includes other config files:\r\n\r\n#include \/etc\/tgt\/temp\/*.conf\r\n\r\n\r\n# Set the driver. If not specified, defaults to \"iscsi\".\r\n#\r\n# This can be iscsi or iser. To override a specific target set the\r\n# \"driver\" setting in the target's config.\r\ndefault-driver iscsi\r\n&lt;target iqn.2018-01.eye-scrunchie:target1&gt;\r\n        backing-store \/dev\/md0\r\n&lt;\/target&gt;\r\n\r\n#<target iqn.2008-09.com.example:iser>\r\n...to the end...<\/pre>\n<p>And one minor item: the network information, in case the motherboard failed and we&#8217;re replacing that as well:<\/p>\n<pre>[root@eye-scrunchie ~]# ip addr\r\n1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN\r\n    link\/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00\r\n    inet 127.0.0.1\/8 scope host lo\r\n    inet6 ::1\/128 scope host\r\n       valid_lft forever preferred_lft forever\r\n2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000\r\n    link\/ether 08:00:27:bd:50:65 brd ff:ff:ff:ff:ff:ff\r\n    inet 192.168.1.38\/24 brd 192.168.1.255 scope global eth0\r\n    inet6 fe80::a00:27ff:febd:5065\/64 scope link\r\n       valid_lft forever preferred_lft forever\r\n<\/pre>\n<p><strong>Wipe, Rinse, Repeat<\/strong><\/p>\n<p>In an actual rebuild scenario I&#8217;d power off the whole system and disconnect the drives that were part of the RAID. Then I&#8217;d plug in the new drive, insert my OS installation media, and install the base OS. 
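The two configuration files and the network details above are everything we need to carry across the reinstall. A minimal sketch of stashing them in one place (the \/tmp\/fileserver-backup destination is hypothetical; in practice it would be removable media or another host):

```shell
# Hedged sketch: gather the files needed to rebuild this server.
# BACKUP is a hypothetical destination, not part of the original write-up.
BACKUP=/tmp/fileserver-backup
mkdir -p "$BACKUP"
for f in /etc/mdadm.conf /etc/tgt/targets.conf; do
    # copy each config if it exists; skip quietly otherwise
    [ -f "$f" ] && cp "$f" "$BACKUP/"
done
# capture the network details too, in case the motherboard is replaced
ip addr > "$BACKUP/ip-addr.txt" 2>/dev/null || true
ls "$BACKUP"
```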
In this case, it&#8217;ll be CentOS 6.<\/p>\n<p>After the installation and a reboot, I do the customary update and pre-configuration:<\/p>\n<pre># yum update\r\n# service iptables save\r\n# service iptables stop\r\n# chkconfig iptables off\r\n# cat \/etc\/selinux\/config | sed s\/=enforcing\/=disabled\/ > \/etc\/selinux\/config.new && rm \/etc\/selinux\/config && mv \/etc\/selinux\/config.new \/etc\/selinux\/config<\/pre>\n<p>And I&#8217;ll need to install the iSCSI packages as well:<\/p>\n<pre># yum install scsi-target-utils\r\n# service tgtd start\r\n# chkconfig tgtd on<\/pre>\n<p><strong>Configurations<\/strong><\/p>\n<p>The mdadm configuration file:<\/p>\n<pre># touch \/etc\/mdadm.conf\r\n# vi \/etc\/mdadm.conf\r\nARRAY \/dev\/md\/0  metadata=1.2 UUID=e47e9e3a:8b2d2d70:430fa6dc:babf2503 name=eye-scrunchie:0<\/pre>\n<p>And the targets.conf file:<\/p>\n<pre># vi \/etc\/tgt\/targets.conf\r\n...adding the following below \"default-driver iscsi\"\r\n&lt;target iqn.2018-01.eye-scrunchie:target1&gt;\r\n        backing-store \/dev\/md0\r\n&lt;\/target&gt;\r\n<\/pre>\n<p><strong>Get the drives hooked up<\/strong><\/p>\n<p>At this point we should have all the software and configuration necessary to get the drives up and running. I shut down the VM and &#8220;hook up the drives&#8221; in the order that they should be on the SATA controller. This includes the three &#8220;good&#8221; drives and the one &#8220;bad&#8221; 64 MB drive. 
I also didn&#8217;t hook them up with the Hot-Swap flag that I used in a previous write-up.<\/p>\n<p>Once they&#8217;re all connected, I turn on the VM and wait for boot-up.<\/p>\n<pre># mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 2\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Thu Jan 25 11:38:47 2018\r\n          State : clean, degraded\r\n Active Devices : 2\r\nWorking Devices : 2\r\n Failed Devices : 0\r\n  Spare Devices : 0\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 72\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       4       8       65        1      active sync   \/dev\/sde1\r\n       4       0        0        4      removed\r\n<\/pre>\n<p>It seems to have found 2\/3 of the drives. I do have access to my iSCSI volume, as my OS has auto-connected back to it, and I can see files on it. 
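Since the iSCSI side came back on its own, one hedged way to confirm tgtd is exporting the target again is to query the daemon directly (tgtadm ships with scsi-target-utils; the exact output depends on the running daemon):

```shell
# Ask the running tgtd for its target list; fall back gracefully when
# the tool or daemon is unavailable (e.g. when run off the file server).
targets=$(tgtadm --lld iscsi --mode target --op show 2>/dev/null \
          || echo "tgtd unavailable")
echo "$targets"
```

On the rebuilt server this should list iqn.2018-01.eye-scrunchie:target1 with \/dev\/md0 as its backing store.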
Let&#8217;s see what the system sees, and maybe we can fix this.<\/p>\n<pre># lsblk\r\nNAME                               MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT\r\nsr0                                 11:0    1 1024M  0 rom\r\nsda                                  8:0    0    3G  0 disk\r\n\u251c\u2500sda1                               8:1    0  500M  0 part  \/boot\r\n\u2514\u2500sda2                               8:2    0  2.5G  0 part\r\n  \u251c\u2500vg_eyescrunchie-lv_root (dm-0) 253:0    0  2.2G  0 lvm   \/\r\n  \u2514\u2500vg_eyescrunchie-lv_swap (dm-1) 253:1    0  304M  0 lvm   [SWAP]\r\nsdb                                  8:16   0  256M  0 disk\r\n\u2514\u2500sdb1                               8:17   0  255M  0 part\r\n  \u2514\u2500md0                              9:0    0  508M  0 raid5\r\n    \u2514\u2500md0p1                        259:0    0  505M  0 md\r\nsdc                                  8:32   0  256M  0 disk\r\n\u2514\u2500sdc1                               8:33   0  255M  0 part\r\nsdd                                  8:48   0   64M  0 disk\r\nsde                                  8:64   0  256M  0 disk\r\n\u2514\u2500sde1                               8:65   0  255M  0 part\r\n  \u2514\u2500md0                              9:0    0  508M  0 raid5\r\n    \u2514\u2500md0p1                        259:0    0  505M  0 md\r\n<\/pre>\n<p>It appears that \/dev\/sdc1 is not part of the array, when it should be. 
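Before re-adding anything, a hedged pre-check (not part of the original run) is to read the partition&#8217;s on-disk RAID superblock with --examine, which works even while the device sits outside the running array; a surviving ex-member should report the same Array UUID as md0:

```shell
# Inspect the candidate member's superblock (device name from this setup).
# --examine reads per-device metadata; --detail reads the assembled array.
DEV=/dev/sdc1
mdadm --examine "$DEV" 2>/dev/null | grep -E 'Array UUID|Device Role|State' \
    || echo "no readable superblock on $DEV"
```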
I&#8217;m going to add it into the array and hopefully it picks it all up.<\/p>\n<pre># mdadm --add \/dev\/md0 \/dev\/sdc1\r\nmdadm: added \/dev\/sdc1\r\n# mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 3\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Thu Jan 25 11:44:42 2018\r\n          State : clean, degraded, recovering\r\n Active Devices : 2\r\nWorking Devices : 3\r\n Failed Devices : 0\r\n  Spare Devices : 1\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n Rebuild Status : 51% complete\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 82\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       4       8       65        1      active sync   \/dev\/sde1\r\n       3       8       33        2      spare rebuilding   \/dev\/sdc1\r\n<\/pre>\n<p>Perfect! 
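While the recovery runs, \/proc\/mdstat shows the same progress as --detail in a compact form; a small guarded sketch (assuming a standard Linux \/proc):

```shell
# Print per-array sync progress, e.g. "recovery = 51.0% ...".
# Guarded so the snippet degrades gracefully where no md arrays exist.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md status on this host"
fi
# For a live view while rebuilding: watch -n1 cat /proc/mdstat
```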
Once the rebuild completes, we are HEALTHY!<\/p>\n<pre># mdadm --misc --detail \/dev\/md0\r\n\/dev\/md0:\r\n        Version : 1.2\r\n  Creation Time : Mon Jan 22 22:35:54 2018\r\n     Raid Level : raid5\r\n     Array Size : 520192 (508.00 MiB 532.68 MB)\r\n  Used Dev Size : 260096 (254.00 MiB 266.34 MB)\r\n   Raid Devices : 3\r\n  Total Devices : 3\r\n    Persistence : Superblock is persistent\r\n\r\n    Update Time : Thu Jan 25 11:44:45 2018\r\n          State : clean\r\n Active Devices : 3\r\nWorking Devices : 3\r\n Failed Devices : 0\r\n  Spare Devices : 0\r\n\r\n         Layout : left-symmetric\r\n     Chunk Size : 512K\r\n\r\n           Name : eye-scrunchie:0  (local to host eye-scrunchie)\r\n           UUID : e47e9e3a:8b2d2d70:430fa6dc:babf2503\r\n         Events : 91\r\n\r\n    Number   Major   Minor   RaidDevice State\r\n       0       8       17        0      active sync   \/dev\/sdb1\r\n       4       8       65        1      active sync   \/dev\/sde1\r\n       3       8       33        2      active sync   \/dev\/sdc1\r\n<\/pre>\n<p>And just to make sure we&#8217;re still good, I&#8217;ll reboot the system and check the RAID again, and I&#8217;m still good to go!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Continuation from Home Linux File Server Challenge 9: Reinstall the core operating system, reconfigure it, and mount and use \/dev\/md0. 
The scenario is simple: We didn&#8217;t mirror our boot partition, so now we have to reinstall the operating system and make sure we preserve everything we need to maintain business continuity.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[19,20,17],"tags":[],"class_list":["post-332","post","type-post","status-publish","format-standard","hentry","category-centos","category-geek-instructions","category-linux"],"_links":{"self":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts\/332","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/comments?post=332"}],"version-history":[{"count":3,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts\/332\/revisions"}],"predecessor-version":[{"id":336,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/posts\/332\/revisions\/336"}],"wp:attachment":[{"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/media?parent=332"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/categories?post=332"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.unliterate.net\/index.php\/wp-json\/wp\/v2\/tags?post=332"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}