On the Road ...

Linux System Operations and Architecture

Preface

I needed to set up a computer for the elderly members of my family. To keep things simple and cheap, I picked up a second-hand Intel NUC7i5BNK on Xianyu. Its CPU is a 7th-generation Intel i5-7260U with integrated Iris Plus Graphics 640, but Intel's site only provides Windows 10 drivers for it, so installing Windows 7 on this NUC is clearly no longer supported. It was hard enough for the old folks to learn Windows 7; asking them to adapt to Windows 10 would be asking too much, hence this write-up of my tinkering.

I. Preparation

Download the required files and drivers

  1. The original Windows 7 SP1 ISO: cn_windows_7_ultimate_with_sp1_x64_dvd_u_677408.iso
  2. Two Win7 hotfixes: KB2990941, which adds an NVMe driver to Win7, and KB3087873, which fixes the bluescreen that can occur after installing the first hotfix.
    BTW: Microsoft no longer offers these as direct downloads; you have to request that the download links be sent to your email. The downloads are two .exe files; running them extracts the hotfixes as .msu files, which we will use later.
  3. The Intel NVMe driver. I bought an Intel 760p M.2 NVMe SSD, so I need its driver; if you use an NVMe SSD from another vendor, download the corresponding driver instead.
    Official page: https://downloadcenter.intel.com/zh-cn/download/27518/-ssd-NVMe-Microsoft-windows-?product=129831, download link: https://downloadmirror.intel.com/27518/eng/Client-x64.zip
  4. The Intel 200 series chipset USB 3.0 driver. Official page: https://downloadcenter.intel.com/download/22824/Intel-USB-3-0-eXtensible-Host-Controller-Driver-for-Intel-8-9-100-Series-and-Intel-C220-C610-Chipset-Family, download link: https://downloadmirror.intel.com/22824/eng/Intel(R)_USB_3.0_eXtensible_Host_Controller_Driver_5.0.4.43_v2.zip

Prepare the working directory and files

  1. Find a disk partition with at least 15 GB of free space; an SSD is recommended, otherwise mounting the image files later will be very slow. I will use drive D: as the example; create a working directory named w7sp2 there.
  2. Under D:\w7sp2, create three subdirectories: mount, driver, and hotfix.
  3. Put the two hotfix files downloaded in step 2 above into the hotfix directory.
  4. Unpack the drivers downloaded in steps 3 and 4 above and put them into the driver directory.
  5. Open the original Windows 7 ISO with UltraISO, go into the sources directory, and extract boot.wim and install.wim into D:\w7sp2.

II. Slipstreaming the image

Open a CMD window and switch to the D:\w7sp2 working directory:

D:
cd D:\w7sp2

Add the NVMe and USB 3.0 drivers to the Win7 boot image (boot.wim):

dism /mount-wim /wimfile:D:\w7sp2\boot.wim /index:2 /mountdir:D:\W7SP2\mount
dism /image:D:\w7sp2\mount /add-driver /driver:D:\w7sp2\driver /Recurse
dism /unmount-wim /mountdir:D:\w7sp2\mount /commit

Add the hotfixes and drivers to the Win7 install image (install.wim):

dism /mount-wim /wimfile:D:\w7sp2\install.wim /index:4 /mountdir:D:\W7SP2\mount
dism /image:D:\w7sp2\mount /add-package /packagepath:D:\w7sp2\hotfix
dism /image:D:\w7sp2\mount /add-driver /driver:D:\w7sp2\driver /Recurse
dism /unmount-wim /mountdir:D:\w7sp2\mount /commit
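If you are unsure which image index to use (the commands above mount index 2 of boot.wim, the Windows Setup image, and index 4 of install.wim), you can list the images contained in each file first:

dism /Get-WimInfo /WimFile:D:\w7sp2\boot.wim
dism /Get-WimInfo /WimFile:D:\w7sp2\install.wim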

Use UltraISO to add the slipstreamed boot.wim and install.wim from the w7sp2 directory back into the original Win7 ISO.
Save the result as a new ISO file; do not overwrite the original ISO.

III. Creating the USB installer and installing the system

Now open the newly generated ISO with UltraISO and use the "Write Disk Image" feature to create the Win7 installation USB drive.
The installation itself is then no different from a normal install.

IV. After installation

After installation, most drivers can be handled by a tool such as 驱动精灵 (Driver Genius); the one exception is the Intel integrated graphics driver: the Windows 10 version refuses to install with an "unsupported CPU" message.
After some searching, I finally found a way to install it:
First, in System → Device Manager → Standard VGA Graphics Adapter → Properties → Details → Hardware Ids, look up and note down the current device ID. It looks like "PCI\VEN_8086&DEV_5926"; what matters are the four hex digits after DEV_.
Download the ZIP package of the Intel integrated graphics driver from https://downloadmirror.intel.com/26836/eng/win64_154519.4678.zip. After unpacking, find igdlh64.inf in the Graphics directory, open it with Notepad, and search for the DEV value you noted (5926). You should find a line like "%iKBLULTGT3E15% = iKBLD_w10, PCI\VEN_8086&DEV_5926".
[screenshot: igdlh64-1.PNG]
Copy this line, then search upward for "iSKLWSGT4". On the line below it, paste the copied line and change it to "%iKBLULTGT3E15% = iSKLD_w7, PCI\VEN_8086&DEV_5926".
After the edit it should look like this:
[screenshot: igdlh64-2.PNG]
After saving, go back up one directory level and run setup to install the graphics driver.


Pre-install: install the dependency packages:

apt install lcov pandoc autoconf-archive liburiparser-dev libdbus-1-dev libglib2.0-dev dbus-x11 libssl-dev \
autoconf automake libtool pkg-config gcc  libcurl4-gnutls-dev libgcrypt20-dev libcmocka-dev uthash-dev

I. Download and install the TPM simulator

IBM TPM simulator project page: https://sourceforge.net/projects/ibmswtpm2/files/
Download the latest version: wget https://jaist.dl.sourceforge.net/project/ibmswtpm2/ibmtpm1332.tar.gz

mkdir ibmtpm1332
cd ibmtpm1332/
tar zxvf  ../ibmtpm1332.tar.gz
cd src/
make
cp tpm_server /usr/local/bin/

Add a tpm-server.service unit:
vi /lib/systemd/system/tpm-server.service

[Unit]
Description=TPM2.0 Simulator Server Daemon
Before=tpm2-abrmd.service

[Service]
ExecStart=/usr/local/bin/tpm_server 
Restart=always
Environment=PATH=/usr/bin:/usr/local/bin

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl start tpm-server.service

Confirm that the TPM simulator has started correctly.
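A quick way to check (the simulator's default command port, 2321, is also what the tpm2-abrmd TCTI configuration below points at):

systemctl status tpm-server.service
ss -lntp | grep tpm_server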

II. Install the TPM2 software packages

1. Install tpm2-tss

Add the tss user:
useradd --system --user-group tss

Download:
wget https://github.com/tpm2-software/tpm2-tss/releases/download/2.1.0/tpm2-tss-2.1.0.tar.gz

tar zxvf tpm2-tss-2.1.0.tar.gz
cd tpm2-tss-2.1.0/
./configure --enable-unit --enable-integration
make check
make install
ldconfig
cd ..

2. Install tpm2-abrmd

Download:
wget https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.0.2/tpm2-abrmd-2.0.2.tar.gz

tar zxvf tpm2-abrmd-2.0.2.tar.gz
cd tpm2-abrmd-2.0.2/
ldconfig
./configure --with-dbuspolicydir=/etc/dbus-1/system.d --with-systemdsystemunitdir=/lib/systemd/system
make
make install

cp /usr/local/share/dbus-1/system-services/com.intel.tss2.Tabrmd.service /usr/share/dbus-1/system-services/

Reload D-Bus:
pkill -HUP dbus-daemon

Edit the tpm2-abrmd.service systemd unit:
vi /lib/systemd/system/tpm2-abrmd.service
Change "ExecStart=/usr/local/sbin/tpm2-abrmd" to:
ExecStart=/usr/local/sbin/tpm2-abrmd --tcti="libtss2-tcti-mssim.so.0:host=127.0.0.1,port=2321"

systemctl daemon-reload
systemctl start tpm2-abrmd.service
Check its status and confirm that the service started correctly.

3. Install tpm2-tools

git clone https://github.com/tpm2-software/tpm2-tools.git
cd tpm2-tools/
./bootstrap
./configure
make

Test whether tpm2-tools can talk to the abrmd service:
./tools/tpm2_getrandom 4

If that works:
make install

Installation complete.

Run tpm2_pcrlist and check that it produces normal output.

III. Common tpm2 commands

Set the TPM passwords (-o owner password, -e endorsement password, -l lockout password):
tpm2_takeownership -o 1 -e 1 -l 1

Create a primary object in the endorsement hierarchy, with 11 as the object password (-K), RSA keys and the SHA-256 name hash algorithm, authenticating to the hierarchy with the endorsement password set above (-P 1), and saving the object context to po.ctx:
tpm2_createprimary -H e -K 11 -g 0x000b -G 0x0001 -C po.ctx -P 1

Create an RSA key under the primary key above, with 111 as the object password (-K) and the SHA-256 name hash algorithm, authenticating to the parent with -P 11, and saving the public portion to key.pub and the private portion to key.priv:
tpm2_create -c po.ctx -P 11 -K 111 -g 0x000b -G 0x0001 -u key.pub -r key.priv

Load the created RSA key:
tpm2_load -c po.ctx -P 11 -u key.pub -r key.priv -n key.name -C obj.ctx

Encrypt file data.in with RSA key:
tpm2_rsaencrypt -c obj.ctx -o data.encrypt data.in

Decrypt with RSA key:
tpm2_rsadecrypt -c obj.ctx -I data.encrypt -P 111 -o data.decrypt
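To sanity-check the whole flow, here is a small round trip using the commands above (the input file must be small enough for a single RSA operation):

echo "hello tpm" > data.in
tpm2_rsaencrypt -c obj.ctx -o data.encrypt data.in
tpm2_rsadecrypt -c obj.ctx -I data.encrypt -P 111 -o data.decrypt
diff data.in data.decrypt && echo OK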

Sign the PCRs with tpm2_quote and verify the signature with OpenSSL:

# Generate an ECC key
openssl ecparam -name prime256v1 -genkey -noout -out private.ecc.pem
openssl ec -in private.ecc.pem -out public.ecc.pem -pubout

# Load the private key for signing
tpm2_loadexternal -Q -G ecc -r private.ecc.pem -o key.ctx

# Sign in the TPM and verify with OSSL
tpm2_quote -C key.ctx -G sha256 -L sha256:16,17,18 -f plain -q 11aabb -s pcr.out.signed -m pcr.in.raw
openssl dgst -verify public.ecc.pem -keyform pem -sha256 -signature pcr.out.signed pcr.in.raw 

Note: tpm2_quote failed with the following error:

ERROR: Could not convert signature hash algorithm selection, got: "sha256"

Googling turned up nothing, so in the end I had to read the source and found this code starting at line 191 of tools/tpm2_quote.c:
[screenshot: tpm2-tools-quote.png]
It converts the value passed to -G on the command line and compares it against a set of predefined flags.
For some reason it uses "tpm2_alg_util_flags_sig" here; checking the definitions in lib/tpm2_alg_util.c, flags_sig does not include sha256, which is what causes the error.
[screenshot: tpm2_lib_alg_util.png]
However, when I tried an algorithm that is in that definition, such as ecdsa, I got a different error:

ERROR: Tss2_Sys_Quote(0x2C3) - tpm:parameter(2):hash algorithm not supported or not appropriate
ERROR: Unable to run tpm2_quote

This is probably just a limitation of the TPM simulator; I don't know whether a real physical TPM chip supports it, and will test that when I get the chance.

Workaround: for now, edit the tpm2_quote source, change "tpm2_alg_util_flags_sig" on line 192 to "tpm2_alg_util_flags_hash", and rebuild.
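If you prefer not to edit the file by hand, the same one-line change can be applied with sed before rebuilding (the line number comes from the version discussed above and may shift in other releases):

sed -i '192s/tpm2_alg_util_flags_sig/tpm2_alg_util_flags_hash/' tools/tpm2_quote.c
make && make install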


A Multi-Node Private Ethereum Chain

I. Install the Ethereum client

OS version: Ubuntu 16.04

Add the Geth repository:

apt install software-properties-common
add-apt-repository -y ppa:ethereum/ethereum

Update apt and install Geth plus Supervisor (to run Geth as a service):

apt update
apt -y install ethereum supervisor python-pip curl

Upgrade pip & Supervisor:

pip install pip --upgrade
pip install supervisor --upgrade
sed -i "s#usr/bin#usr/local/bin#g" /lib/systemd/system/supervisor.service

Configure the Geth Supervisor service: copy and paste the following into /etc/supervisor/conf.d/geth.conf

vi /etc/supervisor/conf.d/geth.conf
[program:geth]
command=bash -c '/usr/bin/geth'
autostart=true
autorestart=true
stderr_logfile=/var/log/supervisor/geth.err.log
stdout_logfile=/var/log/supervisor/geth.out.log

Start supervisor, which will auto-start Geth

systemctl enable supervisor
systemctl start supervisor

At this point the public Ethereum client is installed; running geth will automatically start syncing blocks.
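To confirm that Supervisor is actually managing geth, check with supervisorctl and the log files defined in geth.conf above:

supervisorctl status geth
tail -f /var/log/supervisor/geth.out.log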

II. Setting up the private chain

Create the private chain data directory:

mkdir /data/testchain

Create the genesis block JSON file:

vi genesis.json
{
    "config": {
        "chainId": 2018,
        "homesteadBlock": 0
    },
    "coinbase" : "0x0000000000000000000000000000000000000000",
    "difficulty" : "0x400",
    "gasLimit" : "0x2fefd8",
    "nonce" : "0x0000000000000142",
    "mixhash" : "0x0000000000000000000000000000000000000000000000000000000000000000",
    "parentHash" : "0x0000000000000000000000000000000000000000000000000000000000000000",
    "timestamp" : "0x00",
    "alloc": {
    }
}

Initialize the genesis block:

geth init  genesis.json --datadir /data/testchain/
WARN [02-06|17:46:00] No etherbase set and no accounts found as default 
INFO [02-06|17:46:00] Allocated cache and file handles         database=/root/testchain/geth/chaindata cache=16 handles=16
INFO [02-06|17:46:00] Writing custom genesis block 
INFO [02-06|17:46:00] Successfully wrote genesis state         database=chaindata                      hash=ac4e66…7f2921
INFO [02-06|17:46:00] Allocated cache and file handles         database=/root/testchain/geth/lightchaindata cache=16 handles=16
INFO [02-06|17:46:00] Writing custom genesis block 
INFO [02-06|17:46:00] Successfully wrote genesis state         database=lightchaindata

Start the private chain:

geth  --datadir /data/testchain/ --networkid 2018 --rpc --rpcport "8845" --rpccorsdomain "*" --port "30333" --nodiscover

The first private chain node is now running; the RPC port and P2P port can be set to whatever values you like.

For the second node the first few steps are identical; only the final command used to start the node changes slightly.

First, look up the first node's nodeInfo, which the second node needs at startup.
On the first node, attach to the IPC console:

geth attach /data/testchain/geth.ipc
Welcome to the Geth JavaScript console!

instance: Geth/Roadchain/v1.7.3-stable-4bb3c89d/linux-amd64/go1.9
coinbase: 0x81e71d34e8a9e4382c36fd90c3f234549106addd
at block: 6 (Tue, 06 Feb 2018 17:54:11 CST)
 datadir: /root/testchain
 modules: admin:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0

> admin.nodeInfo
{
  enode: "enode://1f9cf6ef261966099b2d3498a2517a900318c141bea00edac71f1617dc6987852ce0239eea2d6490bd2af07409b2d623072ce3c1d3f3074dd914f31ba06a7c2f@[::]:30333?discport=0",
  id: "1f9cf6ef261966099b2d3498a2517a900318c141bea00edac71f1617dc6987852ce0239eea2d6490bd2af07409b2d623072ce3c1d3f3074dd914f31ba06a7c2f",
  ip: "::",
  listenAddr: "[::]:30333",
  name: "Geth/Roadchain/v1.7.3-stable-4bb3c89d/linux-amd64/go1.9",
  ports: {
    discovery: 0,
    listener: 30333
  },
  protocols: {
    eth: {
      difficulty: 788096,
      genesis: "0x3782eafbc5ab71618f9a6aaa3506a385c50c20d3682ade9ea817e9025cadf804",
      head: "0x74ae4f44d9326a000cd4920e2f9cf4d85ff1b7289c5b04af91a6cc1b8ba032df",
      network: 2018
    }
  }
}

The enode value is what we need to record: enode://1f9cf6ef261966099b2d3498a2517a900318c141bea00edac71f1617dc6987852ce0239eea2d6490bd2af07409b2d623072ce3c1d3f3074dd914f31ba06a7c2f@[::]:30333

Replace the '[::]' after the '@' with the server's IP address, e.g. 192.168.1.11.

Now start geth on the second node:

geth  --datadir /data/testchain/ --networkid "2018" --rpc --rpcport "8845" --rpccorsdomain "*"  --port "30333"  --bootnodes "enode://1f9cf6ef261966099b2d3498a2517a900318c141bea00edac71f1617dc6987852ce0239eea2d6490bd2af07409b2d623072ce3c1d3f3074dd914f31ba06a7c2f@192.168.1.11:30333"

Now run admin.peers in the geth console on each node to check that the two nodes can see each other.
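If the peers do not show up via --bootnodes, you can also add the first node manually from the second node's console, using the enode recorded above:

> admin.addPeer("enode://1f9cf6ef261966099b2d3498a2517a900318c141bea00edac71f1617dc6987852ce0239eea2d6490bd2af07409b2d623072ce3c1d3f3074dd914f31ba06a7c2f@192.168.1.11:30333")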

III. Additional operations

Create an account.
Run this in the geth console; 12345678 is the account password, so change it to your own:

> personal.newAccount("12345678")
"0x81e71d34e8a9e4382c36fd90c3f234549106addd"

Unlock the account:

personal.unlockAccount("0x81e71d34e8a9e4382c36fd90c3f234549106addd","12345678")

Start and stop solo mining:

> miner.start()
null
> miner.stop()
true

Check an account balance:

eth.getBalance("0x81e71d34e8a9e4382c36fd90c3f234549106addd")
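eth.getBalance returns the balance in wei; to display it in ether, wrap it with web3.fromWei:

> web3.fromWei(eth.getBalance("0x81e71d34e8a9e4382c36fd90c3f234549106addd"), "ether")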

Pre-allocating a balance via the genesis block:
After starting the first node as described above, create an account and copy its address.
Edit genesis.json and add the following to the alloc section:

    "alloc": {
        "0x81e71d34e8a9e4382c36fd90c3f234549106addd": { "balance": "20000000000000000000" }
    }

The address here is the one you just created, and balance is the amount (in wei) you want to pre-allocate.

Then delete the geth directory under the data directory and re-create the genesis block (do not touch the keystore directory):

rm -rf /data/testchain/geth

Re-run the geth init command to create the genesis block, start geth again, and you will see the pre-allocated balance on the account in the console.


First, check the status information of the physical disks:

/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

Note the Enclosure Device ID and Slot Number of the newly added disk.
For example:
Enclosure Device ID: 32

Slot Number: 3

Also check whether the Foreign State value is None.

If it shows Foreign State: Foreign, run the following command to confirm:

/opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -a0

Output:

There are 2 foreign configuration(s) on controller 0.

Then clear the foreign configuration:

/opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Clear -a0

Re-run the physical disk status command and confirm that Foreign State is now None.

Now you can create the RAID; this example creates a RAID 1:

/opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r1 [32:2,32:3] WB Direct -a0

On success it prints:

Adapter 0: Created VD 1

Adapter 0: Configured the Adapter!!

Now check the RAID information:

/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL

You should see a new Virtual Drive:

Virtual Drive: 1 (Target Id: 1)
Name :Virtual Disk 1
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 931.0 GB
Sector Size : 512
Mirror Data : 931.0 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LDs IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: No

Run lsblk

and you will see a new disk, sdb.

The new RAID array is now complete.

Partition the newly added disk:

fdisk /dev/sdb

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1

Press Enter at the next two prompts to use the entire disk.

Then change the partition type to Linux LVM:

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create a new PV:
pvcreate /dev/sdb1

Physical volume "/dev/sdb1" successfully created

Check the existing volume group:

vgdisplay

--- Volume group ---
VG Name VolGroup00
System ID 
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 237.86 GiB
PE Size 4.00 MiB
Total PE 60891
Alloc PE / Size 60000 / 234.38 GiB
Free PE / Size 891 / 3.48 GiB
VG UUID UDad6V-j7j9-36SR-R5jR-EhLy-ofsS-lGUZdO

You can see there is very little free space left.

Extend the volume group:

vgextend VolGroup00 /dev/sdb1

Volume group "VolGroup00" successfully extended

Run vgdisplay again:

--- Volume group ---
VG Name VolGroup00
System ID 
Format lvm2
Metadata Areas 2
Metadata Sequence No 8
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 2
Act PV 2
VG Size 1.14 TiB
PE Size 4.00 MiB
Total PE 299226
Alloc PE / Size 60000 / 234.38 GiB
Free PE / Size 239226 / 934.48 GiB
VG UUID UDad6V-j7j9-36SR-R5jR-EhLy-ofsS-lGUZdO

The free space has now increased.

Now you can extend the logical volume that was running out of space.

The following command adds 100 GB to the logical volume backing the var partition:

lvextend -L +100G /dev/VolGroup00/varvol

Extending logical volume varvol to 197.66 GiB
Logical volume varvol successfully resized

Then run resize2fs /dev/VolGroup00/varvol to grow the filesystem online.

Once that finishes, df -h will show that the var partition has gained 100 GB.
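As a side note, newer lvm2 releases can grow the filesystem in the same step with lvextend -r, which replaces the separate resize2fs call (assuming your distribution's lvm2 supports it):

lvextend -r -L +100G /dev/VolGroup00/varvol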

Another way to create a RAID, on servers with a simpler RAID card:
This type of RAID card only supports RAID 0 and RAID 1, and the tool is equally simple and interactive. Below is a transcript of the command-line session; the values we typed are marked with '--->' annotations.

./lsiutil.x86_64 ---> run the tool

LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009

1 MPT Port found

Port Name Chip Vendor/Type/Rev MPT Rev Firmware Rev IOC
1. /proc/mpt/ioc0 LSI Logic SAS1068E B3 105 00192f00 0

Select a device: [1-1 or 0 to quit] 1  ---> the RAID cards in the server are listed here; there is only one, so enter 1

1. Identify firmware, BIOS, and/or FCode
2. Download firmware (update the FLASH)
4. Download/erase BIOS and/or FCode (update the FLASH)
8. Scan for devices
10. Change IOC settings (interrupt coalescing)
13. Change SAS IO Unit settings
16. Display attached devices
20. Diagnostics
21. RAID actions
22. Reset bus
23. Reset target
42. Display operating system names for devices
45. Concatenate SAS firmware and NVDATA files
59. Dump PCI config space
60. Show non-default settings
61. Restore default settings
66. Show SAS discovery errors
69. Show board manufacturing information
97. Reset SAS link, HARD RESET
98. Reset SAS link
99. Reset port
e Enable expert mode in menus
p Enable paged mode
w Enable logging

Main menu, select an option: [1-99 or e/p/w or 0 to quit] 8  ---> first scan to check that the newly added disks have been detected

SAS1068E links are 3.0 G, 3.0 G, 3.0 G, 3.0 G, off, off, off, off

B___T___L Type Vendor Product Rev SASAddress PhyNum
0 1 0 Disk Dell VIRTUAL DISK 1028 
0 2 0 Disk SEAGATE ST1000NM0023 GS10 5000c50084278645 2  ---> both new disks are listed
0 3 0 Disk SEAGATE ST1000NM0023 GS10 5000c50084279871 3
0 8 0 EnclServ DP BACKPLANE 1.07 5882b0b013553f00 8

Hidden RAID Devices:

B___T Device Vendor Product Rev SASAddress PhyNum
0 9 PhysDisk 0 SEAGATE ST3300657SS ES64 5000c500289afa69 1
0 0 PhysDisk 1 SEAGATE ST3300657SS ES64 5000c50028df2b41 0

Main menu, select an option: [1-99 or e/p/w or 0 to quit] 21  ---> enter the RAID actions menu

1. Show volumes
2. Show physical disks
3. Get volume state
4. Wait for volume resync to complete
23. Replace physical disk
26. Disable drive firmware update mode
27. Enable drive firmware update mode
30. Create volume
31. Delete volume
32. Change volume settings
33. Change volume name
50. Create hot spare
51. Delete hot spare
99. Reset port
e Enable expert mode in menus
p Enable paged mode
w Enable logging

RAID actions menu, select an option: [1-99 or e/p/w or 0 to quit] 30  ---> create a new volume

B___T___L Type Vendor Product Rev Disk Blocks Disk MB
1. 0 2 0 Disk SEAGATE ST1000NM0023 GS10 1953525168 953869
2. 0 3 0 Disk SEAGATE ST1000NM0023 GS10 1953525168 953869

To create a volume, select 2 or more of the available targets
select 3 to 10 targets for a mirrored volume
select 2 to 10 targets for a striped volume

Select a target: [1-2 or RETURN to quit] 1  ---> we want a RAID 1 across the two disks, so enter 1 and then 2 to select the two newly added disks
Select a target: [1-2 or RETURN to quit] 2

2 physical disks were created

Select volume type: [0=Mirroring, 1=Striping, default is 0] 0  ---> choose 0 for mirroring, i.e. RAID 1
Required metadata size is 512 MB, plus 2 MB
Select volume size: [1 to 953338 MB, default is 953338]   ---> accept the defaults here, just press Enter
Enable write caching: [Yes or No, default is Yes] 
Zero the first and last blocks of the volume? [Yes or No, default is No] 
Skip initial volume resync? [Yes or No, default is No]

Volume was created   ---> at this point the new RAID 1 has been created

RAID actions menu, select an option: [1-99 or e/p/w or 0 to quit] 1   ---> now list the current RAID volumes

2 volumes are active, 4 physical disks are active   ---> two volumes are listed; the second one, Volume 1, is the one we just created

Volume 0 is Bus 0 Target 1, Type IM (Integrated Mirroring)
Volume Name: 
Volume WWID: 0c3de93603fd6e58
Volume State: optimal, enabled
Volume Settings: write caching enabled, auto configure, priority resync
Volume draws from Hot Spare Pools: 0
Volume Size 285568 MB, 2 Members
Primary is PhysDisk 1 (Bus 0 Target 0)
Secondary is PhysDisk 0 (Bus 0 Target 9)

Volume 1 is Bus 0 Target 2, Type IM (Integrated Mirroring)
Volume Name: 
Volume WWID: 06fe97cc153db8c2
Volume State: degraded, enabled, resync in progress
Volume Settings: write caching enabled, auto configure, priority resync
Volume Size 953338 MB, 2 Members
Primary is PhysDisk 2 (Bus 0 Target 10)
Secondary is PhysDisk 3 (Bus 0 Target 3)

RAID actions menu, select an option: [1-99 or e/p/w or 0 to quit] 3   ---> check the volume state

Volume 0 is Bus 0 Target 1, Type IM (Integrated Mirroring)
Volume 1 is Bus 0 Target 2, Type IM (Integrated Mirroring)

Volume: [0-1 or RETURN to quit] 1   ---> choose 1, the volume we just created

Volume 1 State: degraded, enabled, resync in progress
Resync Progress: total blocks 1952436224, blocks remaining 1952199736, 99%   ---> the volume is doing its initial resync; once this 99% drops to 0 and the state becomes optimal, enabled, the volume is ready for use

RAID actions menu, select an option: [1-99 or e/p/w or 0 to quit] 0   ---> quit

Main menu, select an option: [1-99 or e/p/w or 0 to quit] 0   ---> quit

Port Name Chip Vendor/Type/Rev MPT Rev Firmware Rev IOC
1. /proc/mpt/ioc0 LSI Logic SAS1068E B3 105 00192f00 0

Select a device: [1-1 or 0 to quit] 0   ---> quit

Once the RAID volume reaches the normal usable state, follow the LVM extension procedure described above.


(This section is excerpted from Baidu Wenku.)

1. View RAID information
mdadm --detail /dev/md0
This shows the detailed information about the array.

2. Remove and restore a RAID member disk (using hda1 as the example)
First remove the disk:
mdadm /dev/md0 -f /dev/hda1    # mark the disk as faulty
mdadm /dev/md0 -r /dev/hda1    # remove the faulty disk

Restore the previously removed disk:
mdadm /dev/md0 -a /dev/hda1

If you check the RAID information now, /dev/hda1 has automatically become a hot spare.

3. Grow an existing RAID
First create the partition to be added: /dev/hdd1
Add the disk:
mdadm --add /dev/md0 /dev/hdd1
md0 now has an extra spare disk; the next step is to grow the array:
mdadm --grow /dev/md0 --raid-devices=4
Here the number of devices is increased in grow mode; the device capacity can also be grown this way.
fsck.ext3 /raid
Check the filesystem in preparation for the resize.
resize2fs /raid
Grow the filesystem and update its metadata.
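While the array is reshaping or resyncing after --grow, you can watch the progress in /proc/mdstat:

watch cat /proc/mdstat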

4. Create the RAID config file
echo DEVICE /dev/hd[a-d]1 >> /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf
The config file will then look like this:
DEVICE /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hdd1
ARRAY /dev/md0 level=raid5 num-devices=4
UUID = 9ca85577:25660a81:67152b19:3235d3s6

5. Starting and stopping the RAID
mdadm -S /dev/md0    # stop the array
How do you start the RAID again?
If the RAID config file has been set up:
mdadm -As /dev/md0
and the array starts automatically based on the config file.
If there is no config file:
mdadm -A /dev/md0 /dev/hd[a-d]1
Here the member disks are given explicitly, and the array starts.

While experimenting with software RAID on Linux, I hit the following when trying to re-create an array:
[root@client ~]# mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb5 /dev/sdb6
mdadm: another array by this name is already running.

[root@client ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@client ~]# mdadm -D /dev/md0
mdadm: md device /dev/md0 does not appear to be active.

After that, the array could be created again.

mdadm -S, --stop
deactivate array, releasing all resources.

In some cases this is still not enough:
mdadm -S /dev/md0
mdadm -D /dev/md0
and a reboot is needed for it to take effect.
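A step that often helps in that situation is to wipe the old RAID metadata from the member partitions after stopping the array, so the kernel cannot re-assemble it (a sketch using the sdb5/sdb6 partitions from the example above; adjust to your own devices):

mdadm -S /dev/md0
mdadm --zero-superblock /dev/sdb5 /dev/sdb6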

