KVM Working Notes
myron    2020-09-02 11:59:17

3. Resize the partition
e2fsck -f /dev/sdc2   # check the filesystem first
resize2fs /dev/sdc2   # resize the filesystem (the key step)


4. Remount the partition
Run mount again, or unplug and re-insert the device.

Note: a partition that is in use cannot be repartitioned. Unmount it first, then repartition.
sudo mkfs.ext4 -L "backup" /dev/sdc3
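As a hedged aside, when shrinking a filesystem it helps to compute the exact ext4 size that fits the new partition before running resize2fs; the sector count below is made up purely for illustration:

```shell
# Sketch only: compute an ext4 size (in 4 KiB blocks) that fits a
# partition of a given sector count. The sector count is hypothetical.
sectors=41943040                       # e.g. a 20 GiB partition (512-byte sectors)
fs_blocks=$(( sectors * 512 / 4096 ))  # ext4 default block size is 4 KiB
echo "$fs_blocks"                      # usable as: resize2fs /dev/sdc2 ${fs_blocks}
```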


————————————————————

Configuration for live migration
This section lists the configuration that live migration depends on; it must be correct on every physical host for a live migration to complete successfully.

Libvirt
By default libvirt accepts remote connections over TLS only, not TCP. Set listen_tls=0 and listen_tcp=1 so that libvirt also accepts TCP.
Edit /etc/sysconfig/libvirtd:

LIBVIRTD_ARGS="--listen"

Add the following to /etc/libvirt/libvirtd.conf:

listen_tls=0
listen_tcp=1
auth_tcp="none"

Restart the libvirtd service.

DNS on the physical hosts
Add the hostname and IP of every physical host to /etc/hosts on each physical host, for example:

192.168.0.1 compute-1 compute-1.ibm.com
192.168.0.2 compute-2 compute-2.ibm.com

Firewall
In /etc/sysconfig/iptables, open TCP port 16509:

-A INPUT -p tcp -m multiport --ports 16509 -m comment --comment "libvirt" -j ACCEPT
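Before attempting a migration it is worth verifying that every peer host name resolves. A small sketch, using an inline copy of the example hosts entries above (on a real host you would grep /etc/hosts itself):

```shell
# Sketch: verify that each migration peer appears in the hosts file.
# The file content is inlined here from the example above.
hosts='192.168.0.1 compute-1 compute-1.ibm.com
192.168.0.2 compute-2 compute-2.ibm.com'

missing=0
for h in compute-1 compute-2; do
    printf '%s\n' "$hosts" | grep -qw "$h" || missing=$((missing + 1))
done
echo "missing=$missing"
```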


————————————————————


03\04  blockchain

07\08  big data

11\13  basic services

12  ops virtualization

05, 06, 09, 10  physical hosts to be freed

Database VMs -> physical machines (release the database VMs)

k8s: resource usage, resource control, report

—————————————————————————
# virsh: connect to a remote libvirtd over SSH

virsh -c qemu+ssh://root@kvm02/system

________________________________________________

Image is corrupt; cannot be opened read/write
# qemu-img check -r all /work/vdisk/ceph_centos65_x86_64_ceph_osd1.img

________________________________________________

qemu-kvm: Migrate: socket bind failed


1. Error message
qemu-kvm: Migrate: socket bind failed

2. Cause:
qemu's migration port conflicts with a glusterfs port.
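The clash can be illustrated with the default port numbers (the defaults below are assumptions based on common libvirt and glusterfs versions; check yours):

```shell
# libvirt picks qemu migration ports from 49152 upward by default;
# glusterfs brick processes also listen from base-port 49152 upward.
qemu_min=49152; qemu_max=49215      # assumed libvirt migration_port_min/max defaults
gluster_base=49152                  # assumed glusterfs default base-port

if [ "$gluster_base" -ge "$qemu_min" ] && [ "$gluster_base" -le "$qemu_max" ]; then
    result=conflict                 # both daemons race for the same ports
else
    result=ok
fi
echo "$result"
```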

3. Fix:
There are two ways to solve this:

First:

On every machine with glusterfs installed, run:
# vi /etc/glusterfs/glusterd.vol
and add one line before "end-volume":
option base-port 50152   (any other port works too, as long as it is not 49152)

Then restart the glusterfs service.

Second: change libvirt's migration port range in /etc/libvirt/qemu.conf (shown in step S3 of the verification log below).

# virsh migrate --live rhel qemu+ssh://10.66.100.102/system --verbose
error: internal error Process exited while reading console log output: char device redirected to /dev/pts/2
qemu-kvm: Migrate: socket bind failed: Address already in use
Migration failed. Exit code tcp:[::]:49152(-1), exiting.

verify with build: libvirt-0.10.2-37.el6.x86_64
steps:
S1:
1: prepare gluster server and client (on migration source and dst.)
2: mount the gluster pool on both source and dst.
10.66.100.103:/gluster-vol1 on /var/lib/libvirt/migrate type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
3: prepare a guest with gluster storage
4: check ports on dst.
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 0 194240 29866/glusterfsd
tcp 0 0 10.66.100.102:49152 10.66.100.103:1015 ESTABLISHED 0 194475 29866/glusterfsd
tcp 0 0 10.66.100.102:49152 10.66.100.102:1021 ESTABLISHED 0 194462 29866/glusterfsd
tcp 0 0 10.66.100.102:1016 10.66.100.102:49152 ESTABLISHED 0 194748 30008/glusterfs
tcp 0 0 10.66.100.102:1018 10.66.100.103:49152 ESTABLISHED 0 194464 29879/glusterfs
tcp 0 0 10.66.100.102:1015 10.66.100.103:49152 ESTABLISHED 0 194751 30008/glusterfs
tcp 0 0 10.66.100.102:49152 10.66.100.103:1017 ESTABLISHED 0 194466 29866/glusterfsd
tcp 0 0 10.66.100.102:49152 10.66.100.102:1016 ESTABLISHED 0 194749 29866/glusterfsd
tcp 0 0 10.66.100.102:1021 10.66.100.102:49152 ESTABLISHED 0 194461 29879/glusterfs
5: do migration
6: repeat 20 times, no error occurred.
S2:
1: prepare a guest, the same as in S1
2: start a live migration, then cancel it
# virsh migrate rhel qemu+ssh://10.66.100.102/system --verbose
Migration: [ 2 %]^Cerror: operation aborted: migration job: canceled by client
3: before the migration is canceled, check ports on dst.:
tcp 0 0 :::49153 :::* LISTEN 107 212413 931/qemu-kvm
tcp 0 0 ::ffff:10.66.100.102:49153 ::ffff:10.66.100.103:52244 ESTABLISHED 107 212487 931/qemu-kvm
after cancelling, check the ports again:
# netstat -laputen | grep 49153
no output
If the job is cancelled, the port is cleaned up and can be reused by the next migration.
S3:
1: edit /etc/libvirt/qemu.conf as below, then restart libvirtd
migration_port_min = 51152
migration_port_max = 51251

2:do migration
3:check dst. port
# netstat -laputen | grep 51
tcp 0 0 10.66.100.102:1015 10.66.100.103:49152 ESTABLISHED 0 194751 30008/glusterfs
tcp 0 0 :::51152 :::* LISTEN 107 214187 1179/qemu-kvm
tcp 0 0 ::ffff:10.66.100.102:51152 ::ffff:10.66.100.103:56922 ESTABLISHED 107 214260 1179/qemu-kvm
# virsh migrate rhel qemu+ssh://10.66.100.102/system --verbose
Migration: [100 %]
migration worked well.


_________________________________________________

Connecting to libvirtd over TCP

Libvirt is a collection of software for managing virtual machines and other virtualization features such as storage and network interfaces. It consists of an API library, a daemon (libvirtd), and a command-line tool (virsh). Libvirt's main goal is to provide a single way to manage many different virtualization providers and hypervisors.



As long as the remote server is running libvirtd, a libvirt client can connect to it.



The simplest method is SSH: no configuration is needed beyond having SSH access to the server:

qemu+ssh://root@example.com/system

For example: qemu+ssh://root@172.16.0.12/system. Set up key-based SSH login to 172.16.0.12, otherwise every connection prompts for the SSH username and password.



TCP:

qemu+tcp://example.com/system

For example qemu+tcp://172.16.0.15/system. The server only needs a little configuration:

vim /etc/libvirt/libvirtd.conf:

listen_tls = 0           # disable TLS connections
listen_tcp = 1           # enable TCP connections
tcp_port = "16509"       # TCP port 16509
listen_addr = "0.0.0.0"
unix_sock_group = "libvirtd"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
auth_tcp = "none"        # no authentication on TCP
max_clients = 1024       # maximum total number of client connections
min_workers = 50         # initial number of worker threads when libvirtd starts
max_workers = 200        # maximum number of worker threads
max_requests = 1000      # maximum concurrent RPC calls; must be >= max_workers
max_client_requests = 200   # maximum concurrent requests per client
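A quick sanity check of those settings can be scripted. This sketch greps an inline copy of the three security-relevant lines; on a real host you would point it at /etc/libvirt/libvirtd.conf:

```shell
# Sketch: confirm TLS is off, TCP is on, and TCP auth is disabled.
conf='listen_tls = 0
listen_tcp = 1
auth_tcp = "none"'

ok=0
printf '%s\n' "$conf" | grep -Eq '^listen_tcp *= *1'     && ok=$((ok + 1))
printf '%s\n' "$conf" | grep -Eq '^listen_tls *= *0'     && ok=$((ok + 1))
printf '%s\n' "$conf" | grep -Eq '^auth_tcp *= *"none"'  && ok=$((ok + 1))
echo "checks passed: $ok/3"
```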



Also edit the libvirt-bin configuration:

vim /etc/default/libvirt-bin:

# Start libvirtd to handle qemu/kvm:
start_libvirtd="yes"

# options passed to libvirtd, add "-l" to listen on tcp
libvirtd_opts="-d -l --config /etc/libvirt/libvirtd.conf"



After making the above changes, run service libvirt-bin restart. netstat -anpt will then show libvirtd listening on TCP port 16509.



Below is a Python snippet that connects to libvirtd and shows how to compute CPU usage:

import time

import libvirt as _libvirt


class libvirt_client(object):
    def __init__(self, uri):
        self.ip = uri
        self.uri = 'qemu+tcp://%s/system' % uri
        self.connect()

    def connect(self):
        self.conn = _libvirt.open(self.uri)

    def check(self, uuid_string):
        time_sleep = 3
        dom = self.conn.lookupByUUIDString(uuid_string)
        infos_first = dom.info()
        start_cputime = infos_first[4]     # cumulative CPU time, in nanoseconds
        time.sleep(time_sleep)
        infos_second = dom.info()
        end_cputime = infos_second[4]
        cputime = end_cputime - start_cputime
        cores = infos_second[3]            # number of virtual CPUs
        cpu_usage = 100 * cputime / (time_sleep * cores * 1000000000)
        print(cpu_usage)


virt = libvirt_client('172.16.0.209')
virt.check('ef809edd-2168-4007-8319-3d2acbc49aff')
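The CPU-usage formula in check() can be sanity-checked by hand; the figures below are invented purely to exercise the arithmetic:

```shell
# cpu_usage = 100 * delta_cputime / (interval * cores * 1e9)
# dom.info()[4] is cumulative guest CPU time in nanoseconds.
start_ns=2000000000     # hypothetical cumulative CPU time at the first sample
end_ns=5000000000       # hypothetical value 3 seconds later
interval=3              # seconds slept between the two dom.info() calls
cores=2                 # dom.info()[3]
usage=$(( 100 * (end_ns - start_ns) / (interval * cores * 1000000000) ))
echo "${usage}%"        # 3e9 ns of CPU over 6e9 ns of wall*cores -> 50%
```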


And here is a function that lists the UUIDs of all running instances on a server:

#!/usr/bin/python
import sys

try:
    import libvirt as _libvirt
except (ImportError, ImportWarning) as e:
    print("Cannot find python-libvirt; on Ubuntu just run \"sudo apt-get install python-libvirt\".")
    print(e)
    sys.exit(1)


def list_uuids(host):
    dom_uuids = []
    uri = 'qemu+tcp://%s/system' % host
    try:
        conn = _libvirt.open(uri)
    except Exception as e:
        print('libvirt error: cannot connect to remote libvirtd')
        raise e
    domain_ids = conn.listDomainsID()
    for domain_id in domain_ids:
        dom = conn.lookupByID(domain_id)
        dom_uuids.append(dom.UUIDString())
    print(dom_uuids)


list_uuids('172.16.0.209')


The next post will cover connecting to libvirtd with SASL authentication.

__________________________________________________

SSH proxy (reverse tunnel) configuration

# vi /etc/ssh/sshd_config

# add the following line
GatewayPorts clientspecified

#!/bin/bash

ps -ef |grep NfR |grep -v grep |awk '{print $2}' |xargs kill
ssh -gNfR *:80:localhost:80 mgr@125.39.195.63
ps -ef |grep NfR
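For reference, the flags in that ssh line break down as follows; the host below is a placeholder, not the address from the note:

```shell
# -R *:80:localhost:80  reverse-forward remote port 80 (bound on all
#                       interfaces, which requires GatewayPorts above)
# -N  run no remote command     -f  go to background after auth
# -g  allow remote hosts to connect to locally forwarded ports
remote=mgr@example.com            # placeholder for the real jump host
cmd="ssh -gNfR *:80:localhost:80 $remote"
echo "$cmd"
```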



___________________________________________________


[root@ccms-002 network-scripts]# more ifcfg-eth0
DEVICE=eth0
HWADDR=00:13:3B:0C:25:9D
ONBOOT=yes
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none
USERTCL=no
BRIDGE=cloudbr0

[root@ccms-002 network-scripts]# more ifcfg-eth1
DEVICE=eth1
HWADDR=C8:1F:66:15:B6:11
ONBOOT=yes
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none
USERTCL=no
BRIDGE=cloudbr1

[root@ccms-002 network-scripts]# more ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.168.202
NETMASK=255.255.255.0
GATEWAY=192.168.168.254
DNS1=202.106.0.20
IPV6INIT=no

[root@ccms-002 network-scripts]# more ifcfg-cloudbr1
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
#IPADDR=192.168.168.202
#NETMASK=255.255.255.0
#GATEWAY=192.168.168.254
#DNS1=202.106.0.20
IPV6INIT=no




____________________________________________________

At the # prompt, run kudzu to have Linux detect newly added hardware and remove configuration for hardware that no longer exists.

____________________________________________________

qemu-kvm -drive if=none,id=cd,file=/mnt/iso/en_win_srv_2003_r2_enterprise_x64_with_sp1_vl_cd1.iso \
-device ide-cd,drive=cd,bootindex=0 \
-drive if=none,id=hd,file=Windows_Server_2003_Enterprise_Edition-000005.raw \
-device virtio-scsi-pci,id=scsi \
--enable-kvm \
-device scsi-hd,drive=hd


____________________________________________________

VMware was configured to split the disk into 2 GB chunks, so the image consists of many pieces.



# Merge the chunks

vmware-vdiskmanager -r Red\ Hat\ Linux\ \(2\).vmdk -t 0 VM.vmdk

# Convert from VMware to KVM

qemu-img convert -f vmdk VM.vmdk -O qcow2 VM.qcow2
____________________________________________________

# rm -f /etc/udev/rules.d/70-persistent-net.rules

# sed -i 's/HWADDR/\#HWADDR/g' /etc/sysconfig/network-scripts/ifcfg-eth0



_____________________________________________________

http://zfsonlinux.org/

$ sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release-1-3.el6.noarch.rpm
$ sudo yum install zfs



______________________________________________________

e7ed81c64fe62762

/usr/share/cloudstack-common/vms/systemvm.iso

______________________________________________________

[root@ccms-003 ~]# more /etc/yum.repos.d/cloudstack.repo
[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.2/
enabled=1
gpgcheck=0



254 yum install cloudstack-*
255 hostname --fqdn
256 hostname
257 ping ccms-001
258 yum -y install ntp
259 yum -y install mysql-server
260 vi /etc/my.cnf
261 service mysqld start
262 chkconfig mysqld on
263 mysql_secure_installation
264 mysql -uroot
265 mysql -uroot -pctvit
266 cloudstack-setup-databases cloud:cloud@localhost --deploy-as=root:ctvit -e file -m ctvit -k ctvit -i 192.168.168.201
267 more /etc/cloudstack/management/db.properties
268 cloudstack-setup-management
269 ps -ef |grep java
270 lsof -i :8080

______________________________________________________

[root@ccms-003 network-scripts]# more ifcfg-em1
DEVICE=em1
HWADDR=74:86:7A:EA:05:2C
ONBOOT=yes
HOTPLUG=no
TYPE=Ethernet
BOOTPROTO=none
[root@ccms-003 network-scripts]# more ifcfg-em1.100
DEVICE=em1.100
HWADDR=74:86:7A:EA:05:2C
ONBOOT=yes
HOTPLUG=no
TYPE=Ethernet
#VLAN=yes
IPADDR=192.168.168.203
NETMASK=255.255.255.0
GATEWAY=192.168.168.254
DNS1=202.106.0.20
[root@ccms-003 network-scripts]# more ifcfg-em1.200
DEVICE=em1.200
HWADDR=74:86:7A:EA:05:2C
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr0
[root@ccms-003 network-scripts]# more ifcfg-em1.300
DEVICE=em1.300
HWADDR=74:86:7A:EA:05:2C
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr1
[root@ccms-003 network-scripts]# more ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
[root@ccms-003 network-scripts]# more ifcfg-cloudbr1
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes


______________________________________________________

# grep vnc /etc/libvirt/qemu.conf
vnc_listen = "0.0.0.0"
_______________________________________________________

ntp.sjtu.edu.cn 202.120.2.101 (NTP server of the Shanghai Jiao Tong University network center)
________________________________________________________
# vncserver
# vi .vnc/xstartup
# /usr/bin/gnome-session &


/etc/sysconfig/network

NOZEROCONF=yes


________________________________________________________
## NIC bridging configuration

DEVICE=eth0
HWADDR=d4:be:d9:95:bc:3e
#NM_CONTROLLED=yes
BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPV6INIT=no
USERCTL=no


DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
NAME=br0
IPADDR=10.3.3.219
NETMASK=255.255.252.0
GATEWAY=10.3.3.254
ONBOOT=yes
DNS1=202.106.0.20
DNS2=8.8.8.8
IPV6INIT=no
USERCTL=no


<interface type='bridge'>
<mac address='00:16:36:4b:52:3a'/>
<source bridge='br0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/
>
</interface>


——————————————————————
## Create a virtual disk

# qemu-img create -f raw tomcat7.img 10G   # a size argument is required; 10G is an example
# qemu-img info tomcat7.img

——————————————————————
## Install a system from an ISO

# virt-install -r 1024 -n tomcat7 -v -f /opt/guestOS/centos6/tomcat7.img --cdrom=/opt/iso/centos6.iso --network bridge=br0 --vnc


———————————————————————

How to use snapshots in a KVM environment

Example 1: snapshots with qemu-img
Under KVM, qcow2 images support snapshots.
1. Confirm the image format
[root@nc1 boss]# qemu-img info test.qcow2
image: test.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.6G
cluster_size: 65536
2. Create snapshots of test.qcow2. Creating a snapshot does not produce a new image file; the image itself grows, so the snapshot belongs to the image.
[root@nc1 boss]# qemu-img snapshot -c snapshot01 test.qcow2
[root@nc1 boss]# qemu-img snapshot -c snapshot02 test.qcow2
                                   (snapshot name) (image name)

3. List all snapshots of an image
[root@nc1 boss]# qemu-img snapshot -l test.qcow2
Snapshot list:
ID  TAG         VM SIZE  DATE                 VM CLOCK
1   snapshot01  0        2011-09-07 15:39:25  00:00:00.000
2   snapshot02  0        2011-09-07 15:39:29  00:00:00.000

4. Apply a snapshot
[root@nc1 boss]# qemu-img snapshot -a snapshot01 test.qcow2

5. Delete a snapshot
[root@nc1 boss]# qemu-img snapshot -d snapshot01 test.qcow2

Notes:
'snapshot' is the name of the snapshot to create, apply or delete
'-a' applies a snapshot (reverts the disk to the saved state)
'-c' creates a snapshot
'-d' deletes a snapshot
'-l' lists all snapshots in the given image
Example 2: snapshots via libvirt
1. Again, confirm the image format is qcow2
[root@nc1 boss]# qemu-img info test.qcow2
image: test.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.1G
cluster_size: 65536

2. Create and start a VM backed by test.qcow2; assume it is named testsnp. A snapshot can also be created while the VM is not running, but that is of little use: its size will be 0.
Use a configuration file to create a snapshot of the given VM:
<domainsnapshot>
<name>snapshot02</name>                                        <!-- snapshot name -->
<description>Snapshot of OS install and updates</description>  <!-- description -->
<disks>
<disk name='/home/guodd/boss/test.qcow2'>                      <!-- absolute path of the VM image -->
</disk>
<disk name='vdb' snapshot='no'/>
</disks>
</domainsnapshot>
Save it as snp.xml and create the snapshot:
[root@nc1 boss]# virsh snapshot-create testsnp snp.xml    # create a snapshot of VM testsnp with snp.xml as the snapshot config
Domain snapshot snapshot02 created from 'snp.xml'
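The snp.xml above can also be generated from the shell; a minimal sketch, reusing the names from this example:

```shell
# Sketch: emit the snapshot definition used above into snp.xml.
snap_name=snapshot02
disk_path=/home/guodd/boss/test.qcow2
cat > snp.xml <<EOF
<domainsnapshot>
  <name>${snap_name}</name>
  <description>Snapshot of OS install and updates</description>
  <disks>
    <disk name='${disk_path}'/>
    <disk name='vdb' snapshot='no'/>
  </disks>
</domainsnapshot>
EOF
echo "wrote snp.xml"
```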

3. List the existing snapshots of testsnp
[root@nc1 boss]# virsh snapshot-list testsnp
Name                 Creation Time              State
---------------------------------------------------
1315385065           2011-09-07 16:44:25 +0800  running    # 1315385065 was created earlier than snapshot02
snapshot02           2011-09-07 17:32:38 +0800  running
The snapshots can also be seen with qemu-img:
[root@nc1 boss]# qemu-img info test.qcow2
image: test.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 1.2G
cluster_size: 65536
Snapshot list:
ID  TAG         VM SIZE  DATE                 VM CLOCK
1   1315385065  149M     2011-09-07 16:44:25  00:00:48.575
2   snapshot02  149M     2011-09-07 17:32:38  00:48:01.341

4. snapshot-dumpxml shows the detailed configuration of a given snapshot
[root@nc1 boss]# virsh snapshot-dumpxml testsnp 1315385065
<domainsnapshot>
<name>1315385065</name>
<description>Snapshot of OS install and updates</description>
<state>running</state>   <!-- VM state: a snapshot taken while the VM is shut off shows "shutoff"; one taken while it runs stays "running" even after the VM shuts down -->
<creationTime>1315385065</creationTime>   <!-- creation time (read-only); when no name is given, the snapshot is named after this timestamp -->
<domain>
<uuid>afbe5fb7-5533-d154-09b6-33c869a05adf</uuid>   <!-- UUID of the VM this snapshot belongs to -->
</domain>
</domainsnapshot>
Look at the second snapshot:
[root@nc1 boss]# virsh snapshot-dumpxml testsnp snapshot02
<domainsnapshot>
<name>snapshot02</name>
<description>Snapshot of OS install and updates</description>
<state>running</state>
<parent>
<name>1315385065</name>   <!-- the current snapshot has the previous snapshot as its parent -->
</parent>
<creationTime>1315387958</creationTime>
<domain>
<uuid>afbe5fb7-5533-d154-09b6-33c869a05adf</uuid>
</domain>
</domainsnapshot>

5. View the most recent snapshot
[root@nc1 boss]# virsh snapshot-current testsnp
<domainsnapshot>
<name>1315385065</name>
<description>Snapshot of OS install and updates</description>
<state>running</state>
<creationTime>1315385065</creationTime>
<domain>
<uuid>afbe5fb7-5533-d154-09b6-33c869a05adf</uuid>
</domain>
</domainsnapshot>

6. Revert the VM to a chosen snapshot
[root@nc1 boss]# virsh snapshot-revert testsnp snapshot02

7. Delete a given snapshot
[root@nc1 boss]# virsh snapshot-delete testsnp snapshot02
Domain snapshot snapshot02 deleted

附:
Snapshot (help keyword 'snapshot')
snapshot-create Create a snapshot from XML
snapshot-create-as Create a snapshot from a set of args
snapshot-current Get the current snapshot
snapshot-delete Delete a domain snapshot
snapshot-dumpxml Dump XML for a domain snapshot
snapshot-list List snapshots for a domain
snapshot-revert Revert a domain to a snapshot

More details: http://libvirt.org/formatsnapshot.html#SnapshotAttributes

_______________________________________________________________________________________

qemu-img convert -c -f raw -O qcow2 c2f2a458-226d-480f-8234-365d7473bb95 ~/centos5.4.img


________________________________________________________________________________________

1. Create a raw file
./qemu-img create -f raw squeeze.raw 5G
2. Partition it
sfdisk squeeze.raw
3. Attach the partitions
losetup /dev/loop0 squeeze.raw
kpartx -a /dev/loop0
4. Format the partition
mkfs.ext3 /dev/mapper/loop0pN   # N = partition number created by kpartx
5. Convert to qcow2
./qemu-img convert -f raw squeeze.raw -O qcow2 squeeze.qcow2

_______________________________________________________________________________________

IPADDR=125.39.195.63
GATEWAY=125.39.195.1
NETMASK=255.255.255.128

IPADDR=125.39.194.46
NETMASK=255.255.255.192
GATEWAY=125.39.194.1
DNS1=202.99.96.68
DNS2=8.8.8.8
________________________________________________________________________________________

Harbor
Documentation index