Building a custom CentOS 6.6 Nova image

Preparation

  1. The KVM virtualization tool set (a graphical front end is a plus)
  2. A CentOS installation ISO

Notes

    Read more

    OpenStack Icehouse Deployment (Part 10)

    Configuring the Ceilometer server (controller) service

    The Ceilometer project was originally created to provide a framework for collecting data for a billing system. The community has since pushed Ceilometer to become OpenStack's common data-collection infrastructure (monitoring data, billing data); the collected data is made available to the monitoring, billing, dashboard, and other projects.

    Read more

    OpenStack Icehouse Deployment (Part 9)

    Configuring live migration of virtual machines

    My environment already has two compute nodes deployed, both using Ceph RBD as the backend storage.
    With two compute nodes, instances can be live-migrated between them. Note that the destination host must have enough free resources, and the libvirtd services on the nodes must be able to reach each other without a password.
    On both compute nodes, edit /etc/libvirt/libvirtd.conf and add the following:

    listen_tls = 0
    listen_tcp = 1
    tcp_port = "16509"
    listen_addr = "0.0.0.0"
    auth_tcp = "none"
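    On CentOS 6 these settings alone are not enough: libvirtd must also be started with the --listen flag, which the init script reads from /etc/sysconfig/libvirtd. A minimal sketch of that step plus a connectivity check follows; the hostname compute02 stands in for whichever peer node you are testing against.

    # Have the init script pass --listen to libvirtd (CentOS 6)
    echo 'LIBVIRTD_ARGS="--listen"' >> /etc/sysconfig/libvirtd
    service libvirtd restart
    # From the other compute node, confirm the unauthenticated TCP connection works
    virsh -c qemu+tcp://compute02/system list --all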

    Read more

    OpenStack Icehouse Deployment (Part 8)

    Verification and debugging

    On the controller node, check the status of the services; a smiley face means the service is healthy:
    nova-manage service list

    Read more

    OpenStack Icehouse Deployment (Part 7)

    Deploying and configuring the Neutron networking components

    Configuring Neutron networking (compute node)

    On the compute node, relax the kernel's reverse-path filtering so Neutron traffic is not dropped.
    vi /etc/sysctl.conf
    net.ipv4.conf.all.rp_filter=0
    net.ipv4.conf.default.rp_filter=0
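    Reload the file so the new values take effect without a reboot:
    sysctl -p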

    Read more

    OpenStack Icehouse Deployment (Part 6)

    Deploying and configuring the Neutron networking components

    Configuring the Neutron controller node

    On the controller:
    Create the neutron user, role, service, and endpoint:
    keystone user-create --name neutron --pass NEUTRON_PASS --email neutron@example.com
    keystone user-role-add --user neutron --tenant service --role admin
    keystone service-create --name neutron --type network --description "OpenStack Networking"
    keystone endpoint-create \
    --service-id $(keystone service-list | awk '/ network / {print $2}') \
    --publicurl http://controller:9696 \
    --adminurl http://controller:9696 \
    --internalurl http://controller:9696
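    To confirm the service and endpoint were registered (this assumes the admin credentials are already exported in the shell environment):
    keystone service-list
    keystone endpoint-list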

    Install the neutron-server components:
    yum install openstack-neutron openstack-neutron-ml2 python-neutronclient -y

    Back up the original configuration file:
    mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

    Create a new Neutron configuration file:
    vi /etc/neutron/neutron.conf
    [DEFAULT]
    auth_strategy = keystone
    rpc_backend = neutron.openstack.common.rpc.impl_qpid
    qpid_hostname = controller
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://controller:8774/v2
    nova_admin_username = nova
    nova_admin_password = NOVA_PASS
    nova_admin_auth_url = http://controller:35357/v2.0
    core_plugin = ml2
    service_plugins = router
    verbose = True
    [quotas]
    [agent]
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_host = controller
    auth_protocol = http
    auth_port = 35357
    admin_tenant_name = service
    admin_user = neutron
    admin_password = NEUTRON_PASS
    [database]
    connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
    [service_providers]
    service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

    Fix the ownership of the configuration file:
    chown -R root:neutron /etc/neutron/neutron.conf

    Write the service tenant's ID into the configuration file (Neutron needs it as nova_admin_tenant_id in order to notify Nova):
    uuid=$(keystone tenant-list | awk '/ service / { print $2 }')
    # Insert nova_admin_tenant_id right below the nova_admin_auth_url line
    sed -i "/^nova_admin_auth_url/a nova_admin_tenant_id = $uuid" /etc/neutron/neutron.conf

    Back up the ML2 configuration file:
    mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

    Create the ML2 configuration file:
    vi /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    [ml2_type_vlan]
    [ml2_type_gre]
    tunnel_id_ranges = 1:1000
    [ml2_type_vxlan]
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True

    Fix the ownership of the ML2 configuration file:
    chown -R root:neutron /etc/neutron/plugins/ml2/ml2_conf.ini

    Create a symlink pointing to the ML2 configuration (Neutron's init scripts expect /etc/neutron/plugin.ini):
    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

    Restart the Nova services so they pick up the Neutron settings, then start neutron-server:
    service openstack-nova-api restart
    service openstack-nova-scheduler restart
    service openstack-nova-conductor restart
    service neutron-server start
    chkconfig neutron-server on
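    Once neutron-server is running, a quick sanity check is to list the loaded API extensions (again assuming the admin credentials are exported):
    neutron ext-list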

    Read more

    OpenStack Icehouse Deployment (Part 5)

    Deploying and configuring the compute node

    Installing nova-compute

    Install the related nova-compute packages:
    yum install openstack-nova-compute MySQL-python -y

    Install the ceph-fuse client so that the node can access the CephFS filesystem:
    yum install ceph ceph-fuse -y

    Grant the compute node access to the MDS.
    On ceph-node01, fetch the key and write it into the compute node's /etc/ceph directory:
    ceph auth get-or-create client.fuse | ssh compute01 tee /etc/ceph/ceph.client.fuse.keyring

    Sync the Ceph configuration file:
    scp -r root@ceph-node01:/etc/ceph/ceph.conf root@compute01:/etc/ceph/

    Mount CephFS onto the Nova instances directory with ceph-fuse; the id and keyring options must be supplied:
    ceph-fuse -m ceph-node01:6789 /var/lib/nova/instances --id fuse --keyring=/etc/ceph/ceph.client.fuse.keyring
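    Confirm the mount before continuing:
    df -h /var/lib/nova/instances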

    Append the mount command to the startup script so the filesystem is remounted automatically at boot:
    echo "ceph-fuse -m ceph-node01:6789 /var/lib/nova/instances --id fuse --keyring=/etc/ceph/ceph.client.fuse.keyring" >> /etc/rc.local

    Change the ownership of the instances directory to the nova user:
    chown -R nova:nova /var/lib/nova/instances

    Start the libvirt management service:
    service libvirtd start
    chkconfig libvirtd on

    Read more

    OpenStack Icehouse Deployment (Part 4)

    Configuring Cinder block storage

    Install the Cinder packages:
    yum install openstack-cinder -y

    Back up the Cinder configuration file:
    mv /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak

    Create a new Cinder configuration file:
    vi /etc/cinder/cinder.conf
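    The full file is behind the cut. As a rough sketch of the kind of settings used elsewhere in this series (qpid on the controller, the Ceph RBD pool from Part 2 as the volume backend), it would look roughly like the block below; the CINDER_DBPASS/CINDER_PASS placeholders and the rbd pool/user names are assumptions, not values from the original post.

    [DEFAULT]
    rpc_backend = qpid
    qpid_hostname = controller
    auth_strategy = keystone
    # Ceph RBD as the volume backend, using the volumes pool and client.volumes user
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    glance_api_version = 2
    [database]
    connection = mysql://cinder:CINDER_DBPASS@controller/cinder
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_host = controller
    auth_protocol = http
    auth_port = 35357
    admin_user = cinder
    admin_tenant_name = service
    admin_password = CINDER_PASS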

    Read more

    OpenStack Icehouse Deployment (Part 3)

    Configuring the Nova compute service (controller)

    Install the services:
    yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
    openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
    python-novaclient -y

    Sync the Ceph configuration file:
    scp -r root@ceph-node01:/etc/ceph/ceph.conf root@controller:/etc/ceph/

    Back up the Nova configuration file:
    mv /etc/nova/nova.conf /etc/nova/nova.conf.bak

    Create a new nova.conf:
    vi /etc/nova/nova.conf
    [DEFAULT]
    rpc_backend = qpid
    qpid_hostname = controller
    my_ip = 10.0.0.11
    vncserver_listen = 10.0.0.11
    vncserver_proxyclient_address = 10.0.0.11
    auth_strategy = keystone
    libvirt_images_type=rbd
    libvirt_images_rbd_pool=volumes
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=volumes
    network_api_class = nova.network.neutronv2.api.API
    neutron_url = http://controller:9696
    neutron_auth_strategy = keystone
    neutron_admin_tenant_name = service
    neutron_admin_username = neutron
    neutron_admin_password = NEUTRON_PASS
    neutron_admin_auth_url = http://controller:35357/v2.0
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    security_group_api = neutron
    service_neutron_metadata_proxy = true
    neutron_metadata_proxy_shared_secret = neutron
    [baremetal]
    [cells]
    [conductor]
    [database]
    connection = mysql://nova:NOVA_DBPASS@controller/nova
    [hyperv]
    [image_file_url]
    [keymgr]
    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_host = controller
    auth_protocol = http
    auth_port = 35357
    admin_user = nova
    admin_tenant_name = service
    admin_password = NOVA_PASS
    [libvirt]
    virt_type=kvm
    [matchmaker_ring]
    [metrics]
    [osapi_v3]
    [rdp]
    [spice]
    [ssl]
    [trusted_computing]
    [upgrade_levels]
    [vmware]
    [xenserver]
    [zookeeper]

    Fix the ownership of the configuration file:
    chown -R root:nova /etc/nova/nova.conf

    Create the Nova database tables:
    su -s /bin/sh -c "nova-manage db sync" nova
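    You can verify the tables were created (NOVA_DBPASS follows the same placeholder convention as nova.conf):
    mysql -u nova -pNOVA_DBPASS -h controller -e 'SHOW TABLES;' nova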

    Create the Nova admin user, role, service, and endpoint:
    keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
    keystone user-role-add --user=nova --tenant=service --role=admin
    keystone service-create --name=nova --type=compute \
    --description="OpenStack Compute"
    keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
    --publicurl=http://controller:8774/v2/%\(tenant_id\)s \
    --internalurl=http://controller:8774/v2/%\(tenant_id\)s \
    --adminurl=http://controller:8774/v2/%\(tenant_id\)s

    Start the Nova services and enable them at boot:
    service openstack-nova-api start
    service openstack-nova-cert start
    service openstack-nova-consoleauth start
    service openstack-nova-scheduler start
    service openstack-nova-conductor start
    service openstack-nova-novncproxy start
    chkconfig openstack-nova-api on
    chkconfig openstack-nova-cert on
    chkconfig openstack-nova-consoleauth on
    chkconfig openstack-nova-scheduler on
    chkconfig openstack-nova-conductor on
    chkconfig openstack-nova-novncproxy on

    Read more

    OpenStack Icehouse Deployment (Part 2)

    With a three-node Ceph cluster deployed via ceph-deploy, the first step is to prepare it for integration with OpenStack.

    Preparing the Ceph integration

    On ceph-node01:
    Create the volume pool and the image pool, which will hold Cinder volumes, instance volumes, and images:
    rados mkpool volumes
    rados mkpool images

    Raise the replication level of both pools to two copies:
    ceph osd pool set volumes size 2
    ceph osd pool set images size 2
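    To confirm the replication level took effect:
    ceph osd pool get volumes size
    ceph osd pool get images size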

    Create the cephx authentication keys:
    ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
    ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.fuse mon 'allow r' mds 'allow' osd 'allow *'
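    The newly created clients and their capabilities can be inspected with:
    ceph auth list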

    Read more