OpenStack Installation Guide (Juno) - Adding the Object Storage Service (Swift) - Installation and Configuration
Install and configure on the controller node
Create the Swift service credentials and API endpoint
- Create the service credentials:
Source the admin credentials: $ source admin-openrc.sh
Create the swift user:
<pre>$ keystone user-create --name swift --pass SWIFT_PASS
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
| enabled  |               True               |
|    id    | dcf5d53f027b44d38c205ad06717812c |
|   name   |              swift               |
| username |              swift               |
+----------+----------------------------------+</pre>
Replace SWIFT_PASS with a suitable password.
Add the admin role to the swift user: $ keystone user-role-add --user swift --tenant service --role admin
This command produces no output.
Create the swift service entity:
<pre>$ keystone service-create --name swift --type object-store \
  --description "OpenStack Object Storage"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Object Storage     |
|   enabled   |               True               |
|      id     | 11519978722e4fb4be75f086aca49334 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+</pre>
- Create the Object Storage service API endpoint:
<pre>$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ object-store / {print $2}') \
  --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl http://controller:8080 \
  --region regionOne
+-------------+----------------------------------------------+
|   Property  |                    Value                     |
+-------------+----------------------------------------------+
|   adminurl  |            http://controller:8080            |
|      id     |       ec003b88a6144afda3fc2b34acb93ded       |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
|  publicurl  | http://controller:8080/v1/AUTH_%(tenant_id)s |
|    region   |                  regionOne                   |
|  service_id |       11519978722e4fb4be75f086aca49334       |
+-------------+----------------------------------------------+</pre>
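The `--service-id` argument is filled in by piping `keystone service-list` through awk, which prints the second whitespace-separated field (the id column) of the row mentioning object-store. A minimal sketch of that extraction, run here against a sample output row rather than a live Keystone (the id matches the service created above):

```shell
# Sample row mimicking one line of `keystone service-list` output;
# on a real controller the pipe runs against the live command instead.
row='| 11519978722e4fb4be75f086aca49334 | swift | object-store | OpenStack Object Storage |'
echo "$row" | awk '/ object-store / {print $2}'
# prints: 11519978722e4fb4be75f086aca49334
```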
Install and configure the components
- Install the required packages:
# apt-get install swift swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
- Create the /etc/swift directory:
# mkdir -p /etc/swift
- Obtain the proxy service configuration file from the Object Storage source repository:
# curl -o /etc/swift/proxy-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample
- Edit the /etc/swift/proxy-server.conf file (# vi /etc/swift/proxy-server.conf):
In the [DEFAULT] section, set the bind port, user, and configuration directory:
<pre>[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift</pre>
In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server</pre>
In the [app:proxy-server] section, enable account management:
<pre>[app:proxy-server]
...
allow_account_management = true
account_autocreate = true</pre>
In the [filter:keystoneauth] section, configure the operator roles:
<pre>[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,_member_</pre>
In the [filter:authtoken] section, configure Identity service access:
<pre>[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS
delay_auth_decision = true</pre>
Replace SWIFT_PASS with the password chosen when creating the swift user. Comment out the auth_host, auth_port, and auth_protocol options, since the identity_uri option supersedes them.
In the [filter:cache] section, configure the memcached location:
<pre>[filter:cache]
...
memcache_servers = 127.0.0.1:11211</pre>
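The order of the entries in the pipeline matters: authtoken must authenticate a request before keystoneauth maps its roles. A small sketch checking that ordering against an inline copy of the pipeline value configured above (on the node you would grep the value out of /etc/swift/proxy-server.conf instead):

```shell
# Inline copy of the pipeline configured above; position is significant.
pipeline='authtoken cache healthcheck keystoneauth proxy-logging proxy-server'
echo "$pipeline" | awk '{for (i = 1; i <= NF; i++) { if ($i == "authtoken") a = i; if ($i == "keystoneauth") k = i } if (a < k) print "order ok"; else print "order wrong"}'
# prints: order ok
```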
Install and configure on the object storage nodes
This setup requires two object storage nodes, each containing two empty local block storage devices. Each of the devices, /dev/sdb and /dev/sdc, must contain a suitable partition table with one partition occupying the entire device.
Configure storage
Add two disks to each object node: Settings -> Storage -> Controller: SATA -> Add Hard Disk.
Basic environment configuration for the object nodes
Create two storage nodes, object1 and object2, from the virtual machine template described earlier, and configure their basic environments as follows:
Configure the network
Object node VM network settings (Settings -> Network):
- Adapter 1: Attached to -> Host-only Adapter, Name -> VirtualBox Host-Only Ethernet Adapter #2, Adapter Type -> Paravirtualized Network (virtio-net), Promiscuous Mode -> Allow All, Cable Connected -> checked;
- Adapter 2: Attached to -> NAT, Adapter Type -> Paravirtualized Network (virtio-net), Cable Connected -> checked.
After starting each virtual machine, configure its network by editing the /etc/network/interfaces file (# vi /etc/network/interfaces) and adding the following:
object1:
<pre># The management network interface
auto eth0
iface eth0 inet static
address 10.10.10.14
netmask 255.255.255.0

# The NAT network
auto eth1
iface eth1 inet dhcp</pre>
object2:
<pre># The management network interface
auto eth0
iface eth0 inet static
address 10.10.10.15
netmask 255.255.255.0

# The NAT network
auto eth1
iface eth1 inet dhcp</pre>
Configure name resolution: edit the /etc/hostname file (# vi /etc/hostname) and set the hostname to object1 or object2 respectively, then edit the /etc/hosts file (# vi /etc/hosts) and add the following:
<pre>10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2</pre>
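With these entries in place, every node can resolve the others' management addresses by name. A small sketch of the lookup the resolver performs, run against an inline copy of the hosts entries above (on a node, `getent hosts object1` does the real check):

```shell
hosts='10.10.10.10 controller
10.10.10.11 compute
10.10.10.12 network
10.10.10.13 block
10.10.10.14 object1
10.10.10.15 object2'
# Print the address mapped to a given hostname, as the resolver would.
printf '%s\n' "$hosts" | awk '$2 == "object1" { print $1 }'
# prints: 10.10.10.14
```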
Configure NTP
Edit the configuration file (# vi /etc/ntp.conf) and add the following:
<pre>server controller iburst</pre>
Comment out all other server entries. If the file /var/lib/ntp/ntp.conf.dhcp exists, delete it.
Restart the NTP service: # service ntp restart
Configure storage
- Install the supporting packages:
# apt-get install xfsprogs rsync
- Format the /dev/sdb and /dev/sdc partitions as XFS:
<pre># mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0</pre>
<pre># mkfs.xfs /dev/sdc
meta-data=/dev/sdc               isize=256    agcount=4, agsize=524288 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0</pre>
- Create the mount point directory structure:
# mkdir -p /srv/node/sdb
# mkdir -p /srv/node/sdc
- Edit the /etc/fstab file (# vi /etc/fstab) and add the following:
<pre>/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2</pre>
- Mount the devices:
# mount /srv/node/sdb
# mount /srv/node/sdc
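`mount` can be given just the mount point here because it looks the device, filesystem type, and options up in /etc/fstab. A small sketch splitting one of the entries above into its six fields (inline copy; the fourth field carries the performance-oriented mount options):

```shell
# The six fstab fields: device, mount point, fstype, options, dump, fsck pass.
entry='/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2'
echo "$entry" | awk '{ printf "device=%s mountpoint=%s type=%s options=%s\n", $1, $2, $3, $4 }'
# prints: device=/dev/sdb mountpoint=/srv/node/sdb type=xfs options=noatime,nodiratime,nobarrier,logbufs=8
```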
- Create the /etc/rsyncd.conf file (# vi /etc/rsyncd.conf) with the following contents:
<pre>uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the object node's IP address on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.
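Each section in rsyncd.conf defines an rsync "module"; the account, container, and object replicators later address the node as rsync://NODE_IP/MODULE. A sketch of the resulting URLs for object1 (10.10.10.14 is its management address from above):

```shell
# Build the module URLs the replicators would use for object1.
node=10.10.10.14
for module in account container object; do
  echo "rsync://$node/$module"
done
# prints three URLs, one per module
```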
- Edit the /etc/default/rsync file (# vi /etc/default/rsync) to enable the rsync service:
<pre>RSYNC_ENABLE=true</pre>
- Start the rsync service:
# service rsync start
Install and configure the storage node components
- Install the required packages:
# apt-get install swift swift-account swift-container swift-object
- Obtain the account, container, and object service configuration files from the Object Storage source repository:
# curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample
- Edit the /etc/swift/account-server.conf file (# vi /etc/swift/account-server.conf):
In the [DEFAULT] section, set the bind IP address, bind port, user, configuration directory, and mount point directory:
<pre>[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the object node's IP address on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.
In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon account-server</pre>
In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift</pre>
- Edit the /etc/swift/container-server.conf file (# vi /etc/swift/container-server.conf):
In the [DEFAULT] section, set the bind IP address, bind port, user, configuration directory, and mount point directory:
<pre>[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the object node's IP address on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.
In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon container-server</pre>
In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift</pre>
- Edit the /etc/swift/object-server.conf file (# vi /etc/swift/object-server.conf):
In the [DEFAULT] section, set the bind IP address, bind port, user, configuration directory, and mount point directory:
<pre>[DEFAULT]
...
bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node</pre>
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the object node's IP address on the management network: 10.10.10.14 for object1 and 10.10.10.15 for object2.
In the [pipeline:main] section, enable the appropriate modules:
<pre>[pipeline:main]
pipeline = healthcheck recon object-server</pre>
In the [filter:recon] section, configure the recon (metrics) cache directory:
<pre>[filter:recon]
...
recon_cache_path = /var/cache/swift</pre>
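Note that the three storage services each bind to their own port: 6002 (account), 6001 (container), and 6000 (object). A quick sketch confirming the values configured above do not collide (inline copies; on a node you would grep bind_port out of the three files instead):

```shell
# bind_port values from account-server.conf, container-server.conf, and
# object-server.conf as configured above.
ports='6002
6001
6000'
printf '%s\n' "$ports" | sort | uniq -d
# prints nothing: no duplicate ports
```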
- Ensure the mount point directory structure has the correct ownership:
# chown -R swift:swift /srv/node
- Create the recon directory and ensure it has the correct ownership:
# mkdir -p /var/cache/swift
# chown -R swift:swift /var/cache/swift