Cloud Storage Course - Exercise 5
Exercise 5: Configuring CES and the Spectrum Scale Object Service
Installing GPFS on the protocol nodes and adding the nodes to the GPFS cluster
Downloading the software and the utility files
On the two nodes gpfs-obj1-# and gpfs-obj2-#, download via wget from the server http://group-www.recas.ba.infn.it the following files:
- Spectrum_Scale_Protocols_Standard-4.2.0.4-x86_64-Linux-install
- gpfs.sh (to be placed in /etc/profile.d/ and sourced)
- gpfs.repo (to be placed in /etc/yum.repos.d)
[root@gpfs-obj1-0 ~]# wget http://group-www.recas.ba.infn.it/gpfs/Spectrum_Scale_Protocols_Standard-4.2.0.4-x86_64-Linux-install
--2016-10-03 06:46:52--  http://group-www.recas.ba.infn.it/gpfs/Spectrum_Scale_Protocols_Standard-4.2.0.4-x86_64-Linux-install
...
2016-10-03 06:47:20 (34.4 MB/s) - ‘Spectrum_Scale_Protocols_Standard-4.2.0.4-x86_64-Linux-install’ saved [1034305711/1034305711]

[root@gpfs-obj1-0 ~]# wget -O /etc/profile.d/gpfs.sh http://group-www.recas.ba.infn.it/gpfs/gpfs.sh
--2016-10-03 07:06:57--  http://group-www.recas.ba.infn.it/gpfs/gpfs.sh
...
2016-10-03 07:06:57 (5.89 MB/s) - ‘/etc/profile.d/gpfs.sh’ saved [38/38]

[root@gpfs-obj1-0 ~]# . /etc/profile.d/gpfs.sh

[root@gpfs-obj1-0 ~]# wget -O /etc/yum.repos.d/gpfs.repo http://group-www.recas.ba.infn.it/gpfs/gpfs.repo
--2016-10-03 07:08:36--  http://group-www.recas.ba.infn.it/gpfs/gpfs.repo
...
2016-10-03 07:08:36 (32.6 MB/s) - ‘/etc/yum.repos.d/gpfs.repo’ saved [377/377]

Repeat the same operations on node gpfs-obj2-#.
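For reference, the gpfs.sh profile script normally does nothing more than put the GPFS administration commands on the PATH; a minimal sketch of its likely content (assumption: the authoritative version is the one downloaded from the course server above) is:

# /etc/profile.d/gpfs.sh - assumed minimal content: add the GPFS
# administration commands (mm*) to the PATH.
export PATH=$PATH:/usr/lpp/mmfs/bin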
Installing the GPFS base and protocol packages and building the GPL layer
- Unpack the GPFS distribution (use the --text-only --silent options)
- Install the packages gpfs.base gpfs.docs gpfs.ext gpfs.gpl gpfs.gskit gpfs.msg.en_US
  gpfs.protocols-support nfs-ganesha-gpfs nfs-ganesha-utils gpfs.smb spectrum-scale-object
- Build the Portability Layer (command: mmbuildgpl)
Note: during the installation use "yum --disablerepo=epel", because of a conflict between the nfs-ganesha packages in EPEL and those shipped in the distribution kit
Note 2: the build requires the environment variable LINUX_DISTRIBUTION=REDHAT_AS_LINUX to be defined
[root@gpfs-obj1-0 ~]# chmod +x Spectrum_Scale_Protocols_Standard-4.2.0.4-x86_64-Linux-install
[root@gpfs-obj1-0 ~]# ./Spectrum_Scale_Protocols_Standard-4.2.0.4-x86_64-Linux-install --text-only --silent
Extracting License Acceptance Process Tool to /usr/lpp/mmfs/4.2.0.4 ...
...

[root@gpfs-obj1-0 ~]# yum --disablerepo=epel install gpfs.base gpfs.docs gpfs.ext gpfs.gpl gpfs.gskit gpfs.msg.en_US \
> gpfs.protocols-support nfs-ganesha-gpfs nfs-ganesha-utils
Loaded plugins: fastestmirror
...
Complete!

[root@gpfs-obj1-0 ~]# export LINUX_DISTRIBUTION=REDHAT_AS_LINUX
[root@gpfs-obj1-0 ~]# mmbuildgpl
--------------------------------------------------------
mmbuildgpl: Building GPL module begins at Mon Oct 3 07:25:44 UTC 2016.
...
mmbuildgpl: Building GPL module completed successfully at Mon Oct 3 07:26:06 UTC 2016.
--------------------------------------------------------
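Note that the transcript above installs only the base and NFS-related packages; the remaining packages from the list (gpfs.smb and spectrum-scale-object) are installed the same way. A minimal sketch, assuming the same repositories and with EPEL still disabled to avoid the nfs-ganesha conflict:

# Assumption: same yum invocation as above for the SMB and object packages.
yum --disablerepo=epel install gpfs.smb spectrum-scale-object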
Adding the protocol nodes to the GPFS cluster
- Add the two nodes gpfs-obj1-# and gpfs-obj2-# to the cluster as non-quorum client nodes, and configure a server-type license for them (mmaddnode, mmchlicense).
Run the commands on a node of the GPFS cluster.
- Start GPFS on the two nodes (mmstartup)
- Verify that the file system is accessible
[root@gpfs-srv-0 gpfs]# mmaddnode -N gpfs-obj1-0:nonquorum-client:
Mon Oct 3 08:01:06 UTC 2016: mmaddnode: Processing node gpfs-obj1-0.recas.ba.infn.it
mmaddnode: Command successfully completed
mmaddnode: Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.
mmaddnode: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

[root@gpfs-srv-0 gpfs]# mmaddnode -N gpfs-obj2-0:nonquorum-client:
===============================================================================
| Warning:                                                                    |
|   This cluster contains nodes that do not have a proper GPFS license       |
|   designation. This violates the terms of the GPFS licensing agreement.    |
|   Use the mmchlicense command and assign the appropriate GPFS licenses     |
|   to each of the nodes in the cluster. For more information about GPFS     |
|   license designation, see the Concepts, Planning, and Installation Guide. |
===============================================================================
Mon Oct 3 08:01:19 UTC 2016: mmaddnode: Processing node gpfs-obj2-0.recas.ba.infn.it
mmaddnode: Command successfully completed
mmaddnode: Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.
mmaddnode: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

[root@gpfs-srv-0 gpfs]# mmchlicense server --accept -N gpfs-obj1-0,gpfs-obj2-0
The following nodes will be designated as possessing server licenses:
        gpfs-obj1-0.recas.ba.infn.it
        gpfs-obj2-0.recas.ba.infn.it
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

[root@gpfs-srv-0 gpfs]# mmstartup -N gpfs-obj1-0,gpfs-obj2-0
Mon Oct 3 08:02:33 UTC 2016: mmstartup: Starting GPFS ...
[root@gpfs-srv-0 gpfs]# mmgetstate -a

 Node number  Node name        GPFS state
------------------------------------------
       1      gpfs-srv-0       active
       2      stack-group-00   active
       3      gpfs-obj1-0      arbitrating
       4      gpfs-obj2-0      arbitrating

[root@gpfs-srv-0 gpfs]# mmgetstate -a

 Node number  Node name        GPFS state
------------------------------------------
       1      gpfs-srv-0       active
       2      stack-group-00   active
       3      gpfs-obj1-0      active
       4      gpfs-obj2-0      active
On the two protocol nodes, verify that the file system is accessible
[root@gpfs-obj1-0 ~]# ls /gpfs
automountdir  openstack

[root@gpfs-obj2-0 4.2.0.4]# ls /gpfs
automountdir  openstack
Configuring the CES (Cluster Export Services) cluster
Create the directory /gpfs/ces_shared_root and configure it as the cesSharedRoot of the cluster (mmchconfig, mmlscluster)
[root@gpfs-srv-0 ~]# mkdir /gpfs/ces_shared_root
[root@gpfs-srv-0 ~]# mmchconfig cesSharedRoot=/gpfs/ces_shared_root
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

[root@gpfs-srv-0 ~]# mmlscluster --ces

GPFS cluster information
========================
  GPFS cluster name:  gpfs-srv-0.recas.ba.infn.it
  GPFS cluster id:    17561377055771261101

mmlscluster: There are no Cluster Export Services nodes defined.
Configuring the CES nodes
Configure the nodes gpfs-obj1-# and gpfs-obj2-# as CES nodes (mmchnode, mmlscluster)
[root@gpfs-srv-0 ~]# mmchnode --ces-enable -N gpfs-obj1-0,gpfs-obj2-0
Mon Oct 3 09:47:50 UTC 2016: mmchnode: Processing node gpfs-obj1-0.recas.ba.infn.it
Mon Oct 3 09:48:05 UTC 2016: mmchnode: Processing node gpfs-obj2-0.recas.ba.infn.it
mmchnode: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

[root@gpfs-srv-0 ~]# mmlscluster --ces

GPFS cluster information
========================
  GPFS cluster name:  gpfs-srv-0.recas.ba.infn.it
  GPFS cluster id:    17561377055771261101

Cluster Export Services global parameters
-----------------------------------------
  Shared root directory:        /gpfs/ces_shared_root
  Enabled Services:             None
  Log level:                    0
  Address distribution policy:  even-coverage

 Node  Daemon node name              IP address    CES IP address list
-----------------------------------------------------------------------
   3   gpfs-obj1-0.recas.ba.infn.it  172.20.0.99   Node starting up
   4   gpfs-obj2-0.recas.ba.infn.it  172.20.0.108  None
Configuring the CES IP address pool
For each group, two IP addresses to be assigned to the pool are defined, gpfs-ces1-# and gpfs-ces2-#, together with a DNS name, gpfs-ces-#, that resolves to the two IP addresses in round robin.
- identify the IPs and configure the two addresses as CES addresses (mmces; see the sketch after this list)
- verify that the addresses are active on the two nodes
- suspend one of the two nodes and verify that its CES address migrates to the other node (mmces)
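The registration of the two addresses is not captured in the transcript below; a minimal sketch of that step, assuming the two IPs resolved for group 0 (172.20.0.249 and 172.20.0.250), is:

# Assumption: run from any node of the cluster; the two CES IPs are the ones
# returned by the DNS lookups of gpfs-ces1-0 and gpfs-ces2-0 shown below.
mmces address add --ces-ip 172.20.0.249,172.20.0.250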
[root@gpfs-srv-0 ~]# host gpfs-ces1-0
gpfs-ces1-0.recas.ba.infn.it has address 172.20.0.249
[root@gpfs-srv-0 ~]# host gpfs-ces2-0
gpfs-ces2-0.recas.ba.infn.it has address 172.20.0.250
[root@gpfs-srv-0 ~]# host gpfs-ces-0
gpfs-ces-0.recas.ba.infn.it has address 172.20.0.250
gpfs-ces-0.recas.ba.infn.it has address 172.20.0.249

[root@gpfs-srv-0 ~]# mmces address list

Address       Node                          Group  Attribute
-------------------------------------------------------------------------
172.20.0.249  gpfs-obj1-0.recas.ba.infn.it  none   none
172.20.0.250  gpfs-obj2-0.recas.ba.infn.it  none   none

[root@gpfs-srv-0 ~]# ssh gpfs-obj1-0 ip -4 address list
...
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.20.0.99/16 brd 172.20.255.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 172.20.0.249/16 brd 172.20.255.255 scope global secondary ens18:0
       valid_lft forever preferred_lft forever

[root@gpfs-srv-0 ~]# ssh gpfs-obj2-0 ip -4 address list
...
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.20.0.108/16 brd 172.20.255.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 172.20.0.250/16 brd 172.20.255.255 scope global secondary ens18:0
       valid_lft forever preferred_lft forever
[root@gpfs-srv-0 ~]# mmces node list

Node Name                         Node Flags  Node Groups
-----------------------------------------------------------------
 6  gpfs-obj1-0.recas.ba.infn.it  none
 7  gpfs-obj2-0.recas.ba.infn.it  none

[root@gpfs-srv-0 ~]# mmces node list --verbose

Node Name                         Node Flags  Node Groups
-----------------------------------------------------------------
 6  gpfs-obj1-0.recas.ba.infn.it  none
    Addresses: 172.20.0.249
 7  gpfs-obj2-0.recas.ba.infn.it  none
    Addresses: 172.20.0.250

[root@gpfs-srv-0 ~]# mmces node suspend -N gpfs-obj1-0
gpfs-obj1-0.recas.ba.infn.it: Node now in suspended state.
Reassigning addresses for gpfs-obj1-0.recas.ba.infn.it

[root@gpfs-srv-0 ~]# mmces node list --verbose

Node Name                         Node Flags  Node Groups
-----------------------------------------------------------------
 6  gpfs-obj1-0.recas.ba.infn.it  Suspended
 7  gpfs-obj2-0.recas.ba.infn.it  none
    Addresses: 172.20.0.250 172.20.0.249

[root@gpfs-srv-0 ~]# ssh gpfs-obj1-0 ip -4 addr show
...
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.20.0.99/16 brd 172.20.255.255 scope global ens18
       valid_lft forever preferred_lft forever

[root@gpfs-srv-0 ~]# ssh gpfs-obj2-0 ip -4 addr show
...
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.20.0.108/16 brd 172.20.255.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet 172.20.0.250/16 brd 172.20.255.255 scope global secondary ens18:0
       valid_lft forever preferred_lft forever
    inet 172.20.0.249/16 brd 172.20.255.255 scope global secondary ens18:1
       valid_lft forever preferred_lft forever

[root@gpfs-srv-0 ~]# mmces node list --verbose

Node Name                         Node Flags  Node Groups
-----------------------------------------------------------------
 6  gpfs-obj1-0.recas.ba.infn.it  none
 7  gpfs-obj2-0.recas.ba.infn.it  none
    Addresses: 172.20.0.250 172.20.0.249

[root@gpfs-srv-0 ~]# mmces address move --rebalance
[root@gpfs-srv-0 ~]# mmces node list --verbose

Node Name                         Node Flags  Node Groups
-----------------------------------------------------------------
 6  gpfs-obj1-0.recas.ba.infn.it  none
    Addresses: 172.20.0.249
 7  gpfs-obj2-0.recas.ba.infn.it  none
    Addresses: 172.20.0.250
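Note: the transcript above does not show the command that puts gpfs-obj1-0 back into service before the last two listings; the standard counterpart of mmces node suspend is sketched here:

# Assumption: step not captured in the transcript; resumes the suspended CES node.
mmces node resume -N gpfs-obj1-0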
Enabling the OBJ protocol on the CES cluster
Initial Swift configuration
Before enabling the OBJ service, the initial Swift configuration must be performed (mmobj swift base ...).
The Keystone to use is the one on stack-group-0#. On it, verify:
- the endpoint of the Keystone service (V3)
- that no swift user, swift service, or endpoint for the swift service exists on the remote Keystone
These entries can be created by hand in Keystone (working on stack-group-0#), or the configuration command can be told to create them itself.
In the first case the swift credentials, as configured on the Keystone server, must be passed to the mmobj swift base command; in the second case the credentials of the Keystone admin user must be passed to it (a sketch of the manual path is given below). For the credentials, see the file /root/openrc on stack-group-0#.
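A sketch of the manual path (first case), to be run on stack-group-0# with the admin credentials from /root/openrc; the password, region, service project and endpoint URLs are assumptions that mirror what the automatic configuration creates later in this exercise:

# Assumption: manual creation of the swift entries in Keystone; names, password
# and URLs mirror those produced by mmobj swift base --configure-remote-keystone,
# and the "service" project is assumed to already exist.
. openrc
openstack user create --project service --password a_big_secret swift
openstack role add --project service --user swift admin
openstack service create --name swift --description "Object Storage" object-store
openstack endpoint create --region RegionOne swift public   'http://gpfs-ces-0:8080/v1/AUTH_%(tenant_id)s'
openstack endpoint create --region RegionOne swift internal 'http://gpfs-ces-0:8080/v1/AUTH_%(tenant_id)s'
openstack endpoint create --region RegionOne swift admin    'http://gpfs-ces-0:8080'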
Checking the Keystone configuration on stack-group-0#
root@stack-group-00:~# . openrc
root@stack-group-00:~# openstack user list | grep swift
root@stack-group-00:~# openstack service list | grep swift
root@stack-group-00:~# openstack endpoint list | grep swift
root@stack-group-00:~# openstack endpoint list -c "Service Name" -c "Service Type" -c "Interface" -c URL | grep keystone
| keystone | identity | admin    | http://172.20.0.55:35357/v2.0 |
| keystone | identity | internal | http://127.0.0.1:5000/v2.0    |
| keystone | identity | public   | http://172.20.0.55:5000/v2.0  |
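The listing above only shows the v2.0 URLs registered in the catalog; the v3 URL that will be passed to mmobj swift base can be checked directly. A minimal sketch, assuming the v3 API is served on the same host and port:

# Assumption: quick check that Keystone answers on the v3 API used later.
curl -s http://172.20.0.55:5000/v3/ | python -m json.tool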
Perform the basic Swift configuration
Carry out the Swift configuration specifying, among other things:
- remote keystone authentication
- GPFS mount point: /gpfs
- fileset name for the object fileset: object_fs
- cluster-hostname (the name through which the CES services are to be contacted): gpfs-ces-#.recas.ba.infn.it
[root@gpfs-obj1-0 ~]# mmobj swift base -g /gpfs --cluster-hostname gpfs-ces-0 -o object_fs --remote-keystone-url \
> http://172.20.0.55:5000/v3 --configure-remote-keystone --admin-user admin --admin-password a_big_secret \
> --swift-user swift --swift-password a_big_secret
mmobj swift base: Validating execution environment.
mmobj swift base: Creating fileset /dev/gpfs_dev object_fs.
mmobj swift base: Validating Keystone environment.
mmobj swift base: Validating Swift values in Keystone.
mmobj swift base: Configuring Swift services.
mmobj swift base: Setting cluster attribute object_singleton_node=172.20.0.249.
mmobj swift base: Uploading configuration changes to the CCR.
mmobj swift base: Configuration complete.
Verify that the entries for the swift service have been created in Keystone
root@stack-group-00:~# openstack user list | grep swift
| 902b594578e64f33a421e9161c391bb1 | swift |
root@stack-group-00:~# openstack service list | grep swift
| 1450a34067754beda58bff3359cdcffc | swift | object-store |
root@stack-group-00:~# openstack role list --user swift --project service
+----------------------------------+-------+---------+-------+
| ID                               | Name  | Project | User  |
+----------------------------------+-------+---------+-------+
| bd5f05510a0d41098ba1d339161e59fa | admin | service | swift |
+----------------------------------+-------+---------+-------+
root@stack-group-00:~# openstack endpoint list | grep swift
| 0125c8712a3940219b497b5fc273aee6 | RegionOne | swift | object-store | True | public   | http://gpfs-ces-0:8080/v1/AUTH_%(tenant_id)s |
| 2b8389483c0548058fc6b3157458a102 | RegionOne | swift | object-store | True | internal | http://gpfs-ces-0:8080/v1/AUTH_%(tenant_id)s |
| c1bc1177986a4d27ac8e733614592c78 | RegionOne | swift | object-store | True | admin    | http://gpfs-ces-0:8080                       |
Enable the OBJ service for CES
[root@gpfs-obj1-0 ~]# mmces service list
No CES services are enabled.
[root@gpfs-obj1-0 ~]# mmces service list --verbose
No CES services are enabled.

[root@gpfs-obj1-0 ~]# mmces service enable OBJ
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

[root@gpfs-obj1-0 ~]# mmces service list
Enabled services: OBJ
OBJ is running

[root@gpfs-obj1-0 ~]# mmces service list --verbose
Enabled services: OBJ
OBJ is running
OBJ:openstack-swift-object-updater is running
OBJ:openstack-swift-object-expirer is running
OBJ:openstack-swift-account-auditor is running
OBJ:openstack-swift-container-auditor is running
OBJ:openstack-swift-container-updater is running
OBJ:ibmobjectizer is not running
OBJ:openstack-swift-object-auditor is not running
OBJ:openstack-swift-object is running
OBJ:openstack-swift-account is running
OBJ:openstack-swift-container is running
OBJ:openstack-swift-proxy is running
OBJ:memcached is running
OBJ:openstack-swift-object-replicator is running
OBJ:openstack-swift-account-reaper is running
OBJ:openstack-swift-account-replicator is running
OBJ:openstack-swift-container-replicator is running
OBJ:openstack-swift-object-sof is not running
Run a few access tests against the object service
Creating a container
root@stack-group-00:~# . openrc
root@stack-group-00:~# openstack container list
root@stack-group-00:~# openstack container create container00
+---------------------------------------+-------------+------------------------------------+
| account                               | container   | x-trans-id                         |
+---------------------------------------+-------------+------------------------------------+
| AUTH_d0115388e572440eb53c4a8671fef6ae | container00 | tx7138c9fac35e4cf7a9688-0057f2bfe7 |
+---------------------------------------+-------------+------------------------------------+
Creating an object in the container
root@stack-group-00:~# cd /bin
root@stack-group-00:/bin# cd /etc
root@stack-group-00:/etc# openstack object create container00 services
+----------+-------------+----------------------------------+
| object   | container   | etag                             |
+----------+-------------+----------------------------------+
| services | container00 | 3e73cc5c77799fd3e7a02c62474107bb |
+----------+-------------+----------------------------------+
root@stack-group-00:/etc# openstack object list container00
+----------+
| Name     |
+----------+
| services |
+----------+
Identifying the file on the filesystem (this works because it is the only one there: without enabling the unified file and object access feature, the file name corresponding to a given object cannot be identified)
root@stack-group-00:/etc# du -sk /gpfs/object_fs/o/* | sort -n | tail -n 1
32      /gpfs/object_fs/o/z1device46
root@stack-group-00:/etc# ls -l /gpfs/object_fs/o/z1device46
total 0
drwxr-xr-x. 3 160 160 4096 ott  3 22:31 objects
drwxr-xr-x. 2 160 160 4096 ott  3 22:31 tmp
root@stack-group-00:/etc# ls -l /gpfs/object_fs/o/z1device46/objects/
total 0
drwxr-xr-x. 3 160 160 4096 ott  3 22:31 14372
root@stack-group-00:/etc# ls -l /gpfs/object_fs/o/z1device46/objects/14372/
total 0
drwxr-xr-x. 3 160 160 4096 ott  3 22:31 cef
root@stack-group-00:/etc# ls -l /gpfs/object_fs/o/z1device46/objects/14372/cef/
total 0
drwxr-xr-x. 2 160 160 4096 ott  3 22:31 e093228baeb060af93e659a446529cef
root@stack-group-00:/etc# ls -l /gpfs/object_fs/o/z1device46/objects/14372/cef/e093228baeb060af93e659a446529cef/
total 32
-rw-------. 1 160 160 19558 ott  3 22:31 1475526686.31570.data
root@stack-group-00:/etc# diff /gpfs/object_fs/o/z1device46/objects/14372/cef/e093228baeb060af93e659a446529cef/1475526686.31570.data /etc/services
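As noted above, mapping an arbitrary object back to its file name requires the unified file and object access feature, which is not enabled in this exercise. A sketch of how it could be enabled on the CES cluster (assumption: the mmobj file-access subcommand is available in this Spectrum Scale release):

# Assumption: not performed in this exercise; enables unified file and object
# access (the ibmobjectizer service listed as "not running" above).
mmobj file-access enable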