Cloud Storage Course, Exercise 6

Exercise 6

Storage pools and placement policies

Display the file system's NSDs and storage pools (mmlsnsd, mmlsfs)

[root@gpfs-srv-0 ~]# mmlsnsd -m

 Disk name    NSD volume ID      Device         Node name                Remarks       
---------------------------------------------------------------------------------------
 data1        AC14005F57F0FBEB   /dev/vdd       gpfs-srv-0.recas.ba.infn.it server node
 metadata1    AC14005F57F0FBE9   /dev/vdb       gpfs-srv-0.recas.ba.infn.it server node
 metadata2    AC14005F57F0FBEA   /dev/vdc       gpfs-srv-0.recas.ba.infn.it server node

[root@gpfs-srv-0 ~]# mmlsfs gpfs_dev -P
flag                value                    description
------------------- ------------------------ -----------------------------------
 -P                 system                   Disk storage pools in file system

Display the available disks and create a new NSD from the unused disk. The stanza file must specify:

  • NSD name: premium1
  • usage: data only
  • NSD server: gpfs-srv-#
  • storage pool: premium

Commands: lsblk, mmcrnsd

[root@gpfs-srv-0 ~]# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                           11:0    1 1024M  0 rom  
vda                          252:0    0   32G  0 disk 
├─vda1                       252:1    0  500M  0 part /boot
└─vda2                       252:2    0 31.5G  0 part 
  ├─centos_gpfs--srv--0-root 253:0    0 28.3G  0 lvm  /
  └─centos_gpfs--srv--0-swap 253:1    0  3.2G  0 lvm  [SWAP]
vdb                          252:16   0   10G  0 disk 
└─vdb1                       252:17   0   10G  0 part 
vdc                          252:32   0   10G  0 disk 
└─vdc1                       252:33   0   10G  0 part 
vdd                          252:48   0   50G  0 disk 
└─vdd1                       252:49   0   50G  0 part 
vde                          252:64   0   50G  0 disk 

[root@gpfs-srv-0 ~]# cat nsd_premium_stanza.txt 
%nsd:
  device=/dev/vde
  nsd=premium1
  servers=gpfs-srv-0
  usage=dataOnly
  failureGroup=1
  pool=premium

[root@gpfs-srv-0 ~]# mmcrnsd -F nsd_premium_stanza.txt 
mmcrnsd: Processing disk vde
mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

[root@gpfs-srv-0 ~]# mmlsnsd

 File system   Disk name    NSD servers                                    
---------------------------------------------------------------------------
 gpfs_dev      metadata1    gpfs-srv-0.recas.ba.infn.it 
 gpfs_dev      metadata2    gpfs-srv-0.recas.ba.infn.it 
 gpfs_dev      data1        gpfs-srv-0.recas.ba.infn.it 
 (free disk)   premium1     gpfs-srv-0.recas.ba.infn.it 

Display the default placement policy

Display the default placement policy, then create a file containing data and verify its placement.

[root@gpfs-srv-0 ~]# mmlspolicy gpfs_dev -L
No policy file was installed for file system 'gpfs_dev'.
Data will be stored in pool 'system'.

[root@gpfs-srv-0 ~]# echo OK > /gpfs/fileset2/testfile

[root@gpfs-srv-0 ~]# mmlsattr -L /gpfs/fileset2/testfile 
file name:            /gpfs/fileset2/testfile
metadata replication: 2 max 3
data replication:     1 max 3
immutable:            no
appendOnly:           no
flags:                
storage pool name:    system
fileset name:         fileset2_fs
snapshot name:        
creation time:        Tue Oct  4 12:26:25 2016
Misc attributes:      ARCHIVE
Encrypted:            no

GPFS's default placement policy is to put data in the first data pool created on the file system. If no data pools exist, data goes into the system pool.

The file is correctly placed in the system pool (it could not be otherwise, since no data pool exists yet).

Add the new disk to the file system and check the placement policy

Add the disk to the file system.

Then display the file system's disks and its pools, and look at the placement policy again.

Commands: mmadddisk, mmlsdisk, mmlsfs, mmdf, mmlspolicy


[root@gpfs-srv-0 ~]# mmadddisk gpfs_dev -F nsd_premium_stanza.txt 

The following disks of gpfs_dev will be formatted on node gpfs-srv-0.recas.ba.infn.it:
    premium1: size 51200 MB
Extending Allocation Map
Creating Allocation Map for storage pool premium
Flushing Allocation Map for storage pool premium
Disks up to size 547 GB can be added to storage pool premium.
Checking Allocation Map for storage pool premium
Completed adding disks to file system gpfs_dev.
mmadddisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
[root@gpfs-srv-0 ~]# mmlsdisk gpfs_dev -L
disk         driver   sector     failure holds    holds                                    storage
name         type       size       group metadata data  status        availability disk id pool         remarks   
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------- ------------ ---------
metadata1    nsd         512           1 Yes      No    ready         up                 1 system        desc
metadata2    nsd         512           2 Yes      No    ready         up                 2 system        desc
data1        nsd         512           1 No       Yes   ready         up                 3 system        desc
premium1     nsd         512           1 No       Yes   ready         up                 4 premium       
Number of quorum disks: 3 
Read quorum value:      2
Write quorum value:     2

[root@gpfs-srv-0 ~]# mmlsfs gpfs_dev -P
flag                value                    description
------------------- ------------------------ -----------------------------------
 -P                 system;premium           Disk storage pools in file system

[root@gpfs-srv-0 ~]# mmdf gpfs_dev -d
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 797 GB)
data1                52428800        1 No       Yes        43820032 ( 84%)          5856 ( 0%) 
                -------------                         -------------------- -------------------
(pool total)         52428800                              43820032 ( 84%)          5856 ( 0%)

Disks in storage pool: premium (Maximum disk size allowed is 509 GB)
premium1             52428800        1 No       Yes        52361216 (100%)          1888 ( 0%) 
                -------------                         -------------------- -------------------
(pool total)         52428800                              52361216 (100%)          1888 ( 0%)

                =============                         ==================== ===================
(data)              104857600                              96181248 ( 92%)          7744 ( 0%)
(metadata)                  0                                     0 (  0%)             0 ( 0%)
                =============                         ==================== ===================
(total)             104857600                              96181248 ( 92%)          7744 ( 0%)
[root@gpfs-srv-0 ~]# mmlspolicy gpfs_dev -L
No policy file was installed for file system 'gpfs_dev'.
Data will be stored in pool 'premium'.

Now that a data pool exists, the default placement policy stores data in the premium pool.

Create a new file and verify the new placement.

[root@gpfs-srv-0 ~]# echo OK > /gpfs/fileset2/testfile2
[root@gpfs-srv-0 ~]# mmlsattr -L /gpfs/fileset2/testfile2
file name:            /gpfs/fileset2/testfile2
metadata replication: 2 max 3
data replication:     1 max 3
immutable:            no
appendOnly:           no
flags:                
storage pool name:    premium
fileset name:         fileset2_fs
snapshot name:        
creation time:        Tue Oct  4 12:39:51 2016
Misc attributes:      ARCHIVE
Encrypted:            no

Changing the placement policy and the placement of a file

Create a placement policy that puts all files of the fileset fileset2_fs in the premium pool and all other files in the system pool.

Then verify the new policy. Afterwards, change the placement of the file /gpfs/fileset2/testfile so that its data is stored in the premium pool.

[root@gpfs-srv-0 ~]# cat policy.txt 
RULE 'premium-fileset2' SET POOL 'premium' FOR FILESET ('fileset2_fs')
RULE 'default' SET POOL 'system'

[root@gpfs-srv-0 ~]# mmchpolicy gpfs_dev policy.txt -I test
Validated policy `policy.txt': Parsed 2 policy rules.
[root@gpfs-srv-0 ~]# mmchpolicy gpfs_dev policy.txt
Validated policy `policy.txt': Parsed 2 policy rules.
Policy `policy.txt' installed and broadcast to all nodes.

[root@gpfs-srv-0 ~]# mmlspolicy gpfs_dev -L
RULE 'premium-fileset2' SET POOL 'premium' FOR FILESET ('fileset2_fs')
RULE 'default' SET POOL 'system'

[root@gpfs-srv-0 ~]# echo OK > /gpfs/in-system
[root@gpfs-srv-0 ~]# echo OK > /gpfs/fileset2/in-premium
[root@gpfs-srv-0 ~]# mmlsattr -L /gpfs/in-system | grep pool
storage pool name:    system

[root@gpfs-srv-0 ~]# mmlsattr -L /gpfs/fileset2/in-premium | grep pool
storage pool name:    premium

The placement policy does not change the placement of previously created files.
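
A single file can be moved with mmchattr, as shown next; to move many pre-existing files at once you could instead run a one-shot MIGRATE rule with mmapplypolicy. A minimal sketch, assuming a hypothetical policy file migrate.txt:

[root@gpfs-srv-0 ~]# cat migrate.txt
RULE 'move-fileset2' MIGRATE FROM POOL 'system' TO POOL 'premium' FOR FILESET ('fileset2_fs')

[root@gpfs-srv-0 ~]# mmapplypolicy gpfs_dev -P migrate.txt -I yes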

Change the placement of the file /gpfs/fileset2/testfile to put it in premium.

[root@gpfs-srv-0 ~]# mmlsattr -L /gpfs/fileset2/testfile | grep pool
storage pool name:    system

[root@gpfs-srv-0 ~]# mmchattr -P premium /gpfs/fileset2/testfile

[root@gpfs-srv-0 ~]# mmlsattr -L /gpfs/fileset2/testfile | grep pool
storage pool name:    premium

Using storage pools with Cinder

Create a new storage backend for Cinder, named gpfs-gold, identical to the previous one but configured to use the premium pool:

  • edit the /etc/cinder/cinder.conf file (a scripted sketch follows this list)
  • restart the cinder-api and cinder-volume services
  • create a new volume type gpfs-gold, with the attribute volume_backend_name = GPFS-GOLD
  • create two volumes, one with volume type gpfs and one with volume type gpfs-gold, and verify the placement of the two corresponding files in /gpfs/openstack/cinder/volumes
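
If crudini is available on the controller, the edit can also be scripted instead of done by hand; a sketch (section and option names match the configuration shown below):

root@stack-group-00:/etc/cinder# crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends gpfs,gpfs-gold
root@stack-group-00:/etc/cinder# crudini --set /etc/cinder/cinder.conf gpfs-gold volume_backend_name GPFS-GOLD
root@stack-group-00:/etc/cinder# crudini --set /etc/cinder/cinder.conf gpfs-gold gpfs_storage_pool premium
(and similarly for the remaining gpfs_* options)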


root@stack-group-00:/etc/cinder# cat /etc/cinder/cinder.conf
...
enabled_backends=gpfs,gpfs-gold
...
[gpfs-gold]
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
gpfs_mount_point_base = /gpfs/openstack/cinder/volumes/
gpfs_images_dir = /gpfs/openstack/glance/images/
gpfs_images_share_mode = copy_on_write
gpfs_max_clone_depth = 3
gpfs_sparse_volumes = True
gpfs_storage_pool = premium
volume_backend_name = GPFS-GOLD

root@stack-group-00:/etc/cinder# service cinder-api restart
cinder-api stop/waiting
cinder-api start/running, process 6739

root@stack-group-00:/etc/cinder# service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 6784


root@stack-group-00:~# openstack volume type create --property volume_backend_name=GPFS-GOLD gpfs-gold
+---------------------------------+--------------------------------------+
| Field                           | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| id                              | cbb768a7-6cc4-4a8f-9f44-bfc9f3ccb020 |
| is_public                       | True                                 |
| name                            | gpfs-gold                            |
| os-volume-type-access:is_public | True                                 |
| properties                      | volume_backend_name='GPFS-GOLD'      |
+---------------------------------+--------------------------------------+
root@stack-group-00:~# openstack volume create --size 1 --type gpfs-gold volume-gold
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-10-04T13:12:42.367754           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 8fec5a48-00fa-42f9-afaf-c68035b65a3e |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume-gold                          |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | gpfs-gold                            |
| updated_at          | None                                 |
| user_id             | f85114ed0d1545a6a56b6391619f9a15     |
+---------------------+--------------------------------------+

root@stack-group-00:~# mmlsattr -L /gpfs/openstack/cinder/volumes/volume-8fec5a48-00fa-42f9-afaf-c68035b65a3e | grep pool
storage pool name:    premium
root@stack-group-00:~# openstack volume create --size 1 --type gpfs volume-bronze
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-10-04T13:13:57.115022           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 547d3aa5-fc2d-4593-888b-c6ad7735948e |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume-bronze                        |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | gpfs                                 |
| updated_at          | None                                 |
| user_id             | f85114ed0d1545a6a56b6391619f9a15     |
+---------------------+--------------------------------------+

root@stack-group-00:~# mmlsattr -L /gpfs/openstack/cinder/volumes/volume-547d3aa5-fc2d-4593-888b-c6ad7735948e | grep pool
storage pool name:    system

As expected, the files associated with the two volume types end up in two different pools.
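
As a cross-check on the Cinder side, an administrator can also read which backend a volume was scheduled to from the volume details; a sketch (os-vol-host-attr:host is an admin-only property):

root@stack-group-00:~# openstack volume show volume-gold -c "os-vol-host-attr:host"

The backend name after the '@' in the host string should be gpfs-gold.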


Unified file and object access

Enable the file-access capability

The object service configuration files (including those of keystone, if used, and swift) are managed by the CCR (hosted somewhere on the quorum nodes).

Files used by the services exist in /etc/ on the file system, but they cannot be edited directly, because they are periodically rewritten from the reference version held in the CCR.

To modify them you must use the mmobj config change command, which works much like crudini but interacts with the quorum nodes.
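
For comparison, here is how the same change would look with crudini on a plain local file versus the CCR-aware command (a sketch; the local edit is shown only to illustrate the analogy, since the CCR would overwrite it):

crudini --set spectrum-scale-object.conf capabilities file-access-enabled true
mmobj config change --ccrfile spectrum-scale-object.conf --section capabilities --property file-access-enabled --value true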

The file-access capability, like the others, is enabled/disabled in the spectrum-scale-object.conf configuration file.

[root@gpfs-obj1-0 ~]# mmobj config list --ccrfile spectrum-scale-object.conf
[filesystem-details]
filesystem = gpfs_dev
device = /dev/gpfs_dev
mountpoint = /gpfs
fileset = object_fs
[swift-config]
endpoint_hostname = gpfs-ces-0
[capabilities]
file-access-enabled = false
multi-region-enabled = false
s3-enabled = false

[root@gpfs-obj1-0 ~]# mmobj config change --ccrfile spectrum-scale-object.conf --section capabilities --property file-access-enabled --value true

[root@gpfs-obj1-0 ~]# mmobj config list --ccrfile spectrum-scale-object.conf --section capabilities
[capabilities]
file-access-enabled = true
multi-region-enabled = false
s3-enabled = false

Display the objectizer configuration

It does not need to be changed.

[root@gpfs-obj1-0 ~]# mmobj config list --ccrfile spectrum-scale-objectizer.conf
[DEFAULT]
objectization_tmp_dir = /gpfs/ibmobjectizer/tmp
objectization_threads = 24
batch_size = 100
objectization_interval = 1800
[IBMOBJECTIZER-LOGGER]
log_level = INFO
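
Should you ever need to tune it, the same CCR mechanism applies; for example, to shorten the objectization interval (a sketch, the value is illustrative):

[root@gpfs-obj1-0 ~]# mmobj config change --ccrfile spectrum-scale-objectizer.conf --section DEFAULT --property objectization_interval --value 900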

Configure identity management

Identity management must be local_mode. Verify this.

[root@gpfs-obj1-0 ~]# mmobj config list --ccrfile object-server-sof.conf --section DEFAULT --property id_mgmt
id_mgmt = local_mode

Create a storage policy for file access

Create a storage policy named fileaccess-test.

This creates a fileset to which the capability will be applied.

[root@gpfs-obj1-0 gpfs]# mmobj policy create fileaccess-test -i 512K --enable-file-access
[I] Getting latest configuration from ccr
[I] Creating fileset /dev/gpfs_dev:obj_fileaccess-test
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration
[root@gpfs-obj1-0 ~]# mmobj policy list

Index       Name            Default Deprecated Fileset             Functions              
------------------------------------------------------------------------------------------
0           SwiftDefault    yes                object_fs                                  
17561610040 fileaccess-test                    obj_fileaccess-test file-and-object-access 
[root@gpfs-obj1-0 gpfs]# mmlsfileset gpfs_dev
Filesets in file system 'gpfs_dev':
Name                     Status    Path                                    
root                     Linked    /gpfs                                   
openstack_fs             Linked    /gpfs/openstack                         
object_fs                Linked    /gpfs/object_fs                         
obj_fileaccess-test      Linked    /gpfs/obj_fileaccess-test

Create a container associated with this storage policy (the X-Storage-Policy property, set to the name of the storage policy just created, must be specified at creation time).

NOTE: the openstack command does not support setting properties at container creation; use the swift post command instead.

root@stack-group-00:~# swift post fileaccess -H "X-Storage-Policy: fileaccess-test"

Now identify the path of the newly created container inside the obj_fileaccess-test fileset, and copy the contents of /etc into it.

Then objectize the copied files by hand, and verify by listing the container's objects.

root@stack-group-00:~# cp /etc/* /gpfs/obj_fileaccess-test/s17561610040z1device1/AUTH_d0115388e572440eb53c4a8671fef6ae/fileaccess/

root@stack-group-00:~# mmobj file-access --object-path /gpfs/obj_fileaccess-test/s17561610040z1device1/AUTH_d0115388e572440eb53c4a8671fef6ae/fileaccess/
Loading objectization configuration from CCR
Fetching storage policy details
Creating container to database map
Performing objectization
Objectization complete

root@stack-group-00:~# openstack object list fileaccess
+-------------------------+
| Name                    |
+-------------------------+
| adduser.conf            |
| bash.bashrc             |
| bash_completion         |
| bindresvport.blacklist  |
...
| zsh_command_not_found   |
+-------------------------+

Manual objectization is not necessary if you can wait for the objectizer to do its work (with the configuration shown above, it runs every objectization_interval = 1800 seconds, i.e. every 30 minutes).

Object compression policy

Create a storage policy for object compression. The --compression-schedule argument takes cron-like fields; "0,30:*:*:*" requests a compression run at minutes 0 and 30 of every hour, which is why the command below warns about frequencies shorter than one hour.

[root@gpfs-obj1-0 ~]# mmobj policy create compression-test --enable-compression --compression-schedule "0,30:*:*:*"
[I] It is recommended to use a compression schedule frequency that is one hour or longer.
[I] Getting latest configuration from ccr
[I] Creating fileset /dev/gpfs_dev:obj_compression-test
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration
[root@gpfs-obj1-0 ~]# mmobj policy list

Index       Name             Default Deprecated Fileset              Functions              
--------------------------------------------------------------------------------------------
0           SwiftDefault     yes                object_fs                                   
17561610040 fileaccess-test                     obj_fileaccess-test  file-and-object-access 
17561610041 compression-test                    obj_compression-test compression            

[root@gpfs-obj1-0 ~]# mmlsfileset gpfs_dev 
Filesets in file system 'gpfs_dev':
Name                     Status    Path                                    
root                     Linked    /gpfs                                   
openstack_fs             Linked    /gpfs/openstack                         
object_fs                Linked    /gpfs/object_fs                         
obj_fileaccess-test      Linked    /gpfs/obj_fileaccess-test               
obj_compression-test     Linked    /gpfs/obj_compression-test              

Create a container and upload an object to it, then verify that within 30 minutes it is compressed.

root@stack-group-00:~# swift post compressed -H "X-Storage-Policy: compression-test"

root@stack-group-00:~# openstack object create compressed /etc/services
+---------------+------------+----------------------------------+
| object        | container  | etag                             |
+---------------+------------+----------------------------------+
| /etc/services | compressed | 3e73cc5c77799fd3e7a02c62474107bb |
+---------------+------------+----------------------------------+

Find the file on the file system and verify the compression.

[root@gpfs-obj1-0 ~]# find  /gpfs/obj_compression-test/ -type f
/gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data

[root@gpfs-obj1-0 ~]# ls -ls /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data
32 -rw-------. 1 swift swift 19558 Oct  4 16:53 /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data

[root@gpfs-obj1-0 ~]# mmlsattr -L /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data
file name:            /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data
metadata replication: 2 max 3
data replication:     1 max 3
immutable:            no
appendOnly:           no
flags:                
storage pool name:    system
fileset name:         obj_compression-test
snapshot name:        
creation time:        Tue Oct  4 16:53:07 2016
Misc attributes:      ARCHIVE
Encrypted:            no

Not yet ...

root@stack-group-00:~# mmlsattr -L /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data 
file name:            /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data
metadata replication: 2 max 3
data replication:     1 max 3
immutable:            no
appendOnly:           no
flags:                
storage pool name:    system
fileset name:         obj_compression-test
snapshot name:        
creation time:        Tue Oct  4 18:53:07 2016
Misc attributes:      ARCHIVE COMPRESSED
Encrypted:            no
root@stack-group-00:~# ls -ls /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data
32 -rw-------. 1 160 160 19558 ott  4 18:53 /gpfs/obj_compression-test/s17561610041z1device12/objects-17561610041/4526/f41/46b8407a0ea0b546e86345268a28bf41/1475599987.36914.data

Now it is compressed. Note that the on-disk usage has not changed (the file is too small: it still occupies 32 KB).
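
With a larger, highly compressible object the saving becomes visible in the allocated-block count. A sketch (file names are illustrative):

root@stack-group-00:~# dd if=/dev/zero of=/tmp/big.dat bs=1M count=64
root@stack-group-00:~# openstack object create compressed /tmp/big.dat

# after the next scheduled compression run, locate the new .data file as above and
# compare the allocated size (first column of ls -ls) with the apparent file size
[root@gpfs-obj1-0 ~]# find /gpfs/obj_compression-test/ -type f -name '*.data' -exec ls -ls {} +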