Ceph Basics




Object Storage (RGW): Basic Concepts

Ceph Object Gateway is an object storage interface built on top of librados to provide applications with a RESTful gateway to Ceph Storage Clusters. Ceph Object Storage supports two interfaces:


1.S3-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.


2.Swift-compatible: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.


Ceph Object Storage uses the Ceph Object Gateway daemon (radosgw), which is an HTTP server for interacting with a Ceph Storage Cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management. Ceph Object Gateway can store data in the same Ceph Storage Cluster used to store data from Ceph File System clients or Ceph Block Device clients. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve it with the other.


In the Ceph Object Gateway, radosgw receives user requests and talks to librados on the backend; it acts as the bridge between the two. Externally it is compatible with two interfaces, S3 and OpenStack Swift. Each of these interfaces has its own user authentication mechanism, so Ceph provides an independent user management layer that works with both the S3 and Swift interfaces. The data ultimately lands on the OSDs, and whether it arrives through S3 or Swift it ends up in the same namespace, so objects stored through S3 can also be accessed through Swift. That is the basic architecture of the object store. To use it we have to deploy radosgw before we can reach the cluster this way; it is not installed by default.
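Because the S3 and Swift APIs sit on top of the same RGW users and pools, the shared namespace is easy to demonstrate once the s3cmd and swift clients configured later in this article are in place. A minimal sketch (the bucket name cross-api-demo and the file are illustrative, not from the original article):

# create a bucket and write an object through the S3 API
s3cmd mb s3://cross-api-demo
s3cmd put /etc/hosts s3://cross-api-demo/hosts
# the same bucket and object are visible through the Swift API
swift list cross-api-demo
swift download cross-api-demo hosts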

What is a bucket? It can be thought of as a container that holds objects, backed by storage that scales without practical limits and is designed to be secure and reliable. Underneath, Ceph object storage relies on RADOS for data redundancy and disaster recovery. So what features does it provide?

Basic features:

RESTful Interface              # RESTful-style interface for upload, download, and management;
S3- and Swift-compliant APIs   # two API styles, compatible with S3 and Swift;
S3-style subdomains
Unified S3/Swift namespace     # a flat, unified S3/Swift namespace;
User management                # for security, user management controls whether an object is publicly accessible or requires authorization;
Usage tracking                 # track usage, e.g. with rados df
Striped objects                # supports striped / multipart uploads
Cloud solution integration     # integrates with cloud solutions
Multi-site deployment          # supports multi-site deployment
Multi-site replication         # supports multi-site replication

Installing RGW

1. Install the packages
[root@ceph-node01 ~]# rpm -qa |grep ceph
ceph-base-14.2.11-0.el7.x86_64
ceph-mon-14.2.11-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
python-ceph-argparse-14.2.11-0.el7.x86_64
libcephfs2-14.2.11-0.el7.x86_64
ceph-common-14.2.11-0.el7.x86_64
ceph-selinux-14.2.11-0.el7.x86_64
ceph-mds-14.2.11-0.el7.x86_64
ceph-14.2.11-0.el7.x86_64
python-cephfs-14.2.11-0.el7.x86_64
ceph-osd-14.2.11-0.el7.x86_64
ceph-mgr-14.2.11-0.el7.x86_64
ceph-radosgw-14.2.11-0.el7.x86_64
# install these directly with yum if any are missing
[root@ceph-node01 ~]#
2. Start the service; it listens on port 7480 by default
[root@ceph-node01 ceph-deploy]# ceph-deploy rgw create ceph-node01 
3. Check the service
[root@ceph-node01 ceph-deploy]# systemctl status ceph-radosgw@rgw.ceph-node01 ● ceph-radosgw@rgw.ceph-node01.service - Ceph rados gateway    Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)    Active: active (running) since 一 2020-10-05 20:34:36 EDT; 14s ago  Main PID: 33574 (radosgw)    CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-node01.service            └─33574 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph  10月 05 20:34:36 ceph-node01 systemd[1]: Started Ceph rados gateway. 10月 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:13] Unknown lvalue 'LockPersonality' in section 'Service' 10月 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:14] Unknown lvalue 'MemoryDenyWriteExecute' in ...Service' 10月 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:17] Unknown lvalue 'ProtectControlGroups' in se...Service' 10月 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:19] Unknown lvalue 'ProtectKernelModules' in se...Service' 10月 05 20:34:36 ceph-node01 systemd[1]: [/usr/lib/systemd/system/ceph-radosgw@.service:20] Unknown lvalue 'ProtectKernelTunables' in s...Service' Hint: Some lines were ellipsized, use -l to show in full. [root@ceph-node01 ceph-deploy]# netstat -antp |grep 7480 tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 33574/radosgw [root@ceph-node01 ceph-deploy]# ceph -s   cluster:     id: cc10b0cb-476f-420c-b1d6-e48c1dc929af     health: HEALTH_OK    services:     mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 2d)     mgr: ceph-node01(active, since 2d), standbys: ceph-node02, ceph-node03     osd: 3 osds: 3 up (since 2d), 3 in (since 2d)     rgw: 1 daemon active (ceph-node01)    task status:    data:     pools: 5 pools, 256 pgs     objects: 507 objects, 1.1 GiB     usage: 5.3 GiB used, 395 GiB / 400 GiB avail     pgs: 256 active+clean    io:     client: 23 KiB/s rd, 0 B/s wr, 35 op/s rd, 23 op/s wr  [root@ceph-node01 ceph-deploy]# 
4. Access the service for the first time
[root@ceph-node01 ceph-deploy]# curl http://ceph-node01:7480/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
[root@ceph-node01 ceph-deploy]#

The request was made as the anonymous user, so it simply returns an empty bucket listing; getting this response shows the installation is working.
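The raw response is a single line of XML; if you want it easier to read, piping it through a formatter is enough. A small sketch, assuming xmllint (from libxml2) is available on the node:

curl -s http://ceph-node01:7480/ | xmllint --format -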

5. Change the default RGW port from 7480 to 80
[root@ceph-node01 ceph-deploy]# cat ceph.conf
...
[client.rgw.ceph-node01]
rgw_frontends = "civetweb port=80"
[root@ceph-node01 ceph-deploy]#

Why edit this copy of ceph.conf in the ceph-deploy directory? Because when nodes are added later, this is the configuration file that gets copied by default; editing it here keeps a single, consistent copy of the configuration for the whole cluster. Next, push the configuration file to all nodes:

[root@ceph-node01 ceph-deploy]# ceph-deploy --overwrite-conf config push ceph-node01 ceph-node02 ceph-node03 

Note that the --overwrite-conf option is required; without it the push refuses to overwrite the existing configuration and reports an error.
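To double-check that every node really ended up with the same file after the push, a quick comparison over SSH is enough; a small sketch, assuming the passwordless SSH that ceph-deploy already relies on:

# all three checksums should be identical
for n in ceph-node01 ceph-node02 ceph-node03; do
    ssh "$n" md5sum /etc/ceph/ceph.conf
done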

6. Restart the service
[root@ceph-node01 ceph-deploy]# systemctl restart ceph-radosgw@rgw.ceph-node01
[root@ceph-node01 ceph-deploy]# netstat -antp |grep 80|grep radosgw
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 34352/radosgw
tcp 0 0 100.73.18.152:36100 100.73.18.153:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:53018 100.73.18.152:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:36118 100.73.18.153:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:39680 100.73.18.152:6802 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:56320 100.73.18.128:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:39666 100.73.18.152:6802 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:56336 100.73.18.128:6800 ESTABLISHED 34352/radosgw
tcp 0 0 100.73.18.152:53034 100.73.18.152:6800 ESTABLISHED 34352/radosgw
[root@ceph-node01 ceph-deploy]#
7. Verify port 80
[root@ceph-node01 ceph-deploy]# curl http://ceph-node01/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
[root@ceph-node01 ceph-deploy]#

At this point, the RGW service is fully deployed.

Accessing RGW with S3

1. Create an S3-compatible user
[root@ceph-node01 ceph-deploy]# radosgw-admin user create --uid ceph-s3-user --display-name "Ceph S3 User Demo" {     "user_id": "ceph-s3-user",     "display_name": "Ceph S3 User Demo",     "email": "",     "suspended": 0,     "max_buckets": 1000,     "subusers": [],     "keys": [         {             "user": "ceph-s3-user",             "access_key": "V3J9L4M1WKV5O5ECAKPU",             "secret_key": "f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw"         }     ],     "swift_keys": [],     "caps": [],     "op_mask": "read, write, delete",     "default_placement": "",     "default_storage_class": "",     "placement_tags": [],     "bucket_quota": {         "enabled": false,         "check_on_raw": false,         "max_size": -1,         "max_size_kb": 0,         "max_objects": -1     },     "user_quota": {         "enabled": false,         "check_on_raw": false,         "max_size": -1,         "max_size_kb": 0,         "max_objects": -1     },     "temp_url_keys": [],     "type": "rgw",     "mfa_ids": [] }  [root@ceph-node01 ceph-deploy]# 

Note that the access_key and secret_key above are important; write them down for later use. If you do not record them, that is fine too; they can be retrieved at any time with the following command:

[root@ceph-node01 ceph-deploy]# radosgw-admin user info --uid ceph-s3-user 
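If you only need the key pair rather than the full JSON document, filtering the output is enough; a small sketch (plain grep, nothing RGW-specific, field names as in the JSON shown above):

radosgw-admin user info --uid ceph-s3-user | grep -E '"access_key"|"secret_key"'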
2. Access the Ceph cluster with the Ceph SDK

Official SDK documentation: https://docs.ceph.com/en/latest/radosgw/s3/python/#using-s3-api-extensions

[root@ceph-node01 ~]# cat s3client.py
import boto
import boto.s3.connection

access_key = 'V3J9L4M1WKV5O5ECAKPU'
secret_key = 'f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw'

conn = boto.connect_s3(
        aws_access_key_id = access_key,
        aws_secret_access_key = secret_key,
        host = 'ceph-node01', port = 80,
        is_secure=False,  # uncomment if you are not using ssl
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )
bucket = conn.create_bucket("ceph-s3-bucket")
for bucket in conn.get_all_buckets():
        print "{name}\t{created}".format(
                name = bucket.name,
                created = bucket.creation_date,
        )
[root@ceph-node01 ~]#
[root@ceph-node01 ~]# python s3client.py
ceph-s3-bucket 2020-10-06T04:13:10.629Z
[root@ceph-node01 ~]#

After RGW is installed, pools are created automatically: default.rgw.control, default.rgw.meta and default.rgw.log (along with .rgw.root). Once we create a bucket, a default.rgw.buckets.index pool is created as well:

[root@ceph-node01 ~]# ceph osd lspools
1 ceph-demo
2 .rgw.root
3 default.rgw.control
4 default.rgw.meta
5 default.rgw.log
6 default.rgw.buckets.index
[root@ceph-node01 ~]#
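To watch how these pools are actually used as buckets and objects are created, the usage-tracking tools mentioned in the feature list above can be run at any point; a short sketch using standard Ceph commands:

# replication size, pg_num and other per-pool details
ceph osd pool ls detail
# per-pool object counts and space usage ("Usage tracking")
rados df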
3. Operate RGW from the command line

Install the command-line tool

[root@ceph-node01 ~]# yum -y install s3cmd 

Configure the command-line tool

[root@ceph-node01 ~]# s3cmd --configure  Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options.  Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: V3J9L4M1WKV5O5ECAKPU Secret Key: f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw Default Region [US]:  Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: 100.73.18.152:80  Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets. DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 100.73.18.152:80/%(bucket)s  Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: Path to GPG program [/usr/bin/gpg]:  When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: no  On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name:  New settings:   Access Key: V3J9L4M1WKV5O5ECAKPU   Secret Key: f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw   Default Region: US   S3 Endpoint: 100.73.18.152:80   DNS-style bucket+hostname:port template for accessing a bucket: 100.73.18.152:80/%(bucket)s   Encryption password:   Path to GPG program: /usr/bin/gpg   Use HTTPS protocol: False   HTTP Proxy server name:   HTTP Proxy server port: 0  Test access with supplied credentials? [Y/n] y Please wait, attempting to list all buckets... Success. Your access key and secret key worked fine :-)  Now verifying that encryption works... Not configured. Never mind.  Save settings? [y/N] y Configuration saved to '/root/.s3cfg' [root@ceph-node01 ~]# 

Basic use of the command-line tool

[root@ceph-node01 ~]# s3cmd ls
2020-10-06 04:13 s3://ceph-s3-bucket
[root@ceph-node01 ~]# s3cmd mb s3://s3cmd-demo
ERROR: S3 error: 403 (SignatureDoesNotMatch)

This requires changing the signature version; enabling the v2 signature fixes it:

[root@ceph-node01 ~]# sed -i '/signature_v2/s/False/True/g' /root/.s3cfg
[root@ceph-node01 ~]#
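Editing ~/.s3cfg changes the default for every command. If you would rather keep the default and switch signature versions only occasionally, s3cmd also accepts a per-invocation flag; a hedged sketch (check s3cmd --help on your version for the exact option name):

# force AWS signature v2 for this single command only
s3cmd --signature-v2 mb s3://s3cmd-demo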

Create the bucket again

[root@ceph-node01 ~]# s3cmd mb s3://s3cmd-demo
Bucket 's3://s3cmd-demo/' created
[root@ceph-node01 ~]#

Upload a single file

[root@ceph-node01 ~]# s3cmd put /etc/fstab s3://s3cmd-demo/fatab-demo
upload: '/etc/fstab' -> 's3://s3cmd-demo/fatab-demo' [1 of 1]
 465 of 465 100% in 0s 1751.66 B/s done
ERROR: S3 error: 416 (InvalidRange)
[root@ceph-node01 ~]#

The error ERROR: S3 error: 416 (InvalidRange) occurs because uploading an object requires a data pool to be created for it, and creating a pool consumes PGs. When not enough PGs are available, you can either shrink the PG count of existing pools, or adjust the relevant parameter in the configuration file and restart the mon processes. There are three ways to handle it:

1. Adjust pg_num and pgp_num; both default to 8.
2. Increase the mon_max_pg_per_osd parameter (default 300) to a suitable value; the error is raised when the number of PGs per OSD exceeds this limit (see https://www.suse.com/support/kb/doc/?id=000019402). A runtime alternative is sketched right after this list.
3. Add more OSDs to the cluster.
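Because this cluster runs Nautilus (14.2), the same limit can usually also be raised at runtime through the centralized configuration store, without editing ceph.conf or restarting the monitors; a minimal sketch, assuming the mons are healthy and reachable:

# check the current limit, then raise it cluster-wide
ceph config get mon mon_max_pg_per_osd
ceph config set mon mon_max_pg_per_osd 1000

This article takes the ceph.conf route below, which has the advantage that the change persists in the file ceph-deploy pushes to every node.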

We take the second approach:

[root@ceph-node01 ceph-deploy]# cat ceph.conf
[global]
fsid = cc10b0cb-476f-420c-b1d6-e48c1dc929af
public_network = 100.73.18.0/24
cluster_network = 100.73.18.0/24
mon_initial_members = ceph-node01
mon_host = 100.73.18.152
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_max_pg_per_osd = 1000

[client.rgw.ceph-node01]
rgw_frontends = "civetweb port=80"
[root@ceph-node01 ceph-deploy]#

Restart the monitor daemons

[root@ceph-node01 ceph-deploy]# systemctl restart ceph-mon@ceph-node01
[root@ceph-node01 ceph-deploy]# systemctl restart ceph-mon@ceph-node02
[root@ceph-node01 ceph-deploy]# systemctl restart ceph-mon@ceph-node03
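To confirm that the restarted monitors actually picked up the new value, the mon admin socket can be queried; a small sketch (run locally on the node that hosts each mon, socket naming per the default setup):

ceph daemon mon.ceph-node01 config get mon_max_pg_per_osd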

Upload a file again to test

[root@ceph-node01 ceph-deploy]# s3cmd put /etc/fstab s3://s3cmd-demo/
upload: '/etc/fstab' -> 's3://s3cmd-demo/fstab' [1 of 1]
 465 of 465 100% in 1s 337.90 B/s done
[root@ceph-node01 ceph-deploy]#
4. Common operations
# 1. List all buckets
[root@ceph-node01 ~]# s3cmd ls
2020-10-06 04:13 s3://ceph-s3-bucket
2020-10-06 04:34 s3://s3cmd-demo
2020-10-06 08:07 s3://swift-demo

# 2. Create a bucket
[root@ceph-node01 ~]# s3cmd mb s3://gwj-demo/
Bucket 's3://gwj-demo/' created

# 3. Delete an empty bucket
[root@ceph-node01 ~]# s3cmd rb s3://gwj-demo/
Bucket 's3://gwj-demo/' removed

# 4. Upload a file to a bucket
[root@ceph-node01 ~]# s3cmd put ip s3://s3cmd-demo/
upload: 'ip' -> 's3://s3cmd-demo/ip' [1 of 1]
 78 of 78 100% in 0s 2.21 KB/s done

# 5. Upload a directory to a bucket
[root@ceph-node01 ~]# s3cmd put ./ s3://s3cmd-demo/
ERROR: Parameter problem: Use --recursive to upload a directory: ./
[root@ceph-node01 ~]# s3cmd put ./ s3://s3cmd-demo/ --recursive
...
upload: './ceph-deploy/get-pip.py' -> 's3://s3cmd-demo/ceph-deploy/get-pip.py' [36 of 39]
 1885433 of 1885433 100% in 0s 18.00 MB/s done
upload: './ip' -> 's3://s3cmd-demo/ip' [37 of 39]
 78 of 78 100% in 0s 4.60 KB/s done
upload: './s3client.py' -> 's3://s3cmd-demo/s3client.py' [38 of 39]
 655 of 655 100% in 0s 10.23 KB/s done
upload: './size.log' -> 's3://s3cmd-demo/size.log' [39 of 39]
 2448 of 2448 100% in 0s 33.82 KB/s done
[root@ceph-node01 ~]#

# 6. List the contents of a bucket
[root@ceph-node01 ~]# s3cmd ls s3://s3cmd-demo/
                          DIR s3://s3cmd-demo/.cache/
                          DIR s3://s3cmd-demo/.ssh/
                          DIR s3://s3cmd-demo/ceph-deploy/
2020-10-06 10:24 19887 s3://s3cmd-demo/.bash_history
2020-10-06 10:24 18 s3://s3cmd-demo/.bash_logout
2020-10-06 10:24 176 s3://s3cmd-demo/.bash_profile
2020-10-06 10:24 176 s3://s3cmd-demo/.bashrc
2020-10-06 10:24 1077 s3://s3cmd-demo/.cephdeploy.conf
2020-10-06 10:24 100 s3://s3cmd-demo/.cshrc
2020-10-06 10:24 0 s3://s3cmd-demo/.history
2020-10-06 10:24 2140 s3://s3cmd-demo/.s3cfg
2020-10-06 10:24 12288 s3://s3cmd-demo/.swp
2020-10-06 10:24 129 s3://s3cmd-demo/.tcshrc
2020-10-06 10:24 5864 s3://s3cmd-demo/.viminfo
2020-10-06 10:24 974 s3://s3cmd-demo/anaconda-ks.cfg
2020-10-06 10:24 3454 s3://s3cmd-demo/ceph-deploy-ceph.log
2020-10-06 08:57 465 s3://s3cmd-demo/fstab
2020-10-06 10:24 78 s3://s3cmd-demo/ip
2020-10-06 10:24 655 s3://s3cmd-demo/s3client.py
2020-10-06 10:24 2448 s3://s3cmd-demo/size.log
[root@ceph-node01 ~]#

# 7. Download a single file
[root@ceph-node01 gwj]# s3cmd get s3://s3cmd-demo/size.log
download: 's3://s3cmd-demo/size.log' -> './size.log' [1 of 1]
 2448 of 2448 100% in 0s 242.11 KB/s done
[root@ceph-node01 gwj]# ls
size.log
[root@ceph-node01 gwj]#

# 8. Delete an object from a bucket
[root@ceph-node01 gwj]# s3cmd del s3://s3cmd-demo/size.log
delete: 's3://s3cmd-demo/size.log'
[root@ceph-node01 gwj]# s3cmd get s3://s3cmd-demo/size.log
ERROR: Parameter problem: File ./size.log already exists. Use either of --force / --continue / --skip-existing or give it a new name.
[root@ceph-node01 gwj]#

# 9. Get the space used by a bucket
[root@ceph-node01 gwj]# s3cmd du -H s3://s3cmd-demo/
   3M 39 objects s3://s3cmd-demo/
[root@ceph-node01 gwj]# s3cmd du -H s3://s3cmd-demo/.ssh
   3K 4 objects s3://s3cmd-demo/.ssh
[root@ceph-node01 gwj]#

# 10. View object information
[root@ceph-node01 gwj]# s3cmd info s3://s3cmd-demo/ip
s3://s3cmd-demo/ip (object):
   File size: 78
   Last mod: Tue, 06 Oct 2020 10:24:29 GMT
   MIME type: text/plain
   Storage: STANDARD
   MD5 sum: fd3066a2b8b805e905aeb073afd970cf
   SSE: none
   Policy: none
   CORS: none
   ACL: Ceph S3 User Demo: FULL_CONTROL
   x-amz-meta-s3cmd-attrs: atime:1601969067/ctime:1601287013/gid:0/gname:root/md5:fd3066a2b8b805e905aeb073afd970cf/mode:33188/mtime:1601287013/uid:0/uname:root
[root@ceph-node01 gwj]#
# 11. Copy objects between two buckets
[root@ceph-node01 gwj]# s3cmd cp s3://s3cmd-demo/ip s3://test-demo/
remote copy: 's3://s3cmd-demo/ip' -> 's3://test-demo/ip'
[root@ceph-node01 gwj]# s3cmd cp --recursive s3://s3cmd-demo/.ssh s3://test-demo/
remote copy: 's3://s3cmd-demo/.ssh/authorized_keys' -> 's3://test-demo/.ssh/authorized_keys'
remote copy: 's3://s3cmd-demo/.ssh/id_rsa' -> 's3://test-demo/.ssh/id_rsa'
remote copy: 's3://s3cmd-demo/.ssh/id_rsa.pub' -> 's3://test-demo/.ssh/id_rsa.pub'
remote copy: 's3://s3cmd-demo/.ssh/known_hosts' -> 's3://test-demo/.ssh/known_hosts'
[root@ceph-node01 gwj]#

# 12. Move objects between two buckets
[root@ceph-node01 gwj]# s3cmd ls s3://s3cmd-demo/.swp
2020-10-06 10:24 12288 s3://s3cmd-demo/.swp
[root@ceph-node01 gwj]# s3cmd mv s3://s3cmd-demo/.swp s3://test-demo/
move: 's3://s3cmd-demo/.swp' -> 's3://test-demo/.swp'
[root@ceph-node01 gwj]# s3cmd ls s3://test-demo/.swp
2020-10-06 10:36 12288 s3://test-demo/.swp
[root@ceph-node01 gwj]# s3cmd ls s3://s3cmd-demo/.swp
[root@ceph-node01 gwj]#

# 13. List the files and directories that would be synced, without syncing them
[root@ceph-node01 ~]# s3cmd sync --dry-run ./ s3://s3cmd-demo
upload: './.swp' -> 's3://s3cmd-demo/.swp'
upload: './ip' -> 's3://s3cmd-demo/ip'
upload: './.cache/abrt/lastnotification' -> 's3://s3cmd-demo/.cache/abrt/lastnotification'
remote copy: 'size.log' -> 'gwj/size.log'
WARNING: Exiting now because of --dry-run
[root@ceph-node01 ~]#

# 14. Delete objects from the bucket that no longer exist locally
[root@ceph-node01 a]# ls
10.txt 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt
[root@ceph-node01 a]#
[root@ceph-node01 a]# s3cmd ls s3://test2-demo/
2020-10-06 11:17 43 s3://test2-demo/1.txt
2020-10-06 11:17 43 s3://test2-demo/10.txt
2020-10-06 11:17 43 s3://test2-demo/2.txt
2020-10-06 11:17 43 s3://test2-demo/3.txt
2020-10-06 11:17 43 s3://test2-demo/4.txt
2020-10-06 11:17 43 s3://test2-demo/5.txt
2020-10-06 11:17 43 s3://test2-demo/6.txt
2020-10-06 11:17 43 s3://test2-demo/7.txt
2020-10-06 11:17 43 s3://test2-demo/8.txt
2020-10-06 11:17 43 s3://test2-demo/9.txt
[root@ceph-node01 a]# rm -rf 10.txt
[root@ceph-node01 a]# s3cmd sync --delete-removed ./ s3://test2-demo/
delete: 's3://test2-demo/10.txt'
[root@ceph-node01 a]# s3cmd ls s3://test2-demo/
2020-10-06 11:17 43 s3://test2-demo/1.txt
2020-10-06 11:17 43 s3://test2-demo/2.txt
2020-10-06 11:17 43 s3://test2-demo/3.txt
2020-10-06 11:17 43 s3://test2-demo/4.txt
2020-10-06 11:17 43 s3://test2-demo/5.txt
2020-10-06 11:17 43 s3://test2-demo/6.txt
2020-10-06 11:17 43 s3://test2-demo/7.txt
2020-10-06 11:17 43 s3://test2-demo/8.txt
2020-10-06 11:17 43 s3://test2-demo/9.txt
[root@ceph-node01 a]#
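One operation that often comes up and is not covered above is making an object publicly readable. A hedged sketch using s3cmd's ACL options (verify the flags with s3cmd --help on your version; bucket and object names reused from the examples above), which complements the ACL line shown by s3cmd info:

# make a single object world-readable, then fetch it anonymously over HTTP
s3cmd setacl --acl-public s3://s3cmd-demo/ip
curl http://100.73.18.152/s3cmd-demo/ip
# revert to private
s3cmd setacl --acl-private s3://s3cmd-demo/ip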
Accessing RGW with Swift

1. Create a Swift user
[root@ceph-node01 ceph-deploy]# radosgw-admin subuser create --uid ceph-s3-user --subuser=ceph-s3-user:swift --access=full {     "user_id": "ceph-s3-user",     "display_name": "Ceph S3 User Demo",     "email": "",     "suspended": 0,     "max_buckets": 1000,     "subusers": [         {             "id": "ceph-s3-user:swift",             "permissions": "full-control"         }     ],     "keys": [         {             "user": "ceph-s3-user",             "access_key": "V3J9L4M1WKV5O5ECAKPU",             "secret_key": "f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw"         }     ],     "swift_keys": [         {             "user": "ceph-s3-user:swift",             "secret_key": "ZIOOU8Xcfe3m6ZZapK5P2rU0GGPaiS31chy9yvMW"         }     ],     "caps": [],     "op_mask": "read, write, delete",     "default_placement": "",     "default_storage_class": "",     "placement_tags": [],     "bucket_quota": {         "enabled": false,         "check_on_raw": false,         "max_size": -1,         "max_size_kb": 0,         "max_objects": -1     },     "user_quota": {         "enabled": false,         "check_on_raw": false,         "max_size": -1,         "max_size_kb": 0,         "max_objects": -1     },     "temp_url_keys": [],     "type": "rgw",     "mfa_ids": [] }  [root@ceph-node01 ceph-deploy]# 
2. Create a secret for the Swift user
[root@ceph-node01 ceph-deploy]# radosgw-admin key create --subuser=ceph-s3-user:swift --key-type=swift --gen-secret {     "user_id": "ceph-s3-user",     "display_name": "Ceph S3 User Demo",     "email": "",     "suspended": 0,     "max_buckets": 1000,     "subusers": [         {             "id": "ceph-s3-user:swift",             "permissions": "full-control"         }     ],     "keys": [         {             "user": "ceph-s3-user",             "access_key": "V3J9L4M1WKV5O5ECAKPU",             "secret_key": "f5LqLVYOVNu38cuQwi0jXC2ZTboCSJDmdvB8oeYw"         }     ],     "swift_keys": [         {             "user": "ceph-s3-user:swift",             "secret_key": "0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE"         }     ],     "caps": [],     "op_mask": "read, write, delete",     "default_placement": "",     "default_storage_class": "",     "placement_tags": [],     "bucket_quota": {         "enabled": false,         "check_on_raw": false,         "max_size": -1,         "max_size_kb": 0,         "max_objects": -1     },     "user_quota": {         "enabled": false,         "check_on_raw": false,         "max_size": -1,         "max_size_kb": 0,         "max_objects": -1     },     "temp_url_keys": [],     "type": "rgw",     "mfa_ids": [] }  [root@ceph-node01 ceph-deploy]# 
3. Install the Swift client with pip
# Note: if pip is already available, there is no need to install it again
[root@ceph-node01 ceph-deploy]# curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
...
[root@ceph-node01 ceph-deploy]# python get-pip.py
...
[root@ceph-node01 ceph-deploy]# pip install python-swiftclient
...
4. Use the swift command-line tool
[root@ceph-node01 ceph-deploy]# swift -A http://100.73.18.152/auth -U ceph-s3-user:swift -K 0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE list
ceph-s3-bucket
s3cmd-demo
[root@ceph-node01 ceph-deploy]#
5. Configure the credentials as environment variables
[root@ceph-node01 ceph-deploy]# cat /etc/profile
...
export ST_AUTH=http://100.73.18.152/auth
export ST_USER=ceph-s3-user:swift
export ST_KEY=0M1GdRTvMSU3fToOxEVXrBjItKLBKtu8xhn3DcEE
[root@ceph-node01 ceph-deploy]# source /etc/profile
[root@ceph-node01 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
[root@ceph-node01 ceph-deploy]#
6. Create a bucket
[root@ceph-node01 ceph-deploy]# swift post swift-demo
[root@ceph-node01 ceph-deploy]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
[root@ceph-node01 ceph-deploy]#
7. Test uploading a single file
[root@ceph-node01 ceph-deploy]# swift upload swift-demo /etc/fstab
Object HEAD failed: http://100.73.18.152/swift/v1/swift-demo/etc/fstab 416 Requested Range Not Satisfiable
[root@ceph-node01 ceph-deploy]#
8. Upload again to test (this succeeds once the 416/PG issue addressed in the s3cmd section has been fixed)
[root@ceph-node01 a]# swift upload swift-demo /etc/fstab
etc/fstab
[root@ceph-node01 a]#
9. Common operations
# 1. List all buckets
[root@ceph-node01 a]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
test-demo
test2-demo
[root@ceph-node01 a]#

# 2. List all buckets with object counts and sizes
[root@ceph-node01 a]# swift list --lh
    0 0 2020-10-06 04:13:10 ceph-s3-bucket
   37 3.6M 2020-10-06 04:34:49 s3cmd-demo
 2360 33M 2020-10-06 08:07:55 swift-demo
    7 16K 2020-10-06 10:32:02 test-demo
    9 387 2020-10-06 11:17:00 test2-demo
 2.4K 36M
[root@ceph-node01 a]#

# 3. List a single bucket
[root@ceph-node01 a]# swift list swift-demo

# 4. Upload a single file to a bucket
[root@ceph-node01 a]# swift upload swift-demo /etc/fstab
etc/fstab
[root@ceph-node01 a]#

# 5. Upload a directory to a given bucket
[root@ceph-node01 a]# swift upload swift-demo /etc/

# 6. Swift account status
[root@ceph-node01 a]# swift stat
                                    Account: v1
                                 Containers: 5
                                    Objects: 2413
                                      Bytes: 38701415
Objects in policy "default-placement-bytes": 0
  Bytes in policy "default-placement-bytes": 0
   Containers in policy "default-placement": 5
      Objects in policy "default-placement": 2413
        Bytes in policy "default-placement": 38701415
                     X-Openstack-Request-Id: tx000000000000000001302-005f7c5afd-a638-default
                X-Account-Bytes-Used-Actual: 45948928
                                 X-Trans-Id: tx000000000000000001302-005f7c5afd-a638-default
                                X-Timestamp: 1601985277.38095
                               Content-Type: text/plain; charset=utf-8
                              Accept-Ranges: bytes
[root@ceph-node01 a]#

# 7. Create a bucket
[root@ceph-node01 a]# swift post swift-test
[root@ceph-node01 a]# swift list
ceph-s3-bucket
s3cmd-demo
swift-demo
swift-test
test-demo
test2-demo
[root@ceph-node01 a]#

# 8. Delete a bucket
[root@ceph-node01 a]# swift delete swift-demo

# 9. Delete a specific object
[root@ceph-node01 a]# swift delete swift-test root/a/1.txt
root/a/1.txt
[root@ceph-node01 a]#

# 10. When uploading large files, -S specifies the segment size
[root@ceph-node01 a]# swift upload swift-test /home/log.txt
home/log.txt
[root@ceph-node01 a]# swift upload swift-test -S 102400000 /home/log2.txt
home/log2.txt segment 5
home/log2.txt segment 3
home/log2.txt segment 1
home/log2.txt segment 0
home/log2.txt segment 2
home/log2.txt segment 4
home/log2.txt
[root@ceph-node01 a]#
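The list above covers listing, uploading, and deleting but not retrieval; a short hedged sketch of downloading with the same swift client (container and object names reused from the examples above):

# download a single object into the current directory
swift download swift-test home/log.txt
# download every object in a container
swift download swift-test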
Summary

From concepts and installation through hands-on use, this has been a brief introduction to Ceph object storage and its command-line tools.

This article is reprinted from the WeChat public account "Linux点滴运维践诺"; please contact that account before republishing.

 

 





