Ceph
=Links=
*[https://ceph.io/ Homepage]
*[https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster Deploy Hyper-Converged Ceph Cluster]
*[https://docs.ceph.com/en/latest/rados/operations/operating/ Operating a Ceph cluster]
*[https://docs.ceph.com/docs/master/start/os-recommendations/ Documentation]
*[https://ceph.io/en/discover/technology/ Ceph technology]
*[https://docs.ceph.com/en/latest/architecture/ Ceph Architecture]
*[https://docs.ceph.com/en/latest/mgr/zabbix/ Monitoring ceph with zabbix]
=Elements=
==OSD==
*[https://docs.ceph.com/en/latest/man/8/ceph-osd/ Object Storage Daemon]
Usually one OSD runs per physical disk; see the sketch below.
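A quick way to inspect how OSDs map onto hosts and disks (standard ceph CLI; output columns vary slightly between releases):
 # Show the CRUSH tree: hosts, their OSDs, and up/down status
 ceph osd tree
 # Per-OSD capacity and utilisation
 ceph osd df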
= Commands =
== ceph ==
=== Show status ===
 ceph status
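When the one-line status is not enough, these related standard commands give more detail:
 # Explain any HEALTH_WARN/HEALTH_ERR conditions
 ceph health detail
 # Follow cluster events live (Ctrl-C to stop)
 ceph -w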
== pveceph ==
=== List pools ===
 pveceph pool ls
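Pools can also be created and removed with pveceph; a hedged sketch, where the pool name "testpool" is made up:
 # Create a pool (name is an example)
 pveceph pool create testpool
 # Remove it again; destroying a pool deletes its data
 pveceph pool destroy testpool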
= Docs =
*[https://blog.zabbix.com/ceph-storage-monitoring-with-zabbix/9665/ Monitoring ceph with zabbix]
==RBD==
RADOS Block Device
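A minimal look at RBD from the command line; the pool "rbd" is the conventional default, and the image name below is hypothetical:
 # List block device images in a pool
 rbd ls rbd
 # Show size, features and other metadata of one image ("myimage" is made up)
 rbd info rbd/myimage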
==PG (Placement Group)==
*[https://docs.ceph.com/en/latest/dev/placement-group/ PG Notes]
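To check placement-group state on a running cluster (standard ceph CLI):
 # Aggregate count of PGs per state
 ceph pg stat
 # Per-pool PG counts plus what the autoscaler would pick
 ceph osd pool autoscale-status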
== Reload after editing ceph.conf ==
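As a hedged sketch (systemd unit names assume the default layout): ceph.conf is only read at daemon start-up, so restart the affected daemons after editing it, or change settings at runtime instead:
 # Restart all monitors / OSDs on this node
 systemctl restart ceph-mon.target
 systemctl restart ceph-osd.target
 # Alternative: set an option at runtime via the cluster config database
 ceph config set osd osd_max_backfills 2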
= FAQ =
== mgr: no daemons active ==
Seen when running
 ceph status
or similar; it also shows up as "no active mgr".

In at least one case this was solved by running, on one (the first) node:
 pveceph mgr destroy pvetest1
 pveceph mgr create
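To verify the fix took, check the status again; the mgr line should now list an active daemon (exact wording varies by release):
 # Expect something like "mgr: pvetest1(active)" in the services section
 ceph status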