= Links =
*[https://ceph.io/ Homepage]
*[https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster Deploy Hyper-Converged Ceph Cluster]
*[https://docs.ceph.com/en/latest/rados/operations/operating/ Operating a Ceph cluster]
*[https://docs.ceph.com/docs/master/start/os-recommendations/ Documentation]
*[https://ceph.io/en/discover/technology/ Ceph technology]
*[https://docs.ceph.com/en/latest/architecture/ Ceph Architecture]
*[https://docs.ceph.com/en/latest/mgr/zabbix/ Monitoring ceph with zabbix]

=Elements=
==OSD==
*[https://docs.ceph.com/en/latest/man/8/ceph-osd/ Object Storage Daemon]
Usually there is one OSD daemon per physical disk. See the commands below for a quick overview of the OSDs in a cluster.
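A minimal sketch of inspecting OSDs with the standard ceph CLI (needs a working connection to the cluster):
 # Show the CRUSH tree of hosts and their OSDs, with up/down and in/out state
 ceph osd tree
 # Show per-OSD disk usage and PG counts
 ceph osd df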
= Commands =
== ceph ==
=== Show status ===
 ceph status
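A few related status commands, assuming a standard ceph CLI; ceph -s is simply shorthand for ceph status:
 # Shorthand for "ceph status"
 ceph -s
 # Explain current health warnings/errors in detail
 ceph health detail
 # Follow the cluster log live
 ceph -w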
== pveceph ==
=== List pools ===
 pveceph pool ls
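pveceph is the Proxmox VE wrapper around the ceph tooling; as a sketch, its status subcommand gives roughly the same overview as ceph status when run on a Proxmox node that is part of the cluster:
 # Proxmox wrapper, roughly equivalent to "ceph status"
 pveceph status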
= Docs =
*[https://blog.zabbix.com/ceph-storage-monitoring-with-zabbix/9665/ Monitoring ceph with zabbix]
==RBD==
RBD (RADOS Block Device) exposes block devices backed by the cluster; see the example commands below.
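A minimal sketch of working with RBD images using the standard rbd CLI; the pool name "mypool" and image name "myimage" are placeholders:
 # List images in a pool
 rbd ls mypool
 # Create a 1024 MB image
 rbd create --size 1024 mypool/myimage
 # Show details of an image
 rbd info mypool/myimage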
==PG (Placement Group)==
*[https://docs.ceph.com/en/latest/dev/placement-group/ PG Notes]
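Some standard commands for inspecting placement groups; the pool name is a placeholder, and autoscale-status only returns data if the pg_autoscaler module is enabled:
 # Summary of PG states
 ceph pg stat
 # Current pg_num of a pool
 ceph osd pool get mypool pg_num
 # What the pg_autoscaler would do with each pool
 ceph osd pool autoscale-status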
== Reload after editing ceph.conf ==
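A sketch, assuming systemd-managed daemons as on a Proxmox node: ceph.conf is only read at daemon start, so after editing it the affected daemons have to be restarted (the node name and OSD id below are placeholders). Many options can also be changed at runtime with "ceph config set" without touching ceph.conf at all.
 # Restart a single monitor / manager / OSD on this node
 systemctl restart ceph-mon@nodename.service
 systemctl restart ceph-mgr@nodename.service
 systemctl restart ceph-osd@0.service
 # Or restart all ceph daemons on the node
 systemctl restart ceph.target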
=FAQ=
== mgr: no daemons active ==
Seen when running
 ceph status
or similar commands; it can also show up as "no active mgr".
In at least one case this was solved by recreating the manager on one (the first) node:
 pveceph mgr destroy pvetest1
 pveceph mgr create
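Afterwards ceph status should show an active manager again; assuming a standard ceph CLI, the manager state can also be checked directly:
 # Shows the active mgr name and whether a manager is available
 ceph mgr stat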