NVM Express
Links
- nvmexpress.org
- arch linux nvme doc
- How to Deploy High-Performance Multi-Tenant Object Storage with NVMe
- NVMe-oF Host Configuration for SUSE Linux Enterprise Server 15 SP3 with ONTAP
NVMe/TCP
- https://www.networkworld.com/article/3609921/nvme-over-tcp-how-it-supercharges-ssd-storage-using-standard-ip-networks.html
- nvme and tcp
- NVMe over TCP
- https://tekdeeps.com/what-is-nvme-over-tcp-how-to-use/
- Data in a Flash, Part III: NVMe over Fabrics Using TCP
- nvme-tcp storage operations
- NVMe-oF over IP: A complete SAN platform
Qemu and NVMe

- Qemu and nvme: https://qemu-project.gitlab.io/qemu/system/devices/nvme.html
Hardware
Slots
M.2
The old way, on standard motherboards.

- https://arstechnica.com/gadgets/2015/02/understanding-m-2-the-interface-that-will-speed-up-your-next-ssd/
U.2
Hotplug

- https://en.wikipedia.org/wiki/U.2
Tools
nvme-cli
nvmetcli
NVMe target admin tool
https://github.com/JunxiongGuan/nvmetcli
Documentation
NVMe device names
Example: /dev/nvme0n2p3
This means NVMe device 0, namespace 2, partition 3. The first namespace, n1, will always exist.
NVMe multipathing

- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_storage_devices/enabling-multipathing-on-nvme-devices_managing-storage-devices
Namespaces
MNAN: Maximum Number of Allowed Namespaces
- nvme namespaces
- Drew Thorstensen - NVME namespaces
- https://narasimhan-v.github.io/2020/06/12/Managing-NVMe-Namespaces.html
NVMe over Fibre

- NVMe-oF Target Getting Started Guide: https://spdk.io/doc/nvmf.html
HOWTO
On NVMe target
Deleting an nvme target
NAME=nvmetest
PORT=1
NODENUM=1
NSID=1
cd /sys/kernel/config/nvmet
# Unlink the subsystem from the port, then tear everything down
rm -f ports/$PORT/subsystems/$NAME-$NODENUM
rmdir ports/$PORT
rmdir subsystems/$NAME-$NODENUM/namespaces/$NSID
rmdir subsystems/$NAME-$NODENUM
List devices
nvme list
Get details:
nvme id-ctrl /dev/nvme0
If you want to find, for example, the IP address and NQN of a device:
nvme list-subsys /dev/nvme2n1
or just
nvme list-subsys
Show subnqn of device
nvme id-ctrl /dev/nvme1n2 | grep subnqn
Namespaces
Difference between size and capacity
nsze (Namespace Size) vs ncap (Namespace Capacity); flbas (Formatted LBA Size) determines the block size.
To check them in a namespace:
nvme id-ns /dev/nvme1n3 | egrep "nsze|ncap|flbas"
Get block size:
nvme id-ns /dev/nvme1n3 | grep "in use"
which gives something like
lbaf 1 : ms:0 lbads:12 rp:0 (in use)
where the block size is 2^lbads bytes, so in this case 2^12 = 4096.
Create namespace
Get the controller ID:
nvme id-ctrl /dev/nvme1 | grep cntlid
which gives something like
cntlid : 0x1
Create a roughly 100 GB namespace; -s (--nsze) and -c (--ncap) are given in blocks, and -b (--blocksize) sets the block size in bytes:
nvme create-ns /dev/nvme1 -s 26214387 -c 26214387 -b 4096
TODO: what about --flbas instead of blocksize?
To actually use the namespace you need to attach it to a controller first, passing the nsid reported by create-ns and the cntlid found above:

nvme attach-ns /dev/nvme1 -n 1 -c 1
List namespaces
nvme list-ns /dev/nvme1
Show info about namespace
nvme id-ns /dev/nvme1n1
Show number of supported namespaces

nvme id-ctrl /dev/nvme1 | grep nn

Show total capacity

nvme id-ctrl /dev/nvme1 | grep tnvmcap

Show unallocated capacity

nvme id-ctrl /dev/nvme1 | grep unvmcap
NVMe over fabrics
https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp
You might want to use nvmetcli on the target:
https://github.com/JunxiongGuan/nvmetcli
Dependencies: python3-configshell-fb
NVMe-oF target
/bin/mount -t configfs none /sys/kernel/config/
Show active targets
Configured subsystems show up in configfs:

ls /sys/kernel/config/nvmet/subsystems/
NVMe/TCP Host
In this context, Host means client/initiator.
See Configuring an NVMe/TCP host
On Client
Modules
nvme_rdma, nvme_core, nvme_fabrics, and more

Find shares on target

nvme discover -t rdma -a 192.168.100.8 -s 4420
Connect target
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.100.8 -s 4420
Connect on boot
Check /etc/nvme/discovery.conf
systemctl enable nvmf-autoconnect.service
Show remote connections
nvme list-subsys
Monitoring nvme

- Monitoring nvme with zabbix: https://github.com/narbehaj/zabbix-nvme
Terms and acronyms
flbas
Formatted LBA Size

nlbaf

Number of LBA Formats
FAQ
Errors on client/host
Failed to open /dev/nvme-fabrics: No such file or directory
modprobe nvme-tcp
or maybe nvme-rdma, depending on the transport
Failed to write to /dev/nvme-fabrics: Connection refused
dmesg will probably show
nvme0: failed to connect socket: -111
Maybe you're using something like InfiniBand; try
nvme discover -t rdma ...
Failed to write to /dev/nvme-fabrics: Input/output error
Maybe you're trying to connect to a nonexistent NQN.
Failed to write to /dev/nvme-fabrics: Invalid argument
Check dmesg
nvme nvme0: Invalid MNAN value 1024
Try
modprobe nvme_core multipath=N
(remember to rmmod first :)
nvme_fabrics: no handler found for transport drma.
Check your /etc/nvme/discovery.conf; the transport should be rdma :)
nvmet_tcp: malformed ip/port passed: :4420
Maybe you forgot to set addr_traddr?
Duplicate cntlid 1 with nvme0
Make sure the next connection uses a new controller ID: /dev/nvme0X vs /dev/nvme1X. TODO
IDs don't match for shared namespace 1

??