NVM Express

=Links=
*[https://nvmexpress.org/ nvmexpress.org]
*[https://wiki.archlinux.org/title/Solid_state_drive/NVMe arch linux nvme doc]
*[https://www.pogolinux.com/blog/high-performance-multi-tenant-object-storage-nvme/ How to Deploy High-Performance Multi-Tenant Object Storage with NVMe]
*[https://docs.netapp.com/us-en/ontap-sanhost/nvme_sles15_sp3.html NVMe-oF Host Configuration for SUSE Linux Enterprise Server 15 SP3 with ONTAP]
==NVMe/TCP==
*https://www.networkworld.com/article/3609921/nvme-over-tcp-how-it-supercharges-ssd-storage-using-standard-ip-networks.html
*[https://www.computerweekly.com/feature/NVMe-over-TCP-brings-super-fast-flash-over-standard-IP-networks NVMe over TCP]
*https://tekdeeps.com/what-is-nvme-over-tcp-how-to-use/
*[https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp Data in a Flash, Part III: NVMe over Fabrics Using TCP]
*[https://infohub.delltechnologies.com/l/nvme-nvme-tcp-and-dell-smartfabric-storage-software-overview-ip-san-solution-primer-1/nvme-tcp-storage-operations nvme-tcp storage operations]
*[https://www.techtarget.com/searchstorage/post/NVMe-oF-over-IP-A-complete-SAN-platform NVMe-oF over IP: A complete SAN platform]
==Qemu and NVMe==
*[https://qemu-project.gitlab.io/qemu/system/devices/nvme.html Qemu and nvme]
==Hardware==
===Slots===
====M.2====
The old way, on standard motherboards.
https://arstechnica.com/gadgets/2015/02/understanding-m-2-the-interface-that-will-speed-up-your-next-ssd/
====U.2====
*https://www.drewthorst.com/posts/nvme/namespaces/readme/
Supports hotplug.
*https://en.wikipedia.org/wiki/U.2
=Tools=
==nvme-cli==
The standard NVMe management tool for Linux; provides the '''nvme''' command used throughout this page.
==nvmetcli==
NVMe target admin tool
https://github.com/JunxiongGuan/nvmetcli
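A sketch of non-interactive use, assuming a configuration was saved earlier to a JSON file (the path is an example):
 # load a saved target configuration into the kernel via configfs
 nvmetcli restore /etc/nvmet/config.json
 # tear the running configuration down again
 nvmetcli clear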
=Documentation=
==NVMe device names==
*[https://utcc.utoronto.ca/~cks/space/blog/linux/NVMeDeviceNames NVME device names]
Example: '''/dev/nvme0n2p3'''
means NVMe device 0, namespace 2, partition 3.
The first namespace, '''n1''', will always exist.
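As a quick illustration (hypothetical device and sizes), lsblk shows the hierarchy directly:
 lsblk /dev/nvme0n1
which prints something like
 NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
 nvme0n1     259:0    0  100G  0 disk
 ├─nvme0n1p1 259:1    0    1G  0 part /boot
 └─nvme0n1p2 259:2    0   99G  0 part /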
==NVMe multipathing==
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_storage_devices/enabling-multipathing-on-nvme-devices_managing-storage-devices
==Namespaces==
'''MNAN''': Maximum Number of Allowed Namespaces
*[https://nvmexpress.org/resource/nvme-namespaces/ nvme namespaces]
*[https://www.drewthorst.com/posts/nvme/namespaces/readme/ Drew Thorstensen - NVME namespaces]
*https://narasimhan-v.github.io/2020/06/12/Managing-NVMe-Namespaces.html
==NVMe over Fabrics==
*[https://spdk.io/doc/nvmf.html NVMe-oF Target Getting Started Guide]


=HOWTO=
==On NVMe target==
===Deleting an nvme target===
 NAME=nvmetest
 PORT=1
 NODENUM=1
 cd /sys/kernel/config/nvmet
 # unlink the subsystem from the port, then remove the port, namespace and subsystem
 rm ports/$PORT/subsystems/$NAME
 rmdir ports/$PORT
 rmdir subsystems/$NAME-$NODENUM/namespaces/1/
 rmdir subsystems/$NAME-$NODENUM/
==List devices==
 nvme list
Get details:
 nvme id-ctrl /dev/nvme0
To find, for example, the IP and NQN of a device:
 nvme list-subsys /dev/nvme2n1
or just
 nvme list-subsys
==Show subnqn of device==
 nvme id-ctrl /dev/nvme1n2 | grep subnqn


==Namespaces==
*[https://nvmexpress.org/resource/nvme-namespaces/ nvme namespaces]
*https://narasimhan-v.github.io/2020/06/12/Managing-NVMe-Namespaces.html
===Difference between size and capacity===
'''nsze''' is the namespace size and '''ncap''' the namespace capacity, both counted in logical blocks; with thin provisioning ncap can be smaller than nsze. '''flbas''' selects the LBA format in use, i.e. the block size.
 
 
To check them in a namespace:
 nvme id-ns /dev/nvme1n3 | egrep "nsze|ncap|flbas"
 
Get block size:
 nvme id-ns /dev/nvme1n3 | grep "in use"
which gives something like
 lbaf  1 : ms:0  lbads:12 rp:0 (in use)
where the block size is 2^lbads bytes, so in this case 2^12 = 4096.
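Combining the two fields, the usable capacity in bytes is ncap × 2^lbads; a quick shell check with the numbers from the examples on this page:
 # 26214387 blocks of 2^12 = 4096 bytes each
 echo $((26214387 * 4096))
 # 107374129152 bytes, i.e. roughly 100 GiB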
 
===Create namespace===
*[https://narasimhan-v.github.io/2020/06/12/Managing-NVMe-Namespaces.html Managing NVMe namespaces]
 
Get the controller id:
 nvme id-ctrl /dev/nvme1 | grep cntlid
which gives something like
 cntlid    : 0x1
 
 
Create a 100 GB namespace; -s (--nsze) and -c (--ncap) are given in blocks, -b (--block-size) in bytes:
 nvme create-ns /dev/nvme1 -s 26214387 -c 26214387 -b 4096
TODO: what about --flbas instead of blocksize?
 
To actually use the namespace you need to attach it to a controller first, giving the namespace id reported by create-ns and the controller id found above (0x1 = 1):
 nvme attach-ns /dev/nvme1 -n 1 -c 1
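For reference, the block count for a given size is just size divided by block size, and a rescan makes the kernel pick up the new namespace (a sketch; the exact count a drive accepts may differ slightly, as the 26214387 above suggests):
 # blocks needed for 100 GiB at 4096 bytes per block
 echo $((100 * 1024 * 1024 * 1024 / 4096))
 # 26214400
 # make the new namespace show up as a block device
 nvme ns-rescan /dev/nvme1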
 


===List namespaces===
 nvme list-ns /dev/nvme1
===Show info about namespace===
 nvme id-ns /dev/nvme1n1


===Show number of available namespaces===
 nvme id-ctrl /dev/nvme1|grep '''nn'''
 
 
===Show total capacity===
Total NVM capacity, in bytes:
 nvme id-ctrl /dev/nvme1|grep '''tnvmcap'''
 
===Show unallocated capacity===
Unallocated NVM capacity, in bytes:
 nvme id-ctrl /dev/nvme1|grep '''unvmcap'''
 
==NVMe over fabrics==
https://www.linuxjournal.com/content/data-flash-part-iii-nvme-over-fabrics-using-tcp
 
You might want to use nvmetcli on the target:
 
https://github.com/JunxiongGuan/nvmetcli
 
Dependencies: python3-configshell-fb
 
===NVMe-oF target===
Make sure configfs is mounted:
 /bin/mount -t configfs none /sys/kernel/config/
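A minimal sketch of exporting a block device over NVMe/TCP by hand (subsystem NQN, backing device and address are examples; this is the same configfs layout the deletion script above tears down):
 modprobe nvmet
 modprobe nvmet-tcp
 SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-01.org.example:testsub
 PORT=/sys/kernel/config/nvmet/ports/1
 # create the subsystem and let any host connect (fine for testing only)
 mkdir $SUBSYS
 echo 1 > $SUBSYS/attr_allow_any_host
 # back namespace 1 with a block device and enable it
 mkdir $SUBSYS/namespaces/1
 echo /dev/sdb > $SUBSYS/namespaces/1/device_path
 echo 1 > $SUBSYS/namespaces/1/enable
 # create a TCP port and link the subsystem to it
 mkdir $PORT
 echo tcp > $PORT/addr_trtype
 echo ipv4 > $PORT/addr_adrfam
 echo 192.168.100.8 > $PORT/addr_traddr
 echo 4420 > $PORT/addr_trsvcid
 ln -s $SUBSYS $PORT/subsystems/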
 
====Show active targets====
Configured subsystems and ports show up in configfs:
 ls /sys/kernel/config/nvmet/subsystems/ /sys/kernel/config/nvmet/ports/
 
===NVMe/TCP Host===
In this context '''Host''' means '''client'''/'''initiator'''
 
See [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/configuring-nvme-over-fabrics-using-nvme-tcp_managing-storage-devices#configuring-an-nvme-tcp-host_configuring-nvme-over-fabrics-using-nvme-tcp Configuring an NVMe/TCP host]
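A minimal sketch of discovering and connecting over TCP (address and NQN are examples, matching the target sketch above):
 modprobe nvme-tcp
 nvme discover -t tcp -a 192.168.100.8 -s 4420
 nvme connect -t tcp -n nqn.2024-01.org.example:testsub -a 192.168.100.8 -s 4420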
 
==On Client==
===Modules===
'''nvme_rdma''' or '''nvme_tcp''' (depending on transport), '''nvme_core''', '''nvme_fabrics''', and more.
 
 
===Find shares on target===
 nvme discover -t rdma -a 192.168.100.8 -s 4420
 
 
===Connect target===
 nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.100.8 -s 4420
 
 
===Connect on boot===
Put the connection parameters in /etc/nvme/discovery.conf, then enable:
 systemctl enable nvmf-autoconnect.service
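A sketch of what /etc/nvme/discovery.conf might contain, one set of discovery options per line (the address is an example); nvmf-autoconnect runs nvme connect-all against it:
 --transport=tcp --traddr=192.168.100.8 --trsvcid=4420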
 
====Show remote connections====
 nvme list-subsys


=Monitoring nvme=
*[https://github.com/narbehaj/zabbix-nvme Monitoring nvme with zabbix]
=Terms and acronyms=
==flbas==
Formatted LBA Size
==nlbaf==
Number of LBA Formats
=FAQ=
==Errors on client/host==
===Failed to open /dev/nvme-fabrics: No such file or directory===
 modprobe nvme-tcp
or maybe '''nvme-rdma''', depending on your transport.
===Failed to write to /dev/nvme-fabrics: Connection refused===
dmesg will probably show
 nvme0: failed to connect socket: -111
Maybe you're using something like InfiniBand; try
 nvme discover -t rdma ...
===Failed to write to /dev/nvme-fabrics: Input/output error===
Maybe you're trying to connect to a nonexistent NQN.
===Failed to write to /dev/nvme-fabrics: Invalid argument===
Check dmesg:
====nvme nvme0: Invalid MNAN value 1024====
Try
 modprobe nvme_core multipath=N
(remember to rmmod first :)
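To make the option persistent across reboots, the usual modprobe.d drop-in works (the filename is arbitrary):
 echo "options nvme_core multipath=N" > /etc/modprobe.d/nvme-multipath.conf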
====nvme_fabrics: no handler found for transport drma.====
Check your /etc/nvme/discovery.conf, it should be '''rdma''' :)
==nvmet_tcp: malformed ip/port passed: :4420==
Maybe you forgot to set '''addr_traddr'''?
==Duplicate cntlid 1 with nvme0==
Make sure the next connection uses a new controller id, /dev/nvme0x vs /dev/nvme1x.
TODO ??
==IDs don't match for shared namespace 1==
??


[[Category:Storage]]
