SN550 - Why does it use 512B sectors instead of 4096B?

My WD Blue SN550 1TB uses 512B sectors out of the box. I often read that modern drives use 4096B sectors, and SSDs in particular need them because that matches their internal size. Using 512B sectors would also double the write cycles and so shorten the lifetime of the drive.

So, my smartctl output says:

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         2
 1 -    4096       0         1

Can someone explain what this means, and is there a way to achieve the best settings for this drive? I’m using ext4, NTFS and FAT32 filesystems, and my working system is Debian 9.

Is there something like this also from WD?


OK, it seems WD isn’t really interested in giving detailed support for their products.
Also, I don’t understand why the SSD is delivered with the slower, lifetime-shortening 512B setting.

There is a way to change the firmware setting for the sector size (the LBA format) under Linux.

# nvme id-ns -H /dev/nvmeXnY


LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format 1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better

so there is support for two different sizes :-)

# nvme format --lbaf=NUMBER /dev/nvmeXnY

will set the desired size, where NUMBER is the value of the LBA Format code (see above).

That worked fine for me!

Of course the setting is OS-independent for further use of the SSD; it is stored in the device itself and persists until you change it again.

!!! You will lose the complete contents of the disk after changing the size !!!
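Putting the two commands together: a minimal, deliberately dry-run sketch of the whole procedure. The device name and format number are placeholders you must adapt, and the script only prints the commands, so nothing is wiped until you run them yourself:

```shell
#!/bin/sh
# DRY RUN: this script only prints the commands, it does not execute them.
# Placeholders: adjust DEV and LBAF for your own drive.
DEV=/dev/nvme0n1   # your NVMe namespace
LBAF=1             # format code for 4096B, as listed by "nvme id-ns -H"

list_cmd="nvme id-ns -H $DEV"           # step 1: list supported LBA formats
fmt_cmd="nvme format --lbaf=$LBAF $DEV" # step 2: re-initialize (ALL DATA LOST)

echo "run as root: $list_cmd"
echo "run as root: $fmt_cmd"
```

Only once you have double-checked the device name should you run the printed commands as root.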

I reinitialized to 4096B sector size (LBA Format 1) and experienced a performance drop. I was under the impression that switching to advanced format (the 4096 Byte sector size interface) would be beneficial to performance. I run all the latest (Linux) software on an Intel Q65 chipset (PCIe v2 with 4 PCIe lanes dedicated to the NVMe drive).

Performance @ 512B sector interface (LBA Format 0):

root@xubuntu:~# hdparm -t /dev/nvme0n1p1
 Timing buffered disk reads: 4658 MB in  3.00 seconds = 1552.07 MB/sec

Performance @ 4096B sector interface (LBA Format 1):

# hdparm -t /dev/nvme0n1p1
 Timing buffered disk reads: 2844 MB in  3.00 seconds = 947.51 MB/sec

Seq read performance dropped from 1552.07 MB/sec to 947.51 MB/sec, whereas one would expect a performance gain.
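For the record, the size of the drop in percent (a quick shell calculation on the two hdparm numbers above, rounded to whole MB/sec):

```shell
#!/bin/sh
# Throughput from the two hdparm runs above, rounded to whole MB/sec.
before=1552   # 512B LBA format
after=947     # 4096B LBA format

# Percentage drop, rounded to the nearest percent:
# (before - after) * 100 / before, with half-adding for rounding.
drop=$(( ((before - after) * 100 + before / 2) / before ))
echo "sequential read dropped by ${drop}%"
```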

What is going on here?

I verified the alignment of the logical filesystem clusters to the 4096B physical sector size; it should be okay:

$ sudo parted /dev/nvme0n1 
GNU Parted 3.3
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: WDC WDS500G2B0C-00PXH0 (nvme)
Disk /dev/nvme0n1: 500GB
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End    Size   File system  Name         Flags
 1      300GB  500GB  200GB  ext4         ubuntu-root

(parted) align-check opt 1                                                
1 aligned
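The same check can be done by hand from the partition’s raw start offset. A sketch, where the start value is a hypothetical stand-in for what `cat /sys/block/nvme0n1/nvme0n1p1/start` would print (the kernel always reports it in 512B units):

```shell
#!/bin/sh
# Hypothetical partition start in 512B units, standing in for the output of
#   cat /sys/block/nvme0n1/nvme0n1p1/start
start_sectors=585937504
start_bytes=$(( start_sectors * 512 ))

# A partition is 4096B-aligned when its byte offset is a multiple of 4096.
if [ $(( start_bytes % 4096 )) -eq 0 ]; then
    result="aligned"
else
    result="NOT aligned"
fi
echo "partition 1 start: $result to 4096B"
```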

What is the internal physical sector size, that the SN550 NVMe drive uses?

WD, please help out here. Why do I get a drop in performance when re-initializing to the 4096B sector size?

How can I get the expected performance gain from going to the 4096B sector size (advanced format)?

I didn’t see such a big difference in performance after setting the 4096B sector size. If I remember correctly it was nearly the same, and it now stays at

root@localhost:/# hdparm -t /dev/nvme0n1p6
Timing buffered disk reads: 3786 MB in 3.00 seconds = 1261.62 MB/sec

but I have a relatively slow machine running only on PCIe v2 …

I also didn’t expect a performance boost, but rather a longer lifetime for the SSD.

Maybe the internal controller firmware manages everything on its own, handling the actual low-level read/write cycles itself, and doesn’t care much about the settings of the host interface.

Only WD could answer our questions, but they don’t … so if you are interested in a well-supported product, it is best to choose a well-known and serious one …

I don’t think it makes much difference which LBA size an SSD presents to the OS. Internally its block size is much greater than either, so the OS will never be aligned to a block boundary. But I’m prepared for someone who is more knowledgeable to correct me.

This makes the observed performance drop of nearly 40% even more serious. If it cannot be explained by misalignment between the physical sector size and the logical cluster size, then the 4096B format/interface firmware of the SN550 would be responsible…

My question remains the same:
How can I get the expected performance gain from going to the 4096B sector size (advanced format)?

Input from a WD engineer would be very much appreciated here.


I’ve bought two SN750s, one 250GB and the other 500GB. When I checked whether they support 4096-byte sectors, I was happy to see that they do (even though they are 512B-formatted by default). But when I tried to format the namespace, I got an error, the same one each time:
# nvme format --lbaf=1 -s 1 -f /dev/nvme0n1
NVMe status: INVALID_FORMAT: The LBA Format specified is not supported. This may be due to various conditions(0x410a)
It happens with both the ssd, with different OS and computers.

Any clue?

This question is a side track to this discussion and deserves its own thread.
But to answer your question:

Every NVMe drive brand/model has its own LBA formatting options. You first have to look up which ones it offers and then select the appropriate one. Here’s how you can do that in Linux:

Find the NVMe drive/namespace name with:

$ lsblk -l

Look up the formatting schemes it offers (assuming the drive/namespace name found above is /dev/nvme0n1):

$ sudo nvme id-ns /dev/nvme0n1 --human-readable
LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format 1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better

For this drive we can see that scheme 1 is the 4096B advanced format scheme, which should deliver “0x1 Better” performance according to the drive itself (the lower the number, the better the performance). We can also see that formatting scheme 0 (Data Size: 512 bytes) is currently in use.
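As a side note, the in-use size can also be pulled out of that listing programmatically. A sketch, using the listing above as a canned sample; on a live system you would pipe `nvme id-ns -H /dev/nvme0n1` straight into the sed instead:

```shell
#!/bin/sh
# Canned sample of the "nvme id-ns -H" listing shown above.
sample='LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format 1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better'

# Pick the line marked "(in use)" and extract its Data Size in bytes.
in_use=$(printf '%s\n' "$sample" | sed -n 's/.*Data Size: \([0-9]*\).*(in use).*/\1/p')
echo "current logical sector size: ${in_use} bytes"
```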

Now use that number (1) as input to the format command in order to re-initialize the drive to the new sector size (all data will be lost!!!).
Re-initialization to the 4096B sector size/interface (advanced format):

$ sudo nvme format /dev/nvme0n1 --lbaf=1 --reset

Explanation of options:
(--reset: reset the controller after a successful format)
(--lbaf=1: LBA format 1, i.e. the 4096B sector size / advanced format)

Now you must partition and format the drive in order to use it, or just run the installer of your preferred OS instead.
Partition and format with a GUI under Linux:

$ sudo gparted /dev/nvme0n1


@marty1: Thank you for this patient and detailed explanation

@toniob: Please tell us about the resulting transfer speeds before and after the sector size change

@marty1 Thanks for your answer. I’ve opened my own thread. Sorry for the noise here.
To answer: of course I checked that the LBA value was the right one. But the command fails with the same error, with both the 0 and 1 values.

# nvme id-ns -H /dev/nvme0n1
LBA Format 0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format 1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better

@hucky I will, but right now formatting is impossible for me because of the error.

Here is some background information … from Seagate!

Transition to Advanced Format 4K Sector Hard Drives - Benefits and pitfalls in moving from 512 to 4,096 bytes

I did rigorous performance testing with fio on the SN550 that was low-level formatted to 4096 sector size.

I found optimal throughput not at 4KB, but at 16 KB block size.
fio config:

fio --loops=5 --size=1000m --filename=$TEST_FILE --stonewall --ioengine=libaio --direct=1 \
    --name=Seqread16k --bs=16k --iodepth=32 --rw=read

So it seems that the drive works internally with 16 KB physical blocks.
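The sweep behind that conclusion can be scripted. A sketch that only prints the fio command line for each block size (TEST_FILE is a placeholder path; remove the surrounding dry-run logic, or pipe the output to sh, to actually run the benchmarks):

```shell
#!/bin/sh
# Dry-run block-size sweep: prints one fio command per block size.
TEST_FILE=/mnt/ssd/fio.tmp   # placeholder: a file on the drive under test

cmds=$(for bs in 4k 8k 16k 32k 64k; do
    echo "fio --loops=5 --size=1000m --filename=$TEST_FILE --stonewall" \
         "--ioengine=libaio --direct=1" \
         "--name=Seqread$bs --bs=$bs --iodepth=32 --rw=read"
done)
printf '%s\n' "$cmds"
```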

Interesting article from Seagate. So 4K sectors are primarily a hardware solution for magnetic HDDs.
The first SSDs, manufactured in SLC design without “intelligent” caching by their controllers, could also profit from fewer R/W cycles and better alignment. But nowadays, with MLC designs, I think the internal controller of the SSD optimizes everything for speed and durability, so there won’t be a difference whichever sector size is used.
I would be very glad if there were a statement like this Seagate article describing these things for SSDs.

The only thing I could think of was a physical-to-logical block misalignment, where one logical block would be spread over two physical sectors; a logical block read/write would then result in two physical block read/writes crossing physical sector boundaries.
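That effect is easy to quantify: the number of physical sectors a request touches is ceil((offset mod P + length) / P). A sketch with a hypothetical 4096B logical block that starts 512B into a 4096B physical sector:

```shell
#!/bin/sh
# Read amplification from misalignment (all values hypothetical).
PHYS=4096    # assumed physical sector size
LEN=4096     # one 4096B logical block
OFFSET=512   # block starts 512B into a physical sector (misaligned)

# Physical sectors touched = ceil((OFFSET mod PHYS + LEN) / PHYS)
within=$(( OFFSET % PHYS ))
touched=$(( (within + LEN + PHYS - 1) / PHYS ))
echo "physical sectors touched: $touched"
```

With OFFSET=0 the same formula gives 1 sector, i.e. no amplification when aligned.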

@hucky so I managed to format my SSD. I used the benchmark tool in GNOME Disks. The results are simple: I got exactly the same results with the 512 and 4096 sector sizes. So if you want to do it for a performance boost, you can forget that.

I experienced a performance drop by going from 512 to 4096.
Have you also tried measuring performance with:

root@xubuntu:~# hdparm -t /dev/nvme0n1p1

Btw, what is your PCIe version and how many PCIe lanes are dedicated to the NVMe drive?

The problem is not the LBS (logical block size); the problem is the PBS (physical block size) and the features the manufacturer allows.

Basically there are two kinds of formats: one at the controller level (e.g. nvme0), which formats the PBS, and one at the namespace level (e.g. nvme0n1), which corresponds to the LBS.

Some manufacturers allow you to instruct the controller (nvme0) to use another format (e.g. a 4096B PBS) and some don’t.

The same goes for the LBS: some allow switching between 512 and 4096 (nvme0n1) and some don’t (like Sa*****).

Basically, Windows needs a PBS, or at least an LBS, of 512; all other OSes do very well with 4K, and many applications under Windows don’t do well with 4K in either PBS or LBS. So manufacturers that want to stay compatible with Windows use 512, and others at least give you the possibility to switch to 4K.

The bad thing is that, even though all the specified performance figures are given for 4K, no manufacturer states clearly in the data sheet which PBS the controller uses.

Worse, some manufacturers first deliver an NVMe drive with a 4K PBS and later on with 512.

Transparency and support are totally different from what we experience, and that goes for every manufacturer.

The PBS and LBS must be specified, and the controller must be open to modifying the PBS and LBS; that’s all.

At least, if the PBS is 4K you get better performance with a 4K LBS, and if the PBS is 512 you get better performance with a 512 LBS, because the blocks are then not misaligned.


Not better performance; I expected a longer lifetime because of more efficient handling of the memory.
But as said earlier, I think the internal logic of the controller now handles this in the best way on its own, no matter whether the sector size is 512 or 4096.

Do you have new insights? Is it beneficial to use the 4096B sector size?

I’ve been running for a year now with the 4K sector size and am still sailing smoothly.