Linux support for WD Black NVMe 2018

This new 500GB model (WDS500G2X0C) doesn’t work; it freezes any live image, such as Ubuntu 18.04 or Fedora 28, before the install even starts.

WD’s support page (https://support.wdc.com/product.aspx?ID=1804&lang=en) says:

“Similarly, new versions of Ubuntu and other flavors of Linux support PCIe NVMe SSDs using the newer storage drivers.”

What’s wrong?

I’ve enabled UEFI boot, and my BIOS is running the latest firmware. I can boot a Lubuntu 18.04 live image; the SSD shows up under /dev/nvme0, but the nvme tool freezes the system, and sudo blkid doesn’t show the NVMe disk ID. The same goes for GParted.
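
For anyone trying to reproduce, this is roughly what I’m running (a sketch; lspci and dmesg are safe and read-only, and nvme list here stands in for whatever nvme subcommand you use):

    # identify the controller and collect kernel messages without touching the disk
    lspci -nn | grep -i 'non-volatile'
    sudo dmesg | grep -i nvme

    # these are what misbehave for me: the nvme tool hangs the box,
    # and blkid comes back without the NVMe disk
    sudo nvme list
    sudo blkid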


I can confirm this issue on a brand-new system running Fedora 28. For more details, see this thread on the Fedora forum and this bug report in Red Hat Bugzilla.

@Linuxian: Do I understand correctly that the Lubuntu 18.04 live image boots fine for you? Did you find a workaround for Fedora?

Unfortunately, I have not found a way to confirm that I’m running the latest BIOS version; I have an open request with my laptop’s manufacturer for more information about BIOS updates.

edit: I tested Lubuntu myself. Unlike Fedora, Lubuntu boots into the live operating system; however, when I try to install Lubuntu to the disk, the system freezes. The system also hangs under Lubuntu when I start GParted, while it scans for disks.

See also: Ask Ubuntu.

Nope, installing Lubuntu is unsuccessful as well. IMHO, any program that accesses the NVMe drive freezes the system. BIOS and firmware were up to date, so it’s probably the new WD controller.

I’m trying to swap it for a Samsung 970 EVO; at least that one works under Linux.

Thanks for the reply. Today I updated my BIOS/EC to the latest available versions, but the WD Black still does not work. Interestingly, I had exactly the same Samsung SSD in mind that you are suggesting as a replacement for the WD Black.

I replaced the WD Black SSD with a:
500GB Samsung 970 EVO NVMe, PCIe 3.0 x4, M.2
This SSD is recognized correctly, and Fedora 28 is now running fine on it.

I can confirm this issue with the WDS500G2X0C (it came with a Sager NP8954).
The Xubuntu 18.04.1 live image boots up and recognizes the partitions on the NVMe drive, but any attempt to read the device causes the system to freeze completely. That includes starting gparted or fdisk.
I filed a ticket with WD support, but there is no word from them so far.

Update: I just got a response from WD tech support.
The level of incompetence and ignorance on WD’s side is astonishing. Here is their answer:
“We regret to inform you that we have not tested our drives with Linux drivers.”
According to Wikipedia, WD has over 65,000 employees.
I am speechless. Time to short WDC stock.


Hi everyone in this thread,

I bought a WDS250G2X0C last week, and I also tried to install Ubuntu 18.04.1 and 16.04.5 on this NVMe SSD.

At first I ran into the same situation as you: Ubuntu could not detect this SSD during installation.

Then I found out how to fix it.

  1. In the GRUB boot menu, press e to edit the startup parameters.
    Add nvme_core.default_ps_max_latency_us=5500 at the end of the line, after quiet splash.
    Press Ctrl-X to boot; the installer should then detect the disk in the partitioning step.
  2. After finishing the installation, hold Shift while powering on to enter GRUB again, add the same kernel parameter nvme_core.default_ps_max_latency_us=5500, and press Ctrl-X to boot.
  3. Once Ubuntu boots up successfully, edit /etc/default/grub, add the parameter nvme_core.default_ps_max_latency_us=5500 there as well, and run sudo update-grub so that every boot picks up the parameter automatically, with no more manual editing (see the sketch below).
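
For reference, a minimal sketch of step 3 (assuming a stock /etc/default/grub whose default command line is just quiet splash; keep whatever other options you already have):

    # /etc/default/grub: append the parameter to the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.default_ps_max_latency_us=5500"

    # regenerate the GRUB configuration
    sudo update-grub

    # after rebooting, confirm the parameter took effect
    cat /proc/cmdline
    cat /sys/module/nvme_core/parameters/default_ps_max_latency_us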

I hope this solution helps you install Ubuntu successfully. Have fun!


Please take a look at my reply: Linux support for WD Black NVMe 2018 - #9 by chrisyuan
I think it might help you fix this issue.

Thank you so much for posting this. I had just about completed an order for two of the 1TB NVMe drives through Amazon tonight when I saw someone saying they were having problems with Ubuntu 18.04. I had planned on one for Windows 10 and one for Ubuntu 18.04, but after reading that WD couldn’t care less about Linux, I’m out; I guess I’ll buy the 970s.

I just ordered this drive to replace my existing Ubuntu drive. I appreciate the suggestion to disable the low-power mode with the latency parameter; I assume there aren’t any negative implications of this fix for desktop use? I’m also curious whether anyone has tried installing the 18.10 beta release to see whether the newer kernel has fixed the problem. Although I’d prefer to run the LTS, I’d probably just install 18.10 if a newer kernel resolves the problem. Thanks!

Many thanks to @chrisyuan for pulling together details from various Bugzillas around the web. I can confirm this works for the WDS512G1X0C as well.

The 5,500 µs value is based on Samsung controllers (PM951, PM961, 960 EVO, etc.), which report the entry/exit latency of P4 (the lowest Autonomous Power State Transition, or APST, state) at roughly 1,500/6,000 µs, meaning that 5,500 µs blocks P4 but still enables P3. On at least my WDS512G1X0C (slightly different from the newer 3D-NAND 2018 model this thread references), with firmware version B35500WD, the entry/exit latency for power state 3 is 51,000/10,000 µs (yes, those numbers seem awfully fishy to me, but they do fit with P4 at enlat:1,000,000 exlat:100,000). This comes from sudo nvme id-ctrl /dev/nvmeX, where X is the NVMe device in question (often 0). With 5,500 µs the drive is certainly limited to P0, P1, and P2 and works, but P3 might be achievable, based on many other people’s experience with Samsung controllers.
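
If you want to do that arithmetic for your own drive, something along these lines should work (a sketch; the awk parsing assumes the usual nvme-cli id-ctrl output format, so adjust if your version prints it differently):

    # print each power state with its enlat + exlat sum in microseconds
    sudo nvme id-ctrl /dev/nvme0 | awk '/^ps/ {
        for (i = 1; i <= NF; i++) {
            if ($i ~ /^enlat:/) en = substr($i, 7)
            if ($i ~ /^exlat:/) ex = substr($i, 7)
        }
        print $1, $2, "enlat+exlat =", en + ex, "us"
    }'

Any state whose sum is at or below the nvme_core.default_ps_max_latency_us value should then be allowed by APST.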

The real issue COULD be APST, PCIe Active State Power Management (ASPM), or an interaction between the two. Others have had success with firmware updates from their manufacturers; however, as we’re on Linux rather than Windows, there’s no way to tell whether a firmware update exists for the WD Black NVMe drives.

I am running Fedora (now 29) but saw the issue on at least Fedora 27 as well, starting before the 4.16 kernel and continuing across 4.16 through 4.19. The posted fix might raise temperatures a bit, since you lose the lower power states. I’m going to test whether P3 is viable, as described in my previous comment, and will post back.

My temperature has been 59 °C with the fix; the drive reports ~85 °C as its upper limit.
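
For anyone who wants to watch this themselves, the temperature is reported in the drive’s SMART log (assuming the drive is /dev/nvme0):

    # report the controller temperature from the NVMe SMART log
    sudo nvme smart-log /dev/nvme0 | grep -i temperature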

EDIT 1: I just tested with 52,000 µs (nvme_core.default_ps_max_latency_us=52000), and the machine froze overnight running 4.19.14-300.fc29.x86_64. The next step is a new motherboard BIOS, to see whether that makes any difference.

I’m having a similar problem with a WDS500G2X0C that I just bought for an (older) HP Z820 workstation. The box is old, so I knew I was taking a chance. (I updated the BIOS to the latest version.) There do seem to be some folks using other aftermarket M.2 SSDs with this model.

I bought an M.2-to-PCIe adapter: https://www.amazon.com/gp/product/B01N78XZCH/

Linux (Ubuntu 18.04) sees the NVMe device and disk. I can read from the disk (via dd), though “only” at 1.4 GB/s. I cannot write to the NVMe disk, either directly via dd (which locks up the system) or when attempting to add the disk to LVM (via pvcreate).
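
For context, these are the kinds of dd invocations I mean (a sketch; /dev/nvme0n1 is assumed, and the write test destroys data, so only run it against an empty disk):

    # read test: pull 1 GiB off the raw device and discard it
    sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=1024 status=progress

    # write test (DESTRUCTIVE): this is the one that locks up the system here
    sudo dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1024 status=progress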

So … something is not right.

I tried Chris’s GRUB parameter, but no luck.

I’m booting off a SATA SSD (another WDC) without trouble. I want to use the M.2 SSD for fast LVM volumes.

I’m surprised we still haven’t had a firmware update to address this. Anyway, has anyone tried installing Ubuntu 20.04 beta on these drives? Are they working any better with a newer kernel? If not, is the 18.04 fix still working?

I have a WD SN520 running Mint 19.3 stably for months now (with the kernel parameter fix; it needs that too). I just tried the Ubuntu 20.04 beta from a USB stick. Without the parameter set in GRUB, Ubuntu freezes while booting (I did not dig any deeper); with the parameter set, it boots up. So it seems the drive is still not supported out of the box by the newer kernel, but at least the fix still seems to work. I did not do further stability tests, though, so there is no guarantee it will run smoothly for long.

Additional note for everyone who owns the SN520 (512GB) as well: you can try nvme_core.default_ps_max_latency_us=14000 as the parameter, which should give you one deeper power state (P3, with enlat:5000 exlat:9000). I did some testing (on Mint 19.3 with mainline kernel 5.6.7) and it went smoothly, while the P4 state, which I tried with ...=490000 (enlat:5000 exlat:44000), resulted in a freeze after a few minutes.
If you test this, recheck the enlat/exlat values of your drive for yourself with sudo nvme id-ctrl /dev/nvmeX, where X is the NVMe device in question (often 0); the value for each Px state results from simply adding enlat and exlat.
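
To see which transitions the kernel actually programmed after booting with a given value, you can also dump the Autonomous Power State Transition feature (feature ID 0x0c; /dev/nvme0 is assumed):

    # show whether APST is enabled and which idle-time/target-state entries are set
    sudo nvme get-feature /dev/nvme0 -f 0x0c -H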

My 500 GB drive has, I believe, the March 2020 firmware, and it hasn’t shown the freeze (yet). I’m running a Debian derivative, antiX 19, with a 4.19 kernel.

Note that a freeze in low-power mode has also been reported with Samsung and Toshiba SSDs.

Maybe the new firmware (installed under Windows, then used under Linux) doesn’t have this problem?

I will repost if I see the freeze.

Stevesr0

I’m seeing the same with an SN850 on Linux 5.11:

    [268690.209099] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
    [268690.289109] nvme 0000:01:00.0: enabling device (0000 -> 0002)
    [268690.289234] nvme nvme0: Removing after probe failure status: -19

Is there any official word on exactly what latency is required? Is this not a firmware bug?

I really appreciate your answer; you made my day! Everything works with your tip!