PR4100 - High RAM consumption with restsdk-server at 0% CPU?

Hi! I have a PR4100 that has finished indexing and transcoding/generating thumbnails: ffmpeg is no longer in the process list, and restsdk-server is still listed but sits at 0% CPU, i.e. in a “sleep” state. Even so, more than 70% of the system’s RAM (currently the default 4 GB) is in use!
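For anyone who wants to check the same thing over SSH, something along these lines should show which processes are actually holding the memory. This is just a rough sketch; the exact ps flags depend on which ps build the firmware ships, so adjust as needed:

```
# overall memory picture (used vs. free/cached)
free -m

# top memory consumers by resident set size; -eo/--sort assume a full
# procps-style ps, a BusyBox ps may only support a subset of these flags
ps -eo pid,comm,%cpu,%mem,rss --sort=-rss | head -n 10
```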


Why is this happening? Does anybody have any solutions?

1 Like

On my PR4100 I upgraded my RAM from 4 GB to 8 GB.
Even so, restsdk-server still hit 80%, with indexing already completed. I had to pkill it in the terminal, after which restsdk-server restarted itself and my RAM is now hovering around 28%.
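In case it helps anyone else, this is roughly what that looks like over SSH (just a sketch; as noted, the service respawns on its own, so the freed RAM only lasts until it restarts):

```
pkill restsdk-server      # kill the service
sleep 10
pidof restsdk-server      # prints a new PID once it has respawned by itself
```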

1 Like

This is mind-boggling. A process that is doing nothing yet keeps eating away at our RAM. Mental.

1 Like

Exact same issue for me with restsdk-server. I killed it, and it climbed right back up to 50% RAM with intermittent 100% CPU. I decided to reboot, and now it’s indexing all over again! WTF is that about? Why re-index after every reboot? I thought this would be a good buy when I got it last month, but all I’ve had so far is hassle!

How does this get through dev, never mind release?

1 Like

This has also become a problem for me since upgrading to 5.04.114.

Since the upgrade my CPU and RAM have been close to maxed out, with restsdk-server and ffmpeg consuming practically all of the resources on the NAS.

Also, since the update I now get temperature warnings: the hard drives spin constantly, even when the system should be idle, and have been running flat out for days on end.

This was never an issue with the previous release version.

Is there a way to resolve this?

@Aaron5 Take a look at my thread for a workaround to the ffmpeg transcoding issue (which essentially goes through every one of your video files and creates a transcoded version so it can be played within the Web and Mobile Cloud apps, without giving the user any option to disable it).
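I won’t duplicate the whole thread here, but the general idea of that kind of workaround is to stop the bundled ffmpeg from being able to run at all. Purely as an illustrative sketch (the binary path below is an assumption and may well differ on your firmware, and a later update can put the original back; see the linked thread for the exact steps):

```
# Hypothetical sketch only: the path to the bundled ffmpeg is an assumption.
FFMPEG=/usr/local/modules/usr/bin/ffmpeg   # assumed location

mv "$FFMPEG" "$FFMPEG.disabled"            # move the real binary aside
printf '#!/bin/sh\nexit 0\n' > "$FFMPEG"   # replace it with a no-op stub
chmod +x "$FFMPEG"                         # transcode calls now exit instantly
```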

Yeah, that sounds exactly like what it’s doing: going through every single file on the NAS. I’ve also noticed I’ve lost a few TB of HDD space over the past few days.

I’m regretting upgrading to MyCloud 5 now.

Is there any way to remove these transcoded versions?

Same issue as @HoldMyTech: restsdk-server and mono are constantly using well over 50% of RAM, even when idle, whereas prior to OS 5 my system would use almost no CPU or RAM at idle.

I killed both restsdk-server and mono processes, only to have them start up again a few seconds later.

I do not use WD Cloud Access and have it disabled, so there should be no indexing or transcoding, yet CPU and RAM usage are still high.

I don’t need any of this bloat running 24x7. This issue needs to be resolved, and WD needs to be far more active on this thread in particular to address everyone’s concerns about how incredibly resource-intensive this update is. I want my system back to how it was before: when it’s idle, it’s idle, not running all this bloatware “c r a p” 24x7.

1 Like

Would love to know if there are still any remnants of the transcoded media, so let us know if you find 'em. I did notice that my capacity stopped fluctuating and increased by about half a TB once I put my ffmpeg workaround into place.
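If anyone wants to go hunting, a rough starting point is to see where the space went and check restsdk’s own data directory. The paths below are assumptions (WD firmware usually mounts the data volume under /mnt/HD/HD_a2, but the restsdk folder name may differ between versions), so treat this as a sketch:

```
# which top-level folders on the data volume are eating the space
# (sort -h needs a full coreutils sort; drop it on a BusyBox-only box)
du -sh /mnt/HD/HD_a2/* 2>/dev/null | sort -h | tail -n 10

# restsdk's own data/thumbnail store is the likely suspect; names assumed
du -sh /mnt/HD/HD_a2/restsdk* /mnt/HD/HD_a2/.wdmc 2>/dev/null
```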

Your workaround certainly fixed the wild CPU and HDD read/write loop it was stuck in for about a week; that’s gone now.

If I find the transcoded content I’ll be sure to post here, but what a seriously shithouse feature they’ve included in the latest release.

1 Like

To add an update here: restsdk-server keeps spiking to high CPU usage and is also consuming a lot of RAM, as stated in previous posts, even with ffmpeg disabled.

I suspect this service is also doing a crazy amount of disk I/O, as the drives have been noisy as hell since the upgrade and the system itself is ridiculously slow to respond. The WD NAS browser control UI takes 10-20 seconds to load anything of use between clicks, likely as a result of an overloaded CPU and disk I/O.
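If you have SSH access you can get a rough idea of how much I/O restsdk-server is actually doing straight from /proc, which is plain Linux and doesn’t depend on what tools the firmware ships (needs root):

```
PID=$(pidof restsdk-server)

# cumulative bytes this process has read from / written to storage;
# sample it twice a minute apart and diff read_bytes/write_bytes for a rate
cat /proc/$PID/io
```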

Seriously considering demoting this NAS box in favour of an alternative upgrade, given it no longer works as it originally did.

@Aaron5 Just curious, do you have any apps installed and running?

Just the following, both of which are more or less benign unless in operation:

(screenshot of the installed apps, 2020-11-05)

@Aaron5 Just as a test, would you try removing the Internal Backups app and see how that affects the system? Even for an hour or so would be interesting.

While restsdk-server is still chugging RAM without actually doing much (CPU stable at 0%), its hunger seems to have diminished after manually upgrading to the latest firmware version: My Cloud OS 5 Firmware Release Note v5.05.111

Specifically, RAM consumption by the process went from over 1.6 GB to ~600 MB :slight_smile:
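For anyone who wants to watch the same number on their own box, I’m just looking at the process’s resident set size; a quick way to sample it once a minute (assumes a ps that supports -o and -p):

```
while true; do
    # RSS of restsdk-server in kB (divide by 1024 for MB)
    ps -o rss= -p "$(pidof restsdk-server)"
    sleep 60
done
```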

Similar result to @ggirelli after manually updating to v5.05.111. CPU mostly hovers around 4-10% at idle; however, even when idle it continues to kick up to about 40% every minute or so, seemingly for no reason, then drops back down.

RAM usage has stayed pretty constant at 30-50% at idle, all with WD Cloud disabled. An idle machine should not require this much RAM.

This update has definitely brought a slight improvement to the CPU and RAM intensity of OS 5. However, it’s still ripping through system resources for absolutely no value or reason, when OS 3 would use almost nothing at idle.

WD still has a lot more work to do to get OS 5 to the point where it’s as efficient as possible and keeps system resource usage to a minimum.

1 Like

Absolutely! I have to say that I am quite grateful that the devs seem to be on top of things, delivering on their promises (e.g., Transmission), rolling out quick updates, and listening to the community. I hope they keep up the good work :smiley:

1 Like

Disabling the internal backups appears to have settled the drive.

From what I can tell, it looks like the nightly backups (which are mostly symlinks) are being re-indexed every day by whatever ballsed-up analyser has been built into WD OS 5.

It appears drive indexing takes about 2-3 days on my unit, so you can imagine each nightly backup keeps adding another 2-3 days of indexing every day.
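A crude way to see how much extra work each nightly backup hands to the indexer is simply to count the entries it creates; the backup path below is a placeholder for wherever your Internal Backups job actually writes to:

```
BACKUP_DIR=/mnt/HD/HD_a2/backups     # placeholder: your backup target share

find "$BACKUP_DIR" -type l | wc -l   # symlinked (unchanged) files
find "$BACKUP_DIR" | wc -l           # total entries the indexer has to walk
```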

Wild.

1 Like

@Aaron5 what a bummer - but at least we nailed the suspect.