Take a moment to think about what you are trying to achieve. You are dumping hundreds of thousands of images at a server. I have no idea how big your files are, but each one of these files needs a place on the disk - obvious, you think. It was driving me crazy too for a few hours - 14 hours to transfer 35 GB of data (not all images), with my disk volume about 90% full. I could download that amount faster from the Internet!
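To put a number on how bad that is, here is a quick back-of-the-envelope calculation using the figures from my own transfer (35 GB in 14 hours):

```python
# Effective throughput of the transfer described above:
# 35 GB moved in 14 hours (figures from the post).
gigabytes = 35
hours = 14

mib_per_s = gigabytes * 1024 / (hours * 3600)  # MiB per second
print(f"Effective rate: {mib_per_s:.2f} MiB/s")  # roughly 0.71 MiB/s
```

Well under 1 MiB/s - even a modest home broadband download beats that, which is why something other than raw disk speed has to be the bottleneck.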
There is likely a demon/daemon (depending on how you look at it - bad/good) working away trying to categorise your newly submitted images. On top of this, there is logging going on, reporting every issue. These facilities, however, want to put data about your images/activity somewhere else on the disk. Unless you have separate disks in use - more than one disk [group], not necessarily two or more disks RAIDed - you will suffer from an indexing issue: disk head hogging, to give it a name, meaning your heads are constantly busy shuttling data around. Without a decent-sized cache on your drives there will be a lot of head shuffling, leading to very, very slow performance. Deletes in one area will cause the indexer to review the images and refresh its database … somewhere else on your disk.

RAID is all very good, but write performance can take a knock unless you build your NAS to your needs. No one says you should separate the operating system from your data, and doing so in a NAS is not easy, because you are usually forced to use a set of disks with the same capacity so that no space is wasted on any disk. It would be very useful if NASes were built with a separate disk purely for the OS, its logging, etc. These days that is more feasible with small NVMe SSD cards, but …
That said, check your indexing first. If you have Twonky running and it is not set up to your needs, it will fight you all the time, because its default setup reactively refreshes/updates the database. Check its settings via http://your.ext2.nas:9000 (not https). You can look around and turn off what you think you do not need. For me, turning off the Rescan Interval (setting it to 0) was enough to get back to work. I could set the rescan interval to an hour or so, but I do not want it clogging up my activities; it would be more useful to be able to cron Twonky so that it only runs at night, for instance. Also make sure that you turn off 'Media' on the share in the control panel if you want to limit what the indexer does. Note that the indexer stays active for a while after you make these changes. You can then set the indexer on its way again just before you finish for the day with a manual rescan. Such a pain not having cron or a sensible timer for this, but there you are. The same goes for turning off logging in the main control panel if absolutely necessary, but I would avoid this, as indexing is normally the culprit.
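If your NAS (or another always-on box that can reach it) gives you a crontab, you can approximate the night-time-only rescan yourself. This is a sketch, not gospel: the /rpc/rescan endpoint is reported to work on many Twonky builds but I have not verified it against every firmware, and the hostname is the same placeholder used above - substitute your own.

```shell
# crontab entry (run 'crontab -e' on a machine that can reach the NAS).
# 'your.ext2.nas' is a placeholder hostname; adjust to your setup.
# The /rpc/rescan endpoint is an assumption - it is widely reported to
# trigger a full Twonky rescan, but check your Twonky version first.

# Trigger a full media rescan at 02:00 every night:
0 2 * * * curl -s "http://your.ext2.nas:9000/rpc/rescan" >/dev/null 2>&1
```

With the Rescan Interval set to 0 in the Twonky settings, this gives you the daytime quiet you want while still keeping the database reasonably fresh.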
Hope this helps.