You've made a lot of generalizations there, and most are incorrect.
Drive caches are definitely not pointless. As I explained, all it takes is a BSOD or kernel panic, and caching writes in system RAM can be a catastrophe. You say that all operating systems released in the past 10+ years do their own caching, but it's easily provable that Windows XP had no such caching enabled. (Or at least it didn't work properly.)
Here's an example (I don't like to generalize - I prefer specifics that other people can check): the game League of Legends loads tons of level data before the map/round starts. On Windows XP load times are quite long. On Windows 7 the first load is long; after that the data has been cached to RAM and subsequent loads are super fast. If you dump LoL onto a RAM disk, cached load times match RAM-disk load times, indicating the caching is working properly. On Windows XP load times stay slow. (Much to my annoyance, and prompting me to upgrade.)
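You can see the OS page cache at work yourself with a quick sketch like this (the file name and size are made up for illustration; note the first read may not be truly "cold" on a real system unless you drop the caches first, since writing the file also populates the cache):

```python
import os
import tempfile
import time

# Create a dummy "level data" file (hypothetical name, illustrative size).
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of dummy data

def timed_read(p):
    """Read the whole file and return (elapsed seconds, bytes read)."""
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

cold, size1 = timed_read(path)  # may still hit the disk
warm, size2 = timed_read(path)  # almost certainly served from RAM
print(f"first read: {cold:.3f}s, second read: {warm:.3f}s")
os.remove(path)
```

On Windows 7 or any modern OS the second read should be dramatically faster; on a system where the page cache isn't working (the XP behaviour I described), both reads stay slow.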
"There is no point in the drive caching things that the OS already has cached ( and thus, won't ask for again )."
The extra cache will not be wasted. It can be used for writes, to eliminate stutters and jolts.
"Games generally read whatever data they will need when loading a level or similar; they don't write a bunch of data to disk and then try to read other data that they need immediately and thus, can not continue to play smothly. "
Another generalization (easy to prove false) - you're also lumping every game genre and engine type together, which is another bad move. Some do streaming, some have levels and loading screens, some (like MMOs) let you load zones as fast as possible to get somewhere quicker. Not all benefit from more cache, but some do. Also, as mentioned above, Team Fortress 2 has introduced stutters numerous times, then later fixed them with patches. The drives with the most cache fared the best when the developers made these screwups. (And believe me, devs everywhere are making such screwups)
The small/indie game Sanctum (very popular - I think it sold over a million copies?) had stutters from exactly the same cause - writing something, locking up the I/O, then wanting to read something immediately after.
"Game designers figured out long ago how to do seamless zone transitions by reading new map data when you get close to a seamless border so it is availible when you cross the border and the data is actually needed. "
Yes, some have. But most definitely have not. Probably 80% of developers are clueless about that sort of thing. Luckily most just build their games off engines like UE3, which has most of it figured out. But those that build their own, or even mangle it in some way (Sanctum - built on UE3) can still introduce jolts. Unfortunately there's a lot of them.
How difficult is it to add more cache? Not very. It's not very expensive either. It's more a question of how "dangerous" it is, and whether it'll make desktop drives popular in servers. (Undercutting another market)
I'd also like to bring up MMOs - most people say that MMOs benefit immensely from SSDs due to all the streaming they do (levels, models, textures, etc.). HDDs have trouble keeping up, and you obviously don't have enough RAM for your operating system to cache a 20-40GB MMO. HDDs with extra cache won't fix that, but at least they'd absorb the jolts caused by writes, which would be an improvement.
"Operating systems also try to flush writes to the disk slowly in the background so that reads remain responsive."
Correct. But that may actually hurt performance.
Games generally issue read requests serially (one after the other). If a write requires a seek, it's going to take 5-20ms to complete, and knock the head out of position. Then the drive has to seek back. Doing a read, then a write, then a read, then a write (issued by different programs to the same drive) actually harms performance far more than you'd think.
Your operating system may see some reads coming from a game and issue them to the drive, then also issue some writes, since it's been a few milliseconds and it wants those writes to happen soon. More reads could come in at any time, but it won't wait to see. Extra cache would let the drive put those writes on the backburner while important reads are dealt with. Currently, once you run out of drive cache (for example, when unzipping a file) the drive has to deal with the writes immediately. This can be quite detrimental to the performance of other software, and is one of the reasons people consider HDDs so slow.
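Here's a toy model of that seek penalty (the numbers are illustrative assumptions, not measurements of any real drive): every switch between reading and writing costs a seek, so deferring the writes until the reads are done pays far fewer seeks than interleaving them.

```python
# Illustrative assumptions, not real drive specs:
SEEK_MS = 12.0      # assumed average seek time
TRANSFER_MS = 2.0   # assumed time to transfer one request's data

def total_time(ops):
    """ops: a sequence of 'R'/'W'. A seek is paid whenever the op type changes."""
    t, prev = 0.0, None
    for op in ops:
        if op != prev:
            t += SEEK_MS
        t += TRANSFER_MS
        prev = op
    return t

reads, writes = ["R"] * 8, ["W"] * 8
# Reads and writes interleaved (the OS trickling writes in between reads):
interleaved = [op for pair in zip(reads, writes) for op in pair]
# Writes held in a big cache and flushed after the reads finish:
deferred = reads + writes

print(f"interleaved: {total_time(interleaved):.0f} ms")
print(f"deferred   : {total_time(deferred):.0f} ms")
```

The interleaved pattern pays a seek on every single operation; the deferred pattern pays two seeks total. The exact numbers don't matter - the ratio is the point.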
After experimenting with huge write caches with the FancyCache software, I can tell you that hard drives feel way way way quicker when writes can be put on the backburner near indefinitely. (In practice the writes drain to disk as soon as possible - but it would keep stockpiling them if it needed to.)
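The behaviour I'm describing is basically a write-back cache. Here's a minimal sketch (class and method names are my own invention, not FancyCache's API): writes are acknowledged instantly and queued, then drained to the "disk" later, when the drive is idle.

```python
from collections import deque

class WriteBackCache:
    """Toy write-back cache: acknowledge writes now, hit the disk later."""

    def __init__(self):
        self.pending = deque()  # buffered writes, not yet on disk
        self.disk = {}          # stand-in for the actual drive platters

    def write(self, key, data):
        # Returns immediately - no seek is paid while reads are in flight.
        self.pending.append((key, data))

    def drain_one(self):
        """Flush one queued write; call this when the drive is idle."""
        if self.pending:
            key, data = self.pending.popleft()
            self.disk[key] = data

cache = WriteBackCache()
for i in range(3):
    cache.write(f"block{i}", b"unzipped data")  # all "complete" instantly
print(len(cache.pending), "writes still buffered")
while cache.pending:
    cache.drain_one()  # the clicking you hear after the game has loaded
print(len(cache.disk), "blocks now on disk")
```

The danger, of course, is exactly what I said above: if the machine dies while `pending` is full, those writes are gone - which is why I'd rather the buffering live in the drive's own cache than in system RAM.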
Here's an example you can probably wrap your head around. If I'm launching the game Borderlands, it takes about 25 seconds to start. If I'm launching it while unzipping something in the background, it takes about 3-4 minutes to start. (The effect of having to seek around to write, then seek back to read, etc.) If however I have a large FancyCache write cache, then it takes about 25 seconds to start (again, while unzipping something in the background), and I can hear my drive clicking away for another ~20 seconds afterwards, presumably writing out all the unzipped data. That's the effect of more write cache. That's what I want, but without the danger of a BSOD or kernel panic nuking huge amounts of data (and my filesystem) - more drive cache improves performance safely. It's a good thing. I want it.