Mapping table caching is a promising technique for reducing the RAM footprint of the FTL mapping table in modern SSDs. The mapping cache achieves a high hit ratio under the disk workloads of many production systems because those workloads exhibit spatial and temporal locality. However, the mapping cache suffers a severe miss penalty and degrades SSD performance under random write patterns, which are common in benchmarks and database applications. Our main result is that optimizing the mapping cache for random-write workloads differs fundamentally from optimizing it for non-random workloads. We propose partitioning all flash blocks into two groups, one for user data and one for mapping information. By strategically shifting free flash blocks between the two groups, we balance the garbage collection overhead across them. We conducted a series of experiments using disk workloads from industry-standard SSD benchmarks; the results show that our approach improves write performance by up to 30% over a conventional map-caching method.
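The block-shifting idea in the abstract can be sketched as a simple balancing policy: move a free block from the group under less garbage-collection pressure to the group under more. The sketch below is purely illustrative, not the paper's implementation; the group names, the `gc_cost` model, and all parameters are hypothetical assumptions.

```python
# Illustrative sketch (assumed, not the paper's algorithm): balance free
# flash blocks between a user-data group and a mapping-data group.

def gc_cost(valid_pages: int, pages_per_block: int, free_blocks: int) -> float:
    """Rough GC cost model (assumption): copying valid pages becomes more
    expensive as a group's pool of free blocks shrinks."""
    utilization = valid_pages / pages_per_block
    return utilization / max(free_blocks, 1)

def shift_free_block(groups: dict) -> str:
    """Move one free block from the cheaper group to the costlier one,
    nudging the two groups toward equal marginal GC overhead."""
    costs = {name: gc_cost(g["valid_pages"], g["pages_per_block"], g["free_blocks"])
             for name, g in groups.items()}
    donor = min(costs, key=costs.get)
    receiver = max(costs, key=costs.get)
    if donor != receiver and groups[donor]["free_blocks"] > 1:
        groups[donor]["free_blocks"] -= 1
        groups[receiver]["free_blocks"] += 1
    return receiver

# Hypothetical state: the user-data group is low on free blocks and hot,
# so it should receive a free block from the mapping group.
groups = {
    "user_data": {"valid_pages": 220, "pages_per_block": 256, "free_blocks": 4},
    "mapping":   {"valid_pages": 64,  "pages_per_block": 256, "free_blocks": 10},
}
shift_free_block(groups)
```

In this toy state the policy donates one mapping-group block to the user-data group; repeating the step as workloads change keeps the two groups' GC overheads in rough balance.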