Overprovisioning has a significant impact on the performance of all but the most recent SSD controller architectures, but only once the drive is mostly full.
Check out any of Anandtech's benchmarks from the past year or so. They now include graphs showing the consistency of I/O completion times; reserving more than the default ~7.5% makes the GC pauses a lot less severe and makes a huge improvement in worst-case performance. Under sustained load, having more spare area often makes the difference between always being stuck at worst-case performance and always being near best-case.
For example, under a sustained random write workload a full Samsung 850 Pro will complete around 7-8k IOPS, but with 25% spare area it will hover around 40k IOPS. That's a very enticing space/speed tradeoff, especially if you've already decided that SSDs are to be preferred over hard drives for your workload.
The default amount of overprovisioning in most drives is chosen to be roughly enough to allow for reasonable performance and lifespan (affected by write amplification), and in MLC drives usually corresponds exactly to the discrepancy between a binary/memory gigabyte and a decimal/hard drive gigabyte, which simplifies marketing. Drives intended for high lifespan often have odd sizes due to their higher default overprovisioning.
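The gigabyte discrepancy mentioned above works out to roughly the default figure. Here's a minimal sketch of the arithmetic; note that quoting overprovisioning as a percentage of the *usable* capacity is a common convention, but an assumption here, and the `op_percent` helper is hypothetical:

```python
def op_percent(raw_bytes: int, usable_bytes: int) -> float:
    """Spare area expressed as a fraction of the user-visible capacity."""
    return (raw_bytes - usable_bytes) / usable_bytes

# A drive built from binary gigabytes (2^30 bytes) but advertised in
# decimal gigabytes (10^9 bytes) hides the difference as spare area:
default_op = op_percent(2**30, 10**9)
print(f"default OP: {default_op:.1%}")  # ~7.4%

# Manually reserving more: e.g. partitioning a 256 GB drive so only
# 192 GB is visible leaves 25% of the raw flash as spare area
# (33% relative to the usable capacity under this convention).
print(f"manual OP: {op_percent(256, 192):.1%}")
```

The same arithmetic explains the odd advertised sizes of high-endurance drives: the controller simply exposes less of the raw flash.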