But deep down, it is a performance principle: luxuries like pointers and virtual functions are suitable for complicated but rarely executed code, while hot loops should access flat arrays of components (or something very close) without unnecessary cache misses.
If memory accesses were as cheap as 40 years ago, inheritance vs. composition would be a lofty debate about elegance and architecture and programming language semantics and taste; now it has become a much more practical and detailed tradeoff between difficult design and bad performance.
> if memory accesses were as cheap as 40 years ago
To clarify for readers not familiar with the motivations here:
Memory access is in fact cheaper than it used to be, as you might expect. The concern is that it did not get cheaper nearly as fast as processors got faster, so the relative cost of a memory access grew: in the time one access takes, the CPU could now be doing far more calculations instead.
Specifically, what allows for better performance is accessing memory in a pattern that can be predicted and prefetched. Linear access is best, because fetching a single byte from memory has approximately the same overhead as fetching a contiguous chunk of bytes (a cache line).
Think Wile E. Coyote picking up train tracks from behind himself and putting them down in front. If your memory access looks like that, then you are less likely to suffer from memory access bottlenecks.