Glad you liked the book. We really did try hard to relate it to SAP HANA.
Your understanding of how too many partitions affect the compression factor sounds about right. It is indeed the value dictionaries that tend to take roughly the same amount of space in each partition, while the actual value-ID vectors are often RLE or block compressed, which means the storage requirement does not grow linearly with the amount of stored data.
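To make that a bit more concrete, here is a toy Python sketch (a deliberate simplification, not HANA internals): each partition keeps its own dictionary of distinct values plus a value-ID vector, and when the partitions cover largely the same set of values, the dictionaries get duplicated while the value-ID vectors still compress very well.

```python
# Toy illustration (not HANA internals): each partition keeps its own
# dictionary of distinct values plus a vector of value IDs. With many
# partitions over similar data, the dictionaries are largely duplicated,
# while the value-ID vectors still compress well (here: simple RLE).

from itertools import groupby

def dictionary_encode(column):
    """Build a sorted dictionary and the corresponding value-ID vector."""
    dictionary = sorted(set(column))
    value_of = {v: i for i, v in enumerate(dictionary)}
    value_ids = [value_of[v] for v in column]
    return dictionary, value_ids

def rle(value_ids):
    """Run-length encode the value-ID vector as (value_id, run_length) pairs."""
    return [(vid, sum(1 for _ in run)) for vid, run in groupby(value_ids)]

# One logical column split into 4 partitions holding the same few distinct values
partitions = [["DE"] * 2000 + ["FR"] * 1500 + ["US"] * 1500 for _ in range(4)]

for i, part in enumerate(partitions):
    dictionary, value_ids = dictionary_encode(part)
    encoded = rle(value_ids)
    print(f"partition {i}: {len(dictionary)} dictionary entries, "
          f"{len(value_ids)} value IDs -> {len(encoded)} RLE runs")

# Every partition repeats the same dictionary entries, so dictionary space
# grows with the number of partitions even though the data itself does not.
```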
Another factor that comes into play is that during join processing, more partitions (with more dictionaries) mean more translation structures are required to match up values across the different tables. So not only does the static space requirement increase, but also the memory requirement during query processing.
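A rough sketch of what such a translation structure could look like (again an assumed simplification, not the actual HANA join implementation): since each partition assigns its own value IDs, matching values between a partition and its join partner needs one mapping per dictionary pair, and the number of these mappings grows with the number of partitions.

```python
# Toy sketch: translate value IDs of one dictionary into value IDs of another,
# which is what a join over dictionary-encoded columns conceptually needs.

def build_translation(dict_a, dict_b):
    """Map each value ID of dictionary A to the matching ID in B (or None)."""
    pos_in_b = {v: i for i, v in enumerate(dict_b)}
    return [pos_in_b.get(v) for v in dict_a]

dict_t1_p1 = ["DE", "FR", "US"]          # partition 1 of table 1
dict_t1_p2 = ["AT", "DE", "US"]          # partition 2 of table 1
dict_t2    = ["DE", "ES", "FR", "US"]    # dictionary of the join partner

# One translation structure per (partition dictionary, partner dictionary) pair:
translations = {
    "t1_p1->t2": build_translation(dict_t1_p1, dict_t2),
    "t1_p2->t2": build_translation(dict_t1_p2, dict_t2),
}
print(translations)

# More partitions -> more dictionaries -> more of these mappings to build
# and keep in memory while the join runs.
```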
And that's also where what you called 'conventional wisdom' goes wrong: the scan operation is not in any way row oriented.
Instead, it is implemented to work on whole blocks of data at a time (SIMD instructions at CPU level) - and often this even works directly on the compressed data representation.
This makes scanning through 10,000 rows pretty much the same as scanning through 10,000,000 rows - depending on the compression and the total number of distinct values.
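As an illustration of the block-wise idea (using NumPy as a stand-in for SIMD, so this is only a sketch of the principle, not of the HANA scan code): the predicate is translated into dictionary space once, and the scan is then a single vectorized comparison over the whole value-ID vector rather than a row-by-row loop.

```python
# Illustrative sketch: a column-store scan compares whole blocks of value IDs
# at once instead of inspecting rows, and it runs on the compressed
# (dictionary-encoded) representation.

import numpy as np

dictionary = np.array(["AT", "DE", "FR", "US"])   # sorted distinct values
value_ids = np.random.randint(0, len(dictionary), 10_000_000).astype(np.uint8)

# Predicate: country = 'US' -> look up the value ID once ...
target_id = int(np.where(dictionary == "US")[0][0])

# ... then one vectorized comparison over the whole value-ID vector.
matching_rows = np.flatnonzero(value_ids == target_id)
print(len(matching_rows), "matching rows")
```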
Our rule-of-thumb recommendation works well for many cases (that is, it strikes a balance between potential issues and compression). I would recommend starting off with this and seeing how it works for your situation.
Something that might be handy in this regard is the Data Distribution Optimizer from the SAP HANA Data Warehousing Foundation option.