or why I hope for better JPEGs out of D2x and D2hs
First of all, YCC separates luminance from colour: "Y" is the luminance coordinate, and the two "C"s are colour coordinates.
Second, YCC covers a wider colour range than (s)RGB.
From the first, we can hope for far fewer colour artifacts and maze patterns; from the second, we can expect better colour fidelity. If the new ASIC used in these cameras converts RAW data using a YCC colour representation (which would be very natural, as RAW data is actually much closer to YCC than to RGB, though the conversion is more computation-intensive), we will see about 20% more resolution, lower noise, fewer blotches, less moiré, less maze/jigsaw, etc. YCC files are also easier and cleaner to resize.
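The camera's internal math is not public, but the luminance/chroma split can be illustrated with the generic full-range BT.601 RGB-to-YCbCr transform that JPEG itself commonly uses. This is only a sketch of the idea, not the ASIC's actual pipeline; the function names are mine, the coefficients are the standard BT.601 ones.

```python
# Sketch of one common "YCC" variant: full-range BT.601 YCbCr,
# as used in baseline JPEG. Not the camera's actual firmware math.

def rgb_to_ycc(r, g, b):
    """Map 8-bit RGB to (Y, Cb, Cr): Y is luminance, Cb/Cr are chroma."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def ycc_to_rgb(y, cb, cr):
    """Inverse transform, back to RGB."""
    r = y + 1.402    * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772    * (cb - 128.0)
    return r, g, b

# A neutral grey carries no chroma: Cb and Cr sit at their zero
# point (128), so all the information lives in the Y channel alone.
print(rgb_to_ycc(128, 128, 128))  # -> (128.0, 128.0, 128.0)
```

This separation is why chroma can be processed (subsampled, denoised, interpolated) independently of luminance with little visible damage, which is the point made above.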
One of the reasons why RAWMagick interpolation is pretty clean (and slow) is that we do it in a kind of YCC colour model.