The era of digital photography has fostered a wealth of myths, and the supposed intelligence of lenses is one of them. The reasoning goes like this: if a lens appears to change its DOF and focal length merely by being mounted on another camera, the lens must be intelligent enough to talk to the camera and alter its inherent optical properties according to the information it receives. This is of course nonsense, but a surprisingly large number of people believe it and put their faith in crop factors and suchlike.
As for CA with lenses on a film-based system (a "silver halide" camera in official Nikon parlance), it is caused entirely by the optics; the film itself adds nothing to the issue.
With digital, however, the picture is much more complex. Since CA typically gets worse towards the periphery of the image circle, you might think the reduced area recorded by DX-format DSLRs would avoid the most troubled outer zones. However, on a DSLR the sensor chip itself may generate CA by direct or indirect means. A typical example is light rays striking the sensor surface at an oblique angle and causing trouble for the Bayer filtering, so that light spills over into adjacent pixels and registers in the wrong colour. The microlenses on top of the pixel wells may themselves introduce CA. At a later stage, demosaicing may create artificial colour fringes, and so on. So there is an interaction between the lens, the way it projects its image onto the imaging chip, the pixel pitch and resolution of the sensor, and the algorithms needed to extract the completed image from the recorded data. With the introduction of the high-resolution D2X, we see that some lenses previously believed to be star performers are no longer superb, whilst others shine.
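To put a rough scale on this, here is a minimal sketch (in Python, not from the article) that models lateral CA as a small per-channel magnification error. The 0.02% error, the format half-diagonals, and the ~5.5 µm pixel pitch are illustrative assumptions only, but the sketch shows both why the DX crop trims the worst outer zone and why a finer pixel pitch turns even a tiny fringe into a visible fraction of a pixel.

```python
# A minimal sketch of lateral CA, treated as a small per-channel
# magnification error: a red ray lands at r * m_red instead of r,
# so the fringe width is r * |m_red - 1| and grows linearly with
# distance from the image centre.
# All numbers below are illustrative assumptions, not measured data.

def fringe_width_mm(radius_mm: float, channel_magnification: float) -> float:
    """Radial displacement of one colour channel at a given image-circle radius."""
    return radius_mm * abs(channel_magnification - 1.0)

def fringe_width_pixels(radius_mm: float, channel_magnification: float,
                        pixel_pitch_um: float) -> float:
    """The same displacement expressed in pixels for a given pixel pitch."""
    return fringe_width_mm(radius_mm, channel_magnification) * 1000.0 / pixel_pitch_um

# Assumed relative magnification error of the red channel: 0.02 %
m_red = 1.0002

# Approximate half-diagonals of the two formats
fx_half_diag_mm = 21.6   # 24 x 36 mm film/FX frame
dx_half_diag_mm = 14.2   # ~15.7 x 23.7 mm DX frame

for label, radius in (("Full-frame corner", fx_half_diag_mm),
                      ("DX corner", dx_half_diag_mm)):
    # ~5.5 um pitch is roughly what a 12 MP DX sensor uses
    px = fringe_width_pixels(radius, m_red, pixel_pitch_um=5.5)
    um = fringe_width_mm(radius, m_red) * 1000.0
    print(f"{label}: {um:.1f} um fringe, about {px:.1f} px")
```

With these assumed numbers the fringe shrinks from roughly 4 µm at the full-frame corner to under 3 µm at the DX corner, yet at a 5.5 µm pitch both remain a sizeable fraction of a pixel, which is why a high-resolution, small-pitch sensor can expose CA that coarser sensors (or film scans) never made visible.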