Eric,
I appreciate your comments. I suspect that I am going down a rabbit hole (whatever that means) with this particular quest for clarity.
My new mini-computer has an Intel i5 processor, which handles 64-bit computations, so Bob Denny seems to think that is fine for imaging. So far so good.

Since I spent six months trying, with only partial success, to get my MacBook Air with Parallels to run ACP and MaxIm DL, I don't want to go down another dead end. The good news is that my mini-computer has shown encouraging potential for imaging, which was the main reason for buying it. But it would be nice if it could do image processing too.

By color bit depth, I mean the number of bits available to display the various shades of grey. In general, n bits give 2^n levels: 8 bits can display 256 shades, and 10 bits can display 1,024.
15-16 bits of color depth is called "high color," and 24-bit seems to be called "true color."
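If it helps to see the arithmetic laid out, here is a tiny sketch (Python, purely for illustration; it just applies the 2^n rule and has nothing specific to PixInsight or any particular display):

    # Each extra bit doubles the number of distinct levels: n bits -> 2**n levels
    for bits in (8, 10, 16, 24):
        print(bits, "bits:", 2 ** bits, "levels")

    # 8 bits: 256 levels
    # 10 bits: 1024 levels
    # 16 bits: 65536 levels
    # 24 bits: 16777216 levels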

I have just found this on the PixInsight website, which is the software I am experimenting with:
"A minimum color depth of 16 bits per pixel is required, but for serious work a true-color, 24-bit video mode is a must."

I am definitely getting confused--and must now pay the price for not having seriously studied computer science.
Warmly, Roger