+ *****************************************************************************
+ *
+ * TODO:
+ *
+ * - When a new image is loaded, there is a glitch: the animation pauses
+ * while we're loading the image-to-fade-in.  On fast (2GHz) machines,
+ * this stutter is short but noticeable (usually around 1/10th of a
+ * second).  On slower machines, it can be much more pronounced.
+ * This turns out to be hard to fix...
+ *
+ * Image loading happens in four stages (the client-side portion is
+ * sketched just after this list):
+ *
+ * 1: Fork a process and run xscreensaver-getimage in the background.
+ * This writes image data to a server-side X pixmap.
+ *
+ * 2: When that completes, a callback informs us that the pixmap is ready.
+ * We must then download the pixmap data from the server with XGetImage
+ * (or XShmGetImage.)
+ *
+ * 3: Once we have the bits, we must convert them from the server-native
+ * bitmap layout to 32-bit RGBA in client endianness, to make them
+ * usable as OpenGL textures.
+ *
+ * 4: We must actually construct a texture.
+ *
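+ * As a point of reference, here is a minimal sketch of the client-side
+ * sequence (steps 2-4) as it stands; `pixmap', `w', `h' and
+ * convert_to_rgba() are stand-ins for whatever the caller actually has:
+ *
+ *     XImage *xi = XGetImage (dpy, pixmap, 0, 0, w, h, ~0L, ZPixmap);
+ *     unsigned char *rgba = convert_to_rgba (xi);  /* step 3 */
+ *     GLuint texid;
+ *     glGenTextures (1, &texid);                   /* step 4 */
+ *     glBindTexture (GL_TEXTURE_2D, texid);
+ *     glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
+ *                   GL_RGBA, GL_UNSIGNED_BYTE, rgba);
+ *     XDestroyImage (xi);
+ *
+ * All of that runs in the animating process, so every millisecond it
+ * takes is a millisecond of frozen animation.
+ *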
+ * So, the speed of step 1 doesn't really matter, since that happens in
+ * the background. But steps 2, 3, and 4 happen in *this* process, and
+ * cause the visible glitch.
+ *
+ * Step 2 can't be moved to another process without opening a second
+ * connection to the X server, which is pretty heavy-weight. (That would
+ * be possible, though; the other process could open an X connection,
+ * retrieve the pixmap, and feed it back to us through a pipe or
+ * something.)
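+ *
+ * Something like this, maybe (a sketch only: pipe plumbing, error
+ * handling, and cleanup are omitted, and `pixmap', `w' and `h' are
+ * assumed to be already known):
+ *
+ *     int fds[2];
+ *     pipe (fds);
+ *     if (fork () == 0)
+ *       {
+ *         /* Child: with its own connection, XGetImage blocks here
+ *            instead of in the animating process.  The pixmap is a
+ *            server-side resource, so its XID is usable from a second
+ *            connection as long as it hasn't been freed. */
+ *         Display *dpy2 = XOpenDisplay (0);
+ *         XImage *xi = XGetImage (dpy2, pixmap, 0, 0, w, h, ~0L,
+ *                                 ZPixmap);
+ *         write (fds[1], xi->data, xi->bytes_per_line * xi->height);
+ *         exit (0);
+ *       }
+ *     /* Parent: slurp the bits from fds[0] incrementally, e.g. one
+ *        short non-blocking read() per frame, so the animation never
+ *        stalls waiting for the whole image. */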
+ *
+ * Step 3 might be optimizable by coding tuned versions of
+ * grab-ximage.c:copy_ximage() for the most common depths and bit orders.
+ * (Or by moving it into the other process along with step 2.)
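+ *
+ * For instance, by far the most common case is a 24-bit-deep ZPixmap
+ * with 32 bits per pixel.  A loop hard-coded for just that case might
+ * look like this (a sketch only; it assumes LSBFirst BGRX byte order,
+ * which is what typical x86 servers hand back, so the image's masks
+ * and byte order should be checked before taking this path):
+ *
+ *     const unsigned char *in = (const unsigned char *) xi->data;
+ *     unsigned char *out = rgba;
+ *     int x, y;
+ *     for (y = 0; y < xi->height; y++)
+ *       {
+ *         const unsigned char *p = in + y * xi->bytes_per_line;
+ *         for (x = 0; x < xi->width; x++, p += 4)
+ *           {
+ *             *out++ = p[2];   /* R */
+ *             *out++ = p[1];   /* G */
+ *             *out++ = p[0];   /* B */
+ *             *out++ = 0xFF;   /* A */
+ *           }
+ *       }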
+ *
+ * Step 4 is the hard one, though.  It might be possible to speed up
+ * this step if there is some way to allow two GL processes to share
+ * texture data.  Unless, of course, all the time being consumed by
+ * step 4 is because the graphics pipeline is flooded, in which case
+ * that other process would starve the screen anyway.
+ *
+ * Is it possible to use a single GLX context in a multithreaded way?
+ * Or use a second GLX context, but allow the two contexts to share data?
+ * I can't find any documentation about this.
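+ *
+ * For what it's worth, glXCreateContext() does take a share-list
+ * argument, and textures created in one context of a share group are
+ * supposed to be usable by name from the other; whether that actually
+ * cures the stall here (and whether Xlib's thread story cooperates) is
+ * the open question.  A sketch:
+ *
+ *     /* `vis' is the XVisualInfo used for the main context; call
+ *        XInitThreads() before any other Xlib call if a second
+ *        thread will touch the display. */
+ *     GLXContext main_ctx   = glXCreateContext (dpy, vis, 0, True);
+ *     GLXContext loader_ctx = glXCreateContext (dpy, vis, main_ctx,
+ *                                               True);
+ *     /* A loader thread would glXMakeCurrent (dpy, some_window,
+ *        loader_ctx), build the texture there, then hand the texture
+ *        id back to the renderer. */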
+ *
+ * How does Apple do this with their MacOSX slideshow screen saver?
+ * Perhaps it's easier for them because their OpenGL libraries have
+ * thread support at a lower level?