# Hidden steps in 3D

At right is a red (left-eye filter) and green (right-eye filter) 3D picture of a single sphere. The image is also available here.

Believe it or not, this image contains a series of at least six levels, delineated by at least five nearly concentric, sharp closed boundaries well within the outer circle.

Where are these closed boundaries located, i.e., at what distances from the circle's center? Can you see them with your unaided eye?

The easiest (only?) way to see them may be to use red-green 3D glasses, i.e., a red transparency over your left eye and a green one over your right. It may help to imagine that you are looking at a three-dimensional ball bulging toward you out of the screen.

You'll know it when and if you can see them. How many sharp edges can you make out?

## Vertical fat-bits in stereo-pairs

These edges were discovered by accident. They arise because finite lateral-pixel sizes (common in today's electronic images but perhaps first used by pointillists like Georges-Pierre Seurat to create many colors from few) translate into discrete "vertical steps" in stereo vision.

Basically, we were creating stereo pairs of some textured molecular models. I was shocked to find that the molecule positions rendered nicely in three dimensions, but that the individual atoms weren't spheres at all: they were a series of concentric terraces. As discussed below, these terraces were not programmed in; they were an automatic result of the screen's finite lateral resolution.

Suppose that a distance change of Δz gives rise to a lateral displacement of Δx = 2 Δz tan(θ) in images recorded by a pair of eyes whose separation half-angle is θ. Then an image with pixels of width w can only contain information on discrete height steps of Δh = n w / (2 tan(θ)), where n is a positive or negative integer.
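The step formula above can be turned into numbers with a minimal sketch. The viewing geometry here is assumed, not taken from the article: a 6.5 cm eye separation, a 50 cm viewing distance, and roughly 0.3 mm pixels.

```python
import math

def depth_step(n, pixel_width, half_angle):
    """Depth of the n-th discrete step: dh = n * w / (2 * tan(theta))."""
    return n * pixel_width / (2.0 * math.tan(half_angle))

# Hypothetical viewing geometry (not from the article): eyes 6.5 cm apart,
# screen 50 cm away, so the separation half-angle is atan(3.25 / 50).
theta = math.atan2(3.25, 50.0)   # radians
w = 0.03                         # assumed pixel width in cm (~0.3 mm)

steps = [depth_step(n, w, theta) for n in (1, 2, 3)]  # ~0.23, 0.46, 0.69 cm

# Halving the viewing distance enlarges theta, which shrinks every step:
theta_close = math.atan2(3.25, 25.0)
closer_step = depth_step(1, w, theta_close)
```

Note that the steps come out a few millimeters apart for these assumed numbers, and that `closer_step` is smaller than `steps[0]`: moving your face toward the screen compresses the whole staircase, which is the prediction tested in the next paragraph.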

This predicts that as you move your face toward the screen, θ will get larger and therefore the apparent depths of everything (including the spacing between steps) will get smaller. Is that consistent with your observations?

This is also a testament to the amazing calculation capabilities of the mammalian visual system. When we first started trying to get a computer to recognize these edges, back in the mid-1990s, it took about a week of computation to do poorly something that a person walking in off the street can do in seconds. Of course, we still haven't figured out how to "record" what that talented person is seeing for more quantitative study.

## Generating the image

The texture on the sphere was generated as a series of random dots whose frequency was proportional to the sphere surface-area associated with a specific projected region. The height of each projected region was then calculated, and the texture in the red and green images was displaced left-right by an amount proportional to that region's height from the surrounding (zero-height and therefore red+green=yellow) plane.
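The recipe above can be sketched in a few lines of Python. This is not the original MOLECULE program, just an illustration: sampling z uniformly places dots uniformly over the hemisphere's surface area (Archimedes' hat-box result), which matches the "frequency proportional to surface area" rule, and the red and green copies are then offset horizontally in proportion to each dot's height. The function name and `max_shift` parameter are invented for this sketch.

```python
import math
import random

def anaglyph_dots(n_dots=2000, radius=1.0, max_shift=10, seed=0):
    """Random-dot anaglyph sketch: sample dots uniformly over the front
    hemisphere's surface, project onto the x-y plane, and shift the red
    (left-eye) and green (right-eye) copies apart in proportion to height z.
    Dots at zero height get zero shift, so red and green coincide (yellow)."""
    rng = random.Random(seed)
    red, green = [], []
    for _ in range(n_dots):
        z = rng.random()                    # uniform z => uniform surface area
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        x = radius * s * math.cos(phi)
        y = radius * s * math.sin(phi)
        shift = max_shift * z               # displacement proportional to height
        red.append((x - shift / 2.0, y))    # left-eye (red) copy
        green.append((x + shift / 2.0, y))  # right-eye (green) copy
    return red, green
```

Plotting the two dot lists in red and green (e.g. with matplotlib) yields an anaglyph in the same spirit as the sphere image, pixel quantization of the shifts and all.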

You can see this in detail by looking for laterally displaced red-green motifs in the center part of the sphere. Near the center their displacement may be as large as 10 pixels. Similar motifs with smaller lateral displacements appear as one moves toward the edge of the sphere, and our brains do an amazing job of statistically analyzing these offsets and "explaining" them with a model of sharp-edged layers.
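The offset analysis our brains perform can be caricatured in code. The toy function below (not the dissertation's method, just a brute-force sketch) slides one image row across the other and keeps the shift with the largest overlap score, which is the crudest possible estimate of the local red-green displacement.

```python
def best_shift(left_row, right_row, max_shift=12):
    """Estimate the lateral displacement between two image rows by
    brute-force correlation: try every shift in [-max_shift, max_shift]
    and return the one whose overlapping dot product is largest."""
    best, best_score = 0, float("-inf")
    n = len(left_row)
    for s in range(-max_shift, max_shift + 1):
        # Sum left[i] * right[i + s] over the indices where both are valid.
        score = sum(left_row[i] * right_row[i + s]
                    for i in range(max(0, -s), min(n, n - s)))
        if score > best_score:
            best, best_score = s, score
    return best
```

Doing this for every small patch of the image yields a disparity map, and the sharp edges are where that map jumps by one pixel; a week of 1990s computation for a whole image seems plausible given how much sliding and summing is involved.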

This particular image was generated by the MOLECULE program linked here, using a "single-atom" molecule, e.g., a BALL.MOL file whose ASCII contents were "0, 0, 0, 6".

## Finding the edges

How would you program a computer to find these sharp-edges? Even the correlation math that our brains do when looking at images in sequence (at right) does not seem up to the task. Notes about a program that can find these edges may be found here*.

* Chang Shen, "Lateral displacement maps obtained from scanning probe microscope images," Ph.D. dissertation in Physics and Astronomy, UM-StL and UM-Rolla (1997), 154 pp. (cf. Dissertation Abstracts International 58-11B, p. 6029) (pdf).