Stereoscopy

From Wikipedia, the free encyclopedia

Stereoscopy (also called stereoscopics or 3D imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek "στερεός" (stereos), "firm, solid"[2] + "σκοπέω" (skopeō), "to look", "to see".[3] Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.

[Image: Pocket stereoscope with original test image. Used by the military to examine stereoscopic pairs of aerial photographs.]

[Image: View of Boston, c. 1860; an early stereoscopic card for viewing a scene from nature.]

[Image: The Kaiserpanorama consisted of a multi-station viewing apparatus and sets of stereo slides. Patented by A. Fuhrmann around 1890.[1]]

Contents
1 Background
1.1 Visual requirements
2 Side-by-side
2.1 Freeviewing
2.2 Autostereogram
2.3 Stereoscope and stereographic cards
2.4 Transparency viewers
2.5 Head-mounted displays
2.6 Virtual retinal displays
3 3D viewers
3.1 Active
3.1.1 Shutter systems
3.2 Passive
3.2.1 Polarization systems
3.2.2 Interference filter systems
3.2.3 Color anaglyph systems
3.2.4 Chromadepth system
3.2.5 Pulfrich method
3.2.6 Over/under format
4 Other display methods without viewers
4.1 Autostereoscopy
4.1.1 Holography
4.1.2 Volumetric displays
4.1.3 Integral imaging
4.2 Wiggle stereography
5 Stereo photography techniques
5.1 Film photography
5.2 Digital photography
5.3 Digital stereo bases (baselines)
6 Base line selection
6.1 Longer base line for distant objects "Hyper Stereo"
6.1.1 Limitations of hyperstereo
6.1.2 A practical example
6.2 Shorter baseline for ultra closeups "Macro stereo"
6.3 Baseline tailored to viewing method
6.4 Variable base for "geometric stereo"
6.4.1 Precise stereoscopic baseline calculation methods
6.4.2 Multi-rig stereoscopic cameras
7 Stereo Window
8 Bibliography
8.1 Footnotes
8.2 References
8.3 Sources
9 External links

[Image: Company of ladies watching stereoscopic photographs, painting by Jacob Spoel, before 1868. A very early depiction of people using a stereoscope.]

Background

Stereoscopy creates the illusion of three-dimensional depth from given two-dimensional images. Human vision, including the perception of depth, is a complex process which only begins with the acquisition of visual information taken in through the eyes; much processing ensues within the brain, as it strives to make intelligent and meaningful sense of the raw information provided.
One of the most important visual functions that occur within the brain as it interprets what the eyes see is assessing the relative distances of objects from the viewer and the depth dimension of those same perceived objects. The brain makes use of a number of cues to determine relative distances and depth in a perceived scene, including:[4]

Stereopsis
Accommodation of the eye
Overlapping of one object by another
Subtended visual angle of an object of known size
Linear perspective (convergence of parallel edges)
Vertical position (objects higher in the scene generally tend to be perceived as further away)
Haze, desaturation, and a shift to bluishness
Change in size of textured pattern detail

(All the above cues, with the exception of the first two, are present in traditional two-dimensional images such as paintings, photographs, and television.)

Stereoscopy is the production of the illusion of depth in a photograph, movie, or other two-dimensional image by presenting a slightly different image to each eye, thereby adding the first of these cues (stereopsis). The two 2D offset images are then combined in the brain to give the perception of 3D depth. Note that since all points in the image focus at the same plane regardless of their depth in the original scene, the second cue, focus, is still not duplicated, and therefore the illusion of depth is incomplete. There are two principal effects of stereoscopy that are unnatural for human vision: first, the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the display or screen and the real origin of that light; and second, possible crosstalk between the eyes, caused by imperfect image separation in some methods.

Although the term "3D" is ubiquitously used, the presentation of dual 2D images is distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case of "3D" displays, the observer's head and eye movement will not increase information about the 3-dimensional objects being displayed. Holographic displays and volumetric displays are examples of displays that do not have this limitation. Just as it is not possible to recreate a full 3-dimensional sound field merely with two stereophonic speakers, it is likewise an overstatement of capability to refer to dual 2D images as being "3D". The accurate term "stereoscopic" is more cumbersome than the common misnomer "3D", which has become entrenched after many decades of unquestioned misuse. Although most stereoscopic displays do not qualify as real 3D displays, all real 3D displays are also stereoscopic displays because they meet the lower criteria as well. Most 3D displays use this stereoscopic method to convey images.
Stereoscopy was first invented by Sir Charles Wheatstone in 1838.[5][6] Wheatstone originally used his stereoscope (a rather bulky device)[7] with drawings because photography was not yet available, yet his original paper seems to foresee the development of a realistic imaging method:[8]

[Image: Wheatstone mirror stereoscope.]

For the purposes of illustration I have employed only outline figures, for had either shading or colouring been introduced it might be supposed that the effect was wholly or in part due to these circumstances, whereas by leaving them out of consideration no room is left to doubt that the entire effect of relief is owing to the simultaneous perception of the two monocular projections, one on each retina. But if it be required to obtain the most faithful resemblances of real objects, shadowing and colouring may properly be employed to heighten the effects. Careful attention would enable an artist to draw and paint the two component pictures, so as to present to the mind of the observer, in the resultant perception, perfect identity with the object represented. Flowers, crystals, busts, vases, instruments of various kinds, &c., might thus be represented so as not to be distinguished by sight from the real objects themselves.[5]

Stereoscopy is used in photogrammetry and also for entertainment through the production of stereograms. Stereoscopy is useful in viewing images rendered from large multi-dimensional data sets such as are produced by experimental data. An early patent for 3D imaging in cinema and television was granted to physicist Theodor V. Ionescu in 1936. Modern industrial three-dimensional photography may use 3D scanners to detect and record three-dimensional information.[9] The three-dimensional depth information can be reconstructed from two images using a computer by corresponding the pixels in the left and right images (e.g.,[10]). Solving the correspondence problem in the field of computer vision aims to create meaningful depth information from two images.

Visual requirements

Anatomically, there are 3 levels of binocular vision required to view stereo images:

1. Simultaneous perception
2. Fusion (binocular 'single' vision)
3. Stereopsis

These functions develop in early childhood. In some people, strabismus disrupts the development of stereopsis; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines the minimum image disparity they can perceive as depth. It is believed that approximately 12% of people are unable to properly see 3D images, due to a variety of medical conditions.[11][12] According to another experiment, up to 30% of people have very weak stereoscopic vision, preventing depth perception based on stereo disparity. This nullifies or greatly decreases the immersive effect of stereo for them.[13]

Side-by-side

Traditional stereoscopic photography consists of creating a 3D illusion starting from a pair of 2D images, a stereogram. The easiest way to enhance depth perception in the brain is to provide the eyes of the viewer with two different images, representing two perspectives of the same object, with a minor deviation equal or nearly equal to the perspectives that both eyes naturally receive in binocular vision.
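In digital practice, such a pair is often simply placed next to itself in a single side-by-side image. The following is a minimal sketch, not taken from the article's sources, using the Pillow library; the file names are hypothetical and both frames are assumed to be the same size.

```python
# Minimal sketch: combine a left/right pair into a parallel-view
# side-by-side stereogram with Pillow. File names are hypothetical.
from PIL import Image

left = Image.open("left.jpg")    # image for the left eye
right = Image.open("right.jpg")  # image for the right eye (same size assumed)

# Place the left-eye image on the left for parallel viewing;
# swapping the two halves would give a cross-eyed pair instead.
pair = Image.new("RGB", (left.width + right.width, left.height))
pair.paste(left, (0, 0))
pair.paste(right, (left.width, 0))
pair.save("stereogram_parallel.jpg")
```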
If eyestrain and distortion are to be avoided, each of the two 2D images should preferably be presented to its eye in such a way that any object at infinite distance is perceived by that eye while it is oriented straight ahead, the viewer's eyes being neither crossed nor diverging. When the picture contains no object at infinite distance, such as a horizon or a cloud, the pictures should be spaced correspondingly closer together.

[Image: "The early bird catches the worm" – stereograph published in 1900 by North-Western View Co. of Baraboo, Wisconsin, digitally restored.]

The principal advantage of side-by-side viewers is that there is no diminution of brightness, so images may be presented at very high resolution and in full-spectrum color. Side-by-side images are simple to create, and little or no additional image processing is required. Under some circumstances, such as when a pair of images is presented for crossed or parallel eye viewing, no device or additional optical equipment is needed. However, it can be difficult or uncomfortable to view without optical aids.

Freeviewing

Freeviewing is viewing a side-by-side image without using a viewer.[14] Two methods are available to freeview:[15][16]

The parallel view method uses two images with no more than about 65 mm between corresponding image points, the average distance between the two eyes. The viewer looks through the image while keeping the lines of sight parallel; this can be difficult with normal vision, since eye focus and binocular convergence normally work together.
The cross-eyed view method exchanges the right and left images and views them cross-eyed, with the right eye viewing the left image and vice versa. Prismatic, self-masking glasses are now being used by cross-view advocates. These reduce the degree of convergence and allow large images to be displayed.

[Image: Printable cross-eye viewer.]

Autostereogram

Main article: Autostereogram

An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional (3D) scene within the human brain from an external two-dimensional image. In order to perceive 3D shapes in these autostereograms, one must overcome the normally automatic coordination between focusing and vergence.

Stereoscope and stereographic cards

Main article: Stereoscope

The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly different angles, are simultaneously presented, one to each eye. A simple stereoscope is limited in the size of the image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the use of larger images that can present more detailed information in a wider field of view.

Transparency viewers

Main article: Slide viewer#Stereo slide viewer

Pairs of stereo views are printed on translucent film which is then mounted around the edge of a cardboard disk, images of each pair being diametrically opposite. An advantage offered by transparency viewing is that a wider field of view may be presented, since the images, being illuminated from the rear, may be placed much closer to the lenses. The practice of viewing film-based transparencies in stereo via a viewer dates to at least as early as 1931, when Tru-Vue began to market filmstrips that were fed through a handheld device made from Bakelite.
In the 1940s, a modified and miniaturized variation of this technology was introduced as the View-Master. Other key companies that developed and marketed stereoscopic viewers and cards include Bruguiere, Lestrade and ROMO (Robert Mouzillat).

[Image: A View-Master Model E of the 1950s.]

Head-mounted displays

Main article: Head-mounted display

The user typically wears a helmet or glasses with two small LCD or OLED displays with magnifying lenses, one for each eye. The technology can be used to show stereo films, images or games, but it can also be used to create a virtual display. Head-mounted displays may also be coupled with head-tracking devices, allowing the user to "look around" the virtual world by moving their head, eliminating the need for a separate controller. Performing this update quickly enough to avoid inducing nausea in the user requires a great amount of computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are beginning to become available at more reasonable cost.

[Image: An HMD with a separate video source displayed in front of each eye to achieve a stereoscopic effect.]

Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real world view, creating what is called augmented reality. This is done by reflecting the video images through partially reflective mirrors; the real world view is seen through the mirrors' reflective surface. Experimental systems have been used for gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is expected to have wide application in the maintenance of complex systems, as it can give a technician what is effectively "x-ray vision" by combining computer graphics rendering of hidden elements with the technician's natural vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating the need to obtain and carry bulky paper documents. Augmented stereoscopic vision is also expected to have applications in surgery, as it allows the combination of radiographic data (CAT scans and MRI imaging) with the surgeon's vision.

Virtual retinal displays

Main article: Virtual retinal display

A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), not to be confused with a "Retina Display", is a display technology that draws a raster display (like a television) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them.

3D viewers

There are two categories of 3D viewer technology, active and passive. Active viewers have electronics which interact with a display.

Active

Shutter systems

Main article: Active shutter 3D system

A shutter system works by openly presenting the image intended for the left eye while blocking the right eye's view, then presenting the right-eye image while blocking the left eye, and repeating this so rapidly that the interruptions do not interfere with the perceived fusion of the two images into a single 3D image. It generally uses liquid crystal shutter glasses.
Each eye's glass contains a liquid crystal layer which has the property of becoming dark when voltage is applied, being otherwise transparent. The glasses are controlled by a timing signal that allows them to alternately darken over one eye and then the other, in synchronization with the refresh rate of the screen.

[Image: A pair of LCD shutter glasses used to view XpanD 3D films. The thick frames conceal the electronics and batteries.]

Passive

[Image: RealD circular polarized glasses.]

Polarization systems

Main article: Polarized 3D system

To present stereoscopic pictures, two images are projected superimposed onto the same screen through polarizing filters or presented on a display with polarized filters. For projection, a silver screen is used so that polarization is preserved. The viewer wears low-cost eyeglasses which also contain a pair of opposite polarizing filters. As each filter only passes light which is similarly polarized and blocks the oppositely polarized light, each eye only sees one of the images, and the effect is achieved.

Interference filter systems

Main article: Anaglyph 3D#Interference filter systems

This technique uses specific wavelengths of red, green, and blue for the right eye, and different wavelengths of red, green, and blue for the left eye. Eyeglasses which filter out the very specific wavelengths allow the wearer to see a full color 3D image. It is also known as spectral comb filtering, wavelength multiplex visualization, or super-anaglyph. Dolby 3D uses this principle. The Omega 3D/Panavision 3D system has also used an improved version of this technology.[17] In June 2012 the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical, who marketed it on behalf of Panavision, citing "challenging global economic and 3D market conditions".[18] Although DPVO dissolved its business operations, Omega Optical continues promoting and selling 3D systems to non-theatrical markets. Omega Optical's 3D system contains projection filters and 3D glasses. In addition to the passive stereoscopic 3D system, Omega Optical has produced enhanced anaglyph 3D glasses. The Omega red/cyan anaglyph glasses use complex metal oxide thin film coatings and high quality annealed glass optics.

Color anaglyph systems

Main article: Anaglyph 3D

Anaglyph 3D is the name given to the stereoscopic 3D effect achieved by means of encoding each eye's image using filters of different (usually chromatically opposite) colors, typically red and cyan. Anaglyph 3D images contain two differently filtered colored images, one for each eye. When viewed through the "color-coded" "anaglyph glasses", each of the two images reaches one eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses this into the perception of a three-dimensional scene or composition.
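As a rough sketch of this encoding (illustrative only, not any particular product's processing), a red-cyan anaglyph can be assembled by taking the red channel from the left-eye image and the green and blue channels from the right-eye image. File names are hypothetical and both frames are assumed to be aligned and of equal size.

```python
# Rough sketch of red-cyan anaglyph encoding: red channel from the
# left-eye image, green and blue channels from the right-eye image.
from PIL import Image

left = Image.open("left.jpg").convert("RGB")
right = Image.open("right.jpg").convert("RGB")

r, _, _ = left.split()    # red channel, seen through the red filter
_, g, b = right.split()   # green and blue channels, seen through the cyan filter
anaglyph = Image.merge("RGB", (r, g, b))
anaglyph.save("anaglyph_red_cyan.jpg")
```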
Chromadepth system

Main article: ChromaDepth

The ChromaDepth procedure of American Paper Optics is based on the fact that with a prism, colors are separated by varying degrees. The ChromaDepth eyeglasses contain special view foils, which consist of microscopically small prisms. This causes the image to be translated by an amount that depends on its color. If a prism foil is used over one eye but not the other, the two perceived pictures are, depending upon color, more or less widely separated. The brain produces the spatial impression from this difference.

The main advantage of this technology is that ChromaDepth pictures can also be viewed without eyeglasses, as ordinary two-dimensional images, without problems (unlike two-color anaglyphs). However, the choice of colors is limited, since they carry the depth information of the picture; if the color of an object is changed, its observed distance will also change.[citation needed]

[Image: Anaglyph 3D glasses.]

[Image: ChromaDepth glasses with prism-like film.]

Pulfrich method

Main article: Pulfrich effect

The Pulfrich effect is based on the phenomenon of the human eye processing images more slowly when there is less light, as when looking through a dark lens. Because the Pulfrich effect depends on motion in a particular direction to instigate the illusion of depth, it is not useful as a general stereoscopic technique. For example, it cannot be used to show a stationary object apparently extending into or out of the screen; similarly, objects moving vertically will not be seen as moving in depth. Incidental movement of objects will create spurious artifacts, and these incidental effects will be seen as artificial depth not related to actual depth in the scene.

[Image: KMQ stereo prismatic viewer with openKMQ plastics extensions.]

Over/under format

Stereoscopic viewing is achieved by placing an image pair one above the other. Special viewers are made for the over/under format that tilt the right line of sight slightly up and the left line of sight slightly down. The most common one with mirrors is the View Magic. Another, with prismatic glasses, is the KMQ viewer.[19] A recent usage of this technique is the openKMQ project.[20]

Other display methods without viewers

Autostereoscopy

Main article: Autostereoscopy

Autostereoscopic display technologies use optical components in the display, rather than worn by the user, to enable each eye to see a different image. Because headgear is not required, it is also called "glasses-free 3D". The optics split the images directionally into the viewer's eyes, so the display viewing geometry requires limited head positions that will achieve the stereoscopic effect. Automultiscopic displays provide multiple views of the same scene, rather than just two. Each view is visible from a different range of positions in front of the display. This allows the viewer to move left-right in front of the display and see the correct view from any position. The technology includes two broad classes of displays: those that use head-tracking to ensure that each of the viewer's two eyes sees a different image on the screen, and those that display multiple views so that the display does not need to know where the viewers' eyes are directed. Examples of autostereoscopic display technology include lenticular lens, parallax barrier, volumetric display, holography and light field displays.

Holography

Main articles: Holography and Computer Generated Holography

Research into holographic displays has produced devices which are able to create a light field identical to that which would emanate from the original scene, with both horizontal and vertical parallax across a large range of viewing angles.
The effect is similar to looking through a window at the scene being reproduced; this may make CGH the most convincing of the 3D display technologies, but as yet the large amounts of calculation required to generate a detailed hologram largely prevent its application outside of the laboratory.

[Image: The Nintendo 3DS uses parallax barrier autostereoscopy to display a 3D image.]

Volumetric displays

Main article: Volumetric display

Volumetric displays use some physical mechanism to display points of light within a volume. Such displays use voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes stacked up, and rotating panel displays, where a rotating panel sweeps out a volume. Other technologies have been developed to project light dots in the air above a device: an infrared laser is focused on a destination in space, generating a small bubble of plasma which emits visible light.

[Image: Laser plasma volumetric display.]

Integral imaging

Main article: Integral imaging

Integral imaging is an autostereoscopic or multiscopic 3D display, meaning that it displays a 3D image without the use of special glasses on the part of the viewer. It achieves this by placing an array of microlenses (similar to a lenticular lens) in front of the image, where each lens looks different depending on viewing angle. Thus, rather than displaying a 2D image that looks the same from every direction, it reproduces a 4D light field, creating stereo images that exhibit parallax when the viewer moves.

Wiggle stereography

Main article: Wiggle stereoscopy

Wiggle stereoscopy is an image display technique achieved by quickly alternating the display of the left and right sides of a stereogram. It is commonly found in animated GIF format on the web. Online examples are visible in the New York Public Library stereogram collection (http://stereo.nypl.org/create). The technique is also known as "Piku-Piku".[21]
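As a minimal sketch of this technique (illustrative only; file names and the frame duration are hypothetical, and the two frames are assumed to be aligned), a looping GIF alternating the two views can be written with Pillow:

```python
# Minimal sketch of wiggle stereoscopy: alternate the left and right
# frames of a stereo pair in a looping GIF. File names are hypothetical.
from PIL import Image

left = Image.open("left.jpg")
right = Image.open("right.jpg")

# duration is the time per frame in milliseconds; loop=0 repeats forever.
left.save("wiggle.gif", save_all=True, append_images=[right],
          duration=150, loop=0)
```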
Stereo photography techniques

Film photography

It is necessary to take two photographs for a stereoscopic image. This can be done with two cameras, with one camera moved quickly between two positions, or with a stereo camera incorporating two or more side-by-side lenses.

In the 1950s, stereoscopic photography regained popularity when a number of manufacturers began introducing stereoscopic cameras to the public. The new cameras were developed to use 135 film, which had gained popularity after the close of World War II. Many of the conventional cameras used the film for 35 mm transparency slides, and the new stereoscopic cameras utilized the film to make stereoscopic slides. The Stereo Realist camera was the most popular, and its 5P picture format became a standard. The stereoscopic cameras were marketed with special viewers that allowed for the use of such slides. With these cameras the public could easily create their own stereoscopic memories. Although their popularity has waned, some of these cameras are still in use today.

[Image: The Stereo Realist, which defined a new stereo format.]

The 1980s saw a minor revival of stereoscopic photography when point-and-shoot stereo cameras were introduced. Most of these cameras suffered from poor optics and plastic construction, and were designed to produce lenticular prints, a format which never gained wide acceptance, so they never gained the popularity of the 1950s stereo cameras.

Digital photography

The beginning of the 21st century marked the coming of the age of digital photography. Stereo lenses were introduced which could turn an ordinary film camera into a stereo camera by using a special double lens to take two images and direct them through a single lens to capture them side by side on the film. Although current digital stereo cameras cost hundreds of dollars,[22] cheaper models also exist, for example those produced by the company Loreo. It is also possible to create a twin-camera rig by mounting two cameras on a bracket, spaced somewhat apart, together with a "shepherd" device to synchronize the shutter and flash of the two cameras so that both take pictures at the same time. Newer cameras are even being used to shoot "step video" 3D slide shows with many pictures, almost like a 3D motion picture if viewed properly. A modern camera can take ten pictures per second, with images that greatly exceed HDTV resolution.

[Image: Sputnik stereo camera (Soviet Union, 1960s). Although there are three lenses present, only the lower two are used for the photograph – the third lens serves as a viewfinder for composition. The Sputnik produces two side-by-side square images on 120 film.]

If anything is in motion within the field of view, it is necessary to take both images at once, either through use of a specialized two-lens camera, or by using two identical cameras, operated as close as possible to the same moment. A single camera can also be used if the subject remains perfectly still (such as an object in a museum display). Two exposures are required; the camera can be moved on a sliding bar for offset, or, with practice, the photographer can simply shift the camera while holding it straight and level. This method of taking stereo photos is sometimes referred to as the "cha-cha" or "rock and roll" method.[23] It is also sometimes referred to as the "astronaut shuffle" because it was used to take stereo pictures on the surface of the moon using normal monoscopic equipment.[24]

For the most natural-looking stereo, most stereographers move the camera about 65 mm, the distance between the eyes,[25] but some experiment with other distances. A good rule of thumb is to shift sideways 1/30th of the distance to the closest subject for side-by-side display, or just 1/60th if the image is also to be used for color anaglyph or anachrome image display. For example, when enhanced depth beyond natural vision is desired and a photo of a person in front of a house is being taken, and the person is thirty feet away, then the camera should be moved 1 foot between shots.[25]

The stereo effect is not significantly diminished by slight pan or rotation between images. In fact, slight rotation inwards (also called 'toe-in') can be beneficial. Bear in mind that both images should show the same objects in the scene (just from different angles) – if a tree is on the edge of one image but out of view in the other image, it will appear in a ghostly, semi-transparent way to the viewer, which is distracting and uncomfortable. Therefore, the images are cropped so they completely overlap, or the cameras are 'toed in' so that the images completely overlap without having to discard any of the images.
However, too much 'toe-in' can cause 'keystoning' and eye strain, for reasons best described here.[26]

Digital stereo bases (baselines)

There are different cameras with different stereo bases (the distance between the two camera lenses) in the non-professional market of 3D digital cameras, used for video and also for stills:

10 mm Panasonic 3D Lumix H-FT012 lens (for the GH2, GF2, GF3, GF5 cams and also for the hybrid W8 cam).
12 mm Praktica and Medion 3D (two clones of the DXG-5D8 cam).
20 mm Sony Bloggie 3D.
23 mm Loreo 3D Macro lens.
25 mm LG Optimus 3D and LG Optimus 3D MAX smartphones and the close-up macro adapter for the W1 and W3 Fujifilm cams.
28 mm Sharp Aquos SH80F smartphone and the Toshiba Camileo Z100 camcorder.
30 mm Panasonic 3D1 camera.
32 mm HTC EVO 3D smartphone.
35 mm JVC TD1, DXG-5G2V and Vivitar 790 HD (only for anaglyph stills and video) camcorders.
40 mm Aiptek I2, Aiptek IS2, Aiptek IH3 and Viewsonic 3D cams.
50 mm Loreo for full frame cams, and the 3D FUN cam of 3dInlife.
55 mm SVP dc-3D-80 cam (parallel & anaglyph, stills & video).
60 mm Vivitar 3D cam (only for anaglyph pictures).
75 mm Fujifilm W3 cam.
77 mm Fujifilm W1 cam.
88 mm Loreo 3D lens for digital cams.
140 mm Cyclopital3D base extender for the JVC TD1 and Sony TD10.
200 mm Cyclopital3D base extender for the Panasonic AG-3DA1.
225 mm Cyclopital3D base extender for the Fujifilm W1 and W3 cams.

Base line selection

For general purpose stereo photography, where the goal is to duplicate natural human vision and give a visual impression as close as possible to actually being there, the correct baseline (the distance between where the right and left images are taken) would be the same as the distance between the eyes.[27] When images taken with such a baseline are viewed using a viewing method that duplicates the conditions under which the picture was taken, the result is an image much the same as what would be seen at the site the photo was taken. This could be described as "ortho stereo." An example would be the Realist format that was so popular in the late 1940s to mid-1950s and is still being used by some today. When these images are viewed using high quality viewers, or seen with a properly set up projector, the impression is, indeed, very close to being at the site of photography.

[Image: Fujifilm FinePix Real 3D W3.]

The baseline used in such cases will be about 50 mm to 80 mm. This is what is generally referred to as a "normal" baseline, used in most stereo photography. There are, however, situations where it might be desirable to use a longer or shorter baseline. The factors to consider include the viewing method to be used and the goal in taking the picture. Note that the concept of baseline also applies to other branches of stereography, such as stereo drawings and computer generated stereo images, but there it involves the point of view chosen rather than the actual physical separation of cameras or lenses.
Longer base line for distant objects "Hyper Stereo"

If a stereo picture is taken of a large, distant object such as a mountain or a large building using a normal base, it will appear to be flat.[28] This is in keeping with normal human vision; it would look flat if one were actually there. But if the object looks flat, there doesn't seem to be any point in taking a stereo picture, as it will simply seem to be behind a stereo window, with no depth in the scene itself, much like looking at a flat photograph from a distance. One way of dealing with this situation is to include a foreground object to add depth interest and enhance the feeling of "being there", and this is the advice commonly given to novice stereographers.[29][30] Caution must be used, however, to ensure that the foreground object is not too prominent and appears to be a natural part of the scene, otherwise it will seem to become the subject, with the distant object being merely the background.[31] In cases like this, if the picture is just one of a series with other pictures showing more dramatic depth, it might make sense just to leave it flat, but behind a window.[31]

For making stereo images featuring only a distant object (e.g., a mountain with foothills), the camera positions can be separated by a larger distance (called the "interaxial" or stereo base, often mistakenly called "interocular") than the adult human norm of 62–65 mm. This will effectively render the captured image as though it were seen by a giant, and thus will enhance the depth perception of these distant objects and reduce the apparent scale of the scene proportionately.[32] However, in this case care must be taken not to bring objects in the close foreground too close to the viewer, as they will show excessive parallax and can complicate stereo window adjustment.

[Image: Midtown Manhattan stereo photograph, arranged for cross-eyed viewing.]

There are two main ways to accomplish this. One is to use two cameras separated by the required distance; the other is to shift a single camera the required distance between shots. The shift method has been used with cameras such as the Stereo Realist to take hypers, either by taking two pairs and selecting the best frames, or by alternately capping each lens and recocking the shutter.[28][33]

[Image: Hyperstereo example taken from an airplane while flying over Greenland, arranged for cross-eyed viewing.]

It is also possible to take hyperstereo pictures using an ordinary single-lens camera aiming out of an airplane. One must be careful, however, about movement of clouds between shots.[34] It has even been suggested that a version of hyperstereo could be used to help pilots fly planes.[35]

In such situations, where an ortho stereo viewing method is used, a common rule of thumb is the 1:30 rule.[36] This means that the baseline will be equal to 1/30 of the distance to the nearest object included in the photograph.
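The 1:30 rule reduces to a single division. The short sketch below is an illustrative calculation only; it also halves the shift for anaglyph use, following the 1/60 guideline mentioned earlier for single-camera shifting.

```python
# Illustrative calculation of the 1:30 rule of thumb: the stereo base
# is about 1/30 of the distance to the nearest object in the scene
# (about 1/60 if the image will also be shown as a color anaglyph).
def stereo_base(nearest_distance, for_anaglyph=False):
    """Return the suggested camera separation, in the same units
    as nearest_distance."""
    divisor = 60 if for_anaglyph else 30
    return nearest_distance / divisor

print(stereo_base(30))     # 1.0  -> a subject 30 feet away: shift about 1 foot
print(stereo_base(2000))   # ~66.7 -> matches the 67-foot baseline discussed below
```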
The results of hyperstereo can be quite impressive,[37][38][39] and examples of hyperstereo can be found in vintage views.[40] This technique can be applied to 3D imaging of the Moon: one picture is taken at moonrise, the other at moonset, as the face of the Moon is centered towards the center of the Earth and the diurnal rotation carries the photographer around the perimeter. However, the results are rather poor,[41] and much better results can be obtained using alternative techniques.[41] This is why high quality published stereos of the Moon are done using libration,[42][43][44][45] the slight "wobbling" of the Moon on its axis relative to the Earth.[46] Similar techniques were used late in the 19th century to take stereo views of Mars and other astronomical subjects.[46]

[Image: Moon stereo from 1897 taken using libration. Anaglyph, red left. 3D red-cyan glasses are recommended to view this image correctly.]

Limitations of hyperstereo

Vertical alignment can become a big problem, especially if the terrain on which the two camera positions are placed is uneven. Movement of objects in the scene can make syncing two widely separated cameras a nightmare. When a single camera is moved between two positions, even subtle movements such as plants blowing in the wind and the movement of clouds can become a problem.[33] The wider the baseline, the more of a problem this becomes.

Pictures taken in this fashion take on the appearance of a miniature model taken from a short distance,[47][48][49] and those not familiar with such pictures often cannot be convinced that they show the real object. This is because we cannot see depth when looking at such scenes in real life, and our brains aren't equipped to deal with the artificial depth created by such techniques, so our minds tell us it must be a smaller object viewed from a short distance, which would have depth. Though most eventually realize it is, indeed, an image of a large object from far away, many find the effect bothersome.[50] This doesn't rule out using such techniques, but it is one of the factors that need to be considered when deciding whether or not such a technique should be used.

In movies and other forms of "3D" entertainment, hyperstereo may be used to simulate the viewpoint of a giant with eyes a hundred feet apart. The miniaturization would be just what the photographer (or designer, in the case of drawings or computer-generated images) had in mind. On the other hand, in the case of a massive ship flying through space, the impression that it is a miniature model is probably not what the film makers intended! Hyperstereo can also lead to cardboarding, an effect that creates stereos in which different objects seem well separated in depth, but the objects themselves seem flat. This is because parallax is quantized.[51]

[Image: Illustration of parallax multiplication limits, with point A at 30 and 2,000 feet.]

Illustration of the limits of parallax multiplication (refer to the image above; an ortho viewing method is assumed): The line represents the Z axis, so imagine that it is lying flat and stretching into the distance. If the camera is at X, point A is on an object at 30 feet, point B is on an object at 200 feet, point C is on the same object but 1 inch behind B, and point D is on an object 250 feet away. With a normal baseline, point A is clearly in the foreground, with B, C, and D all at stereo infinity.
With a one-foot baseline, which multiplies the parallax, there will be enough parallax to separate all four points, though the depth in the object containing B and C will still be subtle. If this object is the main subject, we may consider a baseline of 6 feet 8 inches, but then the object at A would need to be cropped out. Now imagine that the camera is at point Y: the object at A is now at 2,000 feet, point B is on an object at 2,170 feet, C is a point on the same object 1 inch behind B, and point D is on an object at 2,220 feet. With a normal baseline, all four points are now at stereo infinity. With a 67-foot baseline, the multiplied parallax allows us to see that all three objects are on different planes, yet points B and C, on the same object, appear to be on the same plane and all three objects appear flat. This is because there are discrete units of parallax, so at 2,170 feet the parallax between B and C is zero, and zero multiplied by any number is still zero.

A practical example

In the red-cyan anaglyph example below, a ten-meter baseline atop the roof ridge of a house was used to image the mountain. The two foothill ridges are about four miles (6.5 km) distant and are separated in depth from each other and the background. The baseline is still too short to resolve the depth of the two more distant major peaks from each other. Owing to various trees that appeared in only one of the images, the final image had to be severely cropped at each side and the bottom. In the wider image, taken from a different location, a single camera was walked about one hundred feet (30 m) between pictures. The images were converted to monochrome before combination.

[Image: Small anaglyphed image. 3D red-cyan glasses are recommended to view this image correctly.]

[Image: Long base line image showing prominent foothill ridges. 3D red-cyan glasses are recommended to view this image correctly.]

Shorter baseline for ultra closeups "Macro stereo"

When objects are photographed from closer than about 6 1/2 feet, a normal base will produce excessive parallax and thus exaggerated depth when using ortho viewing methods. At some point the parallax becomes so great that the image is difficult or even impossible to view. For such situations, it becomes necessary to reduce the baseline in keeping with the 1:30 rule. When still life scenes are stereographed, an ordinary single-lens camera can be moved using a slide bar or similar method to generate a stereo pair. Multiple views can be taken and the best pair selected for the desired viewing method. For moving objects, a more sophisticated approach is used. In the early 1970s, Realist Incorporated introduced the Macro Realist, designed to stereograph subjects 4 to 5 1/2 inches away, for viewing in Realist format viewers and projectors. It featured a 15 mm base and fixed focus.[52] It was invented by Clarence G. Henning.[53]
In recent years, cameras have been produced which are designed to stereograph subjects 10" to 20" away using print film, with a 27 mm baseline.[54] Another technique, usable with fixed-base cameras such as the Fujifilm FinePix Real 3D W1/W3, is to back off from the subject and use the zoom function to zoom in to a closer view, as was done in the image of the cake. This has the effect of reducing the effective baseline. Similar techniques could be used with paired digital cameras.

[Image: Closeup stereo of a cake photographed using a Fuji W3, taken by backing off several feet and then zooming in.]

Another way to take images of very small objects, "extreme macro", is to use an ordinary flatbed scanner. This is a variation on the shift technique in which the object is turned upside down and placed on the scanner, scanned, moved over and scanned again. This produces stereos of objects ranging from as large as about 6" across down to as small as a carrot seed. This technique goes back to at least 1995. See the article Scanography for more details.

[Image: A mineral specimen imaged with a scanner. Anaglyph, red left.]

In stereo drawings and computer generated stereo images, a smaller than normal baseline may be built into the constructed images to simulate a "bug's eye" view of the scene.

Baseline tailored to viewing method

The distance from which the picture is to be viewed determines the required separation between the cameras. This separation is called the stereo base, or stereo baseline, and results from the ratio of the distance to the image to the distance between the eyes (usually about 2.5 inches). In any case, the farther the screen is viewed from, the more the image will pop out; the closer the screen is viewed from, the flatter it will appear. Personal anatomical differences can be compensated for by moving closer to or farther from the screen.

To provide close emulation of natural vision for images viewed on a computer monitor, a fixed stereo base of 6 cm might be appropriate. This will vary depending on the size of the monitor and the viewing distance. For hyper stereo, a ratio smaller than 1:30 could be used. For example, if a stereo image is to be viewed on a computer monitor from a distance of 1000 mm, there will be an eye-to-view ratio of 1000/63, or about 16. To set the cameras the appropriate distance apart for the desired effect, the distance to the subject (say a person at a distance from the cameras of 3 meters) is divided by 16, which yields a stereo base of 188 mm between the cameras.

However, images optimized for a small screen viewed from a short distance will show excessive parallax when viewed with more ortho methods, such as a projected image or a head-mounted display, possibly causing eyestrain and headaches, or doubling, so pictures optimized for this viewing method may not be usable with other methods. Where images may also be used for anaglyph display, a narrower base, say 40 mm, will allow for less ghosting in the display.
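The monitor example above is a straightforward ratio calculation; a minimal sketch, illustrative only and reusing the article's 63 mm eye-separation figure, follows.

```python
# Illustrative sketch of the baseline-for-viewing-distance calculation:
# stereo base = subject distance / (viewing distance / eye separation).
EYE_SEPARATION_MM = 63.0

def eye_to_view_ratio(viewing_distance_mm):
    return viewing_distance_mm / EYE_SEPARATION_MM

def stereo_base_mm(subject_distance_mm, viewing_distance_mm):
    return subject_distance_mm / eye_to_view_ratio(viewing_distance_mm)

ratio = eye_to_view_ratio(1000)    # monitor viewed from 1 m -> about 16
base = stereo_base_mm(3000, 1000)  # subject at 3 m -> roughly 188-190 mm
print(round(ratio), round(base))   # 16 189 (the article rounds to 188)
```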
Variable base for "geometric stereo"

As mentioned previously, the goal of the photographer may be a reason for using a baseline that is larger than normal. Such is the case when, instead of trying to achieve a close emulation of natural vision, a stereographer may be trying to achieve geometric perfection. This approach means that objects are shown with the shape they actually have, rather than the way they are seen by humans. Objects at 25 to 30 feet, instead of having the subtle depth that one being there would see, or what would be recorded with a normal baseline, will have the much more dramatic depth that would be seen from 7 to 10 feet. So instead of seeing objects as one would with eyes 2 1/2" apart, they would be seen as they would appear if one's eyes were 12" apart. In other words, the baseline is chosen to produce the same depth effect regardless of the distance from the subject.

As with true ortho, this effect is impossible to achieve in a literal sense, since different objects in the scene will be at different distances and will thus show different amounts of parallax, but the geometric stereographer, like the ortho stereographer, attempts to come as close as possible. Achieving this could be as simple as using the 1:30 rule to find a custom base for every shot, regardless of distance, or it could involve using a more complicated formula.[55] This could be thought of as a form of hyperstereo,[56] but less extreme. As a result, it has all of the same limitations as hyperstereo. When objects are given enhanced depth, but not magnified to take up a larger portion of the view, there is a certain miniaturization effect. Of course, this may be exactly what the stereographer has in mind. While geometric stereo neither attempts nor achieves a close emulation of natural vision, there are valid reasons for this approach. It does, however, represent a very specialized branch of stereography.

Precise stereoscopic baseline calculation methods

Recent research has led to precise methods for calculating the stereoscopic camera baseline.[57] These techniques consider the geometry of the display/viewer and scene/camera spaces independently and can be used to reliably calculate a mapping of the scene depth being captured to a comfortable display depth budget. This frees the photographer to place their camera wherever they wish to achieve the desired composition, and then to use the baseline calculator to work out the camera inter-axial separation required to produce the desired effect. With this approach there is no guesswork in the stereoscopic setup once a small set of parameters has been measured; it can be used for photography and computer graphics, and the methods can be easily implemented in a software tool.

Multi-rig stereoscopic cameras

The precise methods for camera control have also allowed the development of multi-rig stereoscopic cameras, where different slices of scene depth are captured using different inter-axial settings;[58] the images of the slices are then composed together to form the final stereoscopic image pair. This allows important regions of a scene to be given better stereoscopic representation while less important regions are assigned less of the depth budget. It provides stereographers with a way to manage composition within the limited depth budget of each individual display technology.

Stereo Window

For any branch of stereoscopy the concept of the stereo window is important. If a scene is viewed through a window, the entire scene would normally be behind the window; if the scene is distant, it would be some distance behind the window, and if it is nearby, it would appear to be just beyond the window.
An object smaller than the window itself could even go through the window and appear partially or completely in front of it. The same applies to a part of a larger object that is smaller than the window. The goal of setting the stereo window is to duplicate this effect.

To truly understand the concept of window adjustment it is necessary to understand where the stereo window itself is. In the case of projected stereo, including "3D" movies, the window is the surface of the screen. With printed material the window is at the surface of the paper. When stereo images are seen by looking into a viewer, the window is at the position of the frame. In the case of virtual reality, the window seems to disappear as the scene becomes truly immersive.

In the case of paired images, moving the images further apart will move the entire scene back, and moving the images closer together will move the scene forward. Note that this does not affect the relative positions of objects within the scene, just their position relative to the window. Similar principles apply to anaglyph images and other stereoscopy techniques.

There are several considerations in deciding where to place the scene relative to the window. First, in the case of an actual physical window, the left eye will see less of the left side of the scene and the right eye will see less of the right side of the scene, because the view is partly blocked by the window frame. This principle is known as "less to the left on the left" or 3L, and is often used as a guide when adjusting the stereo window where all objects are to appear behind the window. When the images are moved further apart, the outer edges are cropped by the same amount, thus duplicating the effect of a window frame.

Another consideration involves deciding where individual objects are placed relative to the window. It would be normal for the frame of an actual window to partly overlap or "cut off" an object that is behind the window. Thus an object behind the stereo window might be partly cut off by the frame or side of the stereo window, so the stereo window is often adjusted to place objects cut off by the window behind the window. If an object, or part of an object, is not cut off by the window, then it could be placed in front of it, and the stereo window may be adjusted with this in mind. This effect is how swords, bugs, flashlights, etc. often seem to "come off the screen" in 3D movies. If an object which is cut off by the window is placed in front of it, an effect results that is somewhat unnatural and is usually considered undesirable; this is often called a "window violation". This can best be understood by returning to the analogy of an actual physical window. An object in front of the window would not be cut off by the window frame but would, rather, continue to the right and/or left of it. This can't be duplicated in stereography techniques other than virtual reality, so the stereo window will normally be adjusted to avoid window violations. There are, however, circumstances where they could be considered permissible.
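For paired photographic images, these adjustments come down to the shifting and cropping described above. The sketch below is illustrative only; the file names and the shift amount are hypothetical, and it assumes an aligned pair intended for parallel (left image to left eye) viewing.

```python
# Minimal sketch: move the whole scene back relative to the stereo window
# by "moving the images apart", i.e. cropping the outer edge of each
# image by the same amount. File names and shift value are hypothetical.
from PIL import Image

shift = 12  # pixels; larger values push the scene further behind the window

left = Image.open("left.jpg")
right = Image.open("right.jpg")
w, h = left.size

# Crop the left edge of the left image and the right edge of the right
# image; homologous points move apart, so the scene recedes behind the
# window. Cropping the opposite (inner) edges instead brings it forward.
left_adj = left.crop((shift, 0, w, h))
right_adj = right.crop((0, 0, w - shift, h))

left_adj.save("left_window_adjusted.jpg")
right_adj.save("right_window_adjusted.jpg")
```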
A third consideration is viewing comfort. If the window is adjusted too far back, the right and left images of distant parts of the scene may be more than 2.5" apart, requiring that the viewer's eyes diverge in order to fuse them. This results in image doubling and/or viewer discomfort. In such cases a compromise is necessary between viewing comfort and the avoidance of window violations.

In stereo photography, window adjustment is accomplished by shifting/cropping the images; in other forms of stereoscopy, such as drawings and computer generated images, the window is built into the design of the images as they are generated. It is by design that in CGI movies certain images are behind the screen whereas others are in front of it.

Bibliography

Footnotes

References

1. ^ "The Kaiser (Emperor) Panorama" (http://ignomini.com/photographica/stereophotovintage/kaiserpanorama/kaiserpanorama.html). June 9, 2012.
2. ^ στερεός (http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0057%3Aentry%3Dstereo%2Fs), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library.
3. ^ σκοπέω (http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.04.0057%3Aentry%3Dskope%2Fw), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library.
4. ^ Flight Simulation, J. M. Rolfe and K. J. Staples, Cambridge University Press, 1986, page 134.
5. ^ a b Contributions to the Physiology of Vision.—Part the First. On some remarkable, and hitherto unobserved, Phenomena of Binocular Vision. By Charles Wheatstone, F.R.S., Professor of Experimental Philosophy in King's College, London. Stereoscopy.com (http://www.stereoscopy.com/library/wheatstone-paper1838.html).
6. ^ Welling, William. Photography in America, page 23.
7. ^ Stereo Realist Manual, p. 375.
8. ^ Stereo Realist Manual, pp. 377–379.
9. ^ Fay Huang, Reinhard Klette, and Karsten Scheibe: Panoramic Imaging (Sensor-Line Cameras and Laser Range-Finders). Wiley & Sons, Chichester, 2008.
10. ^ Dornaika, F.; Hammoudi, K. (2009). "Extracting 3D Polyhedral Building Models from Aerial Images using a Featureless and Direct Approach" (http://www.mva-org.jp/Proceedings/2009CD/papers/12-02.pdf) (PDF). Machine Vision Applications. Proc. IAPR/MVA. Retrieved 2010-09-26.
11. ^ "Eyecare Trust" (http://www.eyecaretrust.org.uk/view.php?item_id=566). Eyecare Trust. Retrieved 29 March 2012.
12. ^ "Daily Telegraph Newspaper" (http://www.telegraph.co.uk/technology/news/7887422/Six-million-Britons-cant-see-3DTV.html). The Daily Telegraph. Retrieved 29 March 2012.
13. ^ "Understanding Requirements for High-Quality 3D Video: A Test in Stereo Perception" (http://3droundabout.com/2011/12/5788/understanding-requirements-for-high-quality-3d-video-a-test-in-stereoperception.html) (19 December 2011). 3droundabout.com. Retrieved 29 March 2012.
14. ^ The Logical Approach to Seeing 3D Pictures (http://www.vision3d.com/3views.html). www.vision3d.com by Optometrists Network. Retrieved 2009-08-21.
15. ^ How To Freeview Stereo (3D) Images (http://www.angelfire.com/ca/erker/freeview.html). Greg Erker. Retrieved 2009-08-21.
16. ^ How to View Photos on This Site (http://www.3dphoto.net/text/viewing/technique.html). Stereo Photography – The World in 3D. Retrieved 2009-08-21.
17. ^ "Seeing is believing"; Cinema Technology, Vol 24, No. 1, March 2011.
18. ^ http://www.dpvotheatrical.com/
19. ^ "Glossary" (http://www.berezin.com/3d/Glossary.htm). June 8, 2012.
20. ^ "openKMQ" (http://www.pixelpartner.de/openKMQen.htm). June 8, 2012.
^ "Rocky Mountain Memories" (http://www.rmm3d.com/3d.encyclopedia/hyper.html). Rmm3d.com. Retrieved 2012-03-04. 57. ^ Jones, G.R.; Lee, D., Holliman, N.S., Ezra, D. (2001). "Controlling perceived depth in stereoscopic images" (http://www.dur.ac.uk/n.s.holliman/Presentations/EI4297A-07Protocols.pdf) (PDF). Stereoscopic Displays and Applications. Proc. SPIE 4297A. 58. ^ Holliman, N. S. (2004). "Mapping perceived depth to regions of interest in stereoscopic images" (http://www.dur.ac.uk/n.s.holliman/Presentations/EI5291A-12.pdf) (PDF). Stereoscopic Displays and Applications. Proc. SPIE 5291. en.wikipedia.org/w/index.php?title=Stereoscopy&oldid=559166929&printable=yes 18/19 3/5/14 Stereoscopy - Wikipedia, the free encyclopedia Sources Simmons, Gordon (March/April 1996). "Clarence G. Henning: The Man Behind the Macro". Stereo World 23 (1): 37– 43. Willke, Mark A.; Zakowski, Ron (March/April 1996). "A Close Look into the Realist Macro Stereo System". Stereo World 23 (1): 14–35. Morgan, Willard D.; Lester, Henry M. (October 1954). Stereo Realist Manual. and 14 contributors. New York: Morgan & Lester. OCLC 789470 (//www.worldcat.org/oclc/789470). External links Stereoscopy (http://www.dmoz.org/Arts/Photography/Techniques_and_Styles/3D/) on the Open Directory Project The Quantitative Analysis of Stereoscopic Effect (http://www.vicgi.com/lenticular-printing-quantitive-analysis.html) Durham Visualization Laboratory stereoscopic imaging methods and software tools (http://www.binocularity.org) University of Washington Libraries Digital Collections Stereocard Collection (http://content.lib.washington.edu/stereoweb/) Stereographic Views of Louisville and Beyond, 1850s–1930 (http://digital.library.louisville.edu/cdm/landingpage/collection/stereographs/) from the University of Louisville Libraries Stereoscopy (http://www.flickr.com/photos/boston_public_library/sets/72157604192771132/) on Flickr Extremely rare and detailed Stereoscopic 3D scenes (http://www.panoramio.com/user/63737/tags/3D) International Stereoscopic Union (http://www.ISU3D.org) 3D STEREO PORTAL Videos & Photos Collection (http://www.3dstreaming.it) American University in Cairo Rare Books and Special Collections Digital Library Underwood & Underwood Egypt Stereoviews Collection (http://digitalcollections.aucegypt.edu/cdm/landingpage/collection/p15795coll8) Views of California and the West, ca. 1867–1903 (http://www.oac.cdlib.org/view? docId=tf3489n9sv;developer=local;style=oac4;doc.view=itemsStereo), The Bancroft Library The Ten Commandments of Stereoscopy (http://www.stereoscopynews.com/download/software/923-qthe-tencommandments-of-stereoscopy.html), article about taking good stereoscopy images (photo and video) Moriarty, Philip. "3D Glasses" (http://www.sixtysymbols.com/videos/3d.htm). Sixty Symbols. Brady Haran for the University of Nottingham. Retrieved from "http://en.wikipedia.org/w/index.php?title=Stereoscopy&oldid=559166929" Categories: Stereoscopy This version of the page has been revised. Besides normal editing, the reason for revision may have been that this version contains factual inaccuracies, vandalism, or material not compatible with the Creative Commons Attribution-ShareAlike License. en.wikipedia.org/w/index.php?title=Stereoscopy&oldid=559166929&printable=yes 19/19