Another Introduction to Ray Tracing


PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Wed, 24 Jul 2013 18:33:05 UTC

Contents

High Level View
• 3D computer graphics
• Rendering
• Ray casting
• Ray tracing
• 3D projection

Light
• Light
• Radiance
• Photometry
• Shadow
• Umbra
• Distance fog

Material Properties
• Shading
• Diffuse reflection
• Lambertian reflectance
• Gouraud shading
• Oren–Nayar reflectance model
• Phong shading
• Blinn–Phong shading model
• Specular reflection
• Specular highlight
• Retroreflector
• Texture mapping
• Bump mapping
• Bidirectional reflectance distribution function
• Physics of reflection
• Rendering reflection
• Reflectivity
• Fresnel equations
• Transparency and translucency
• Rendering transparency
• Refraction
• Total internal reflection
• List of refractive indices
• Schlick's approximation
• Bidirectional scattering distribution function

Object Intersection
• Line–sphere intersection
• Line–plane intersection
• Point in polygon

Efficiency Schemes
• Spatial index
• Grid
• Octree

Global Illumination
• Global illumination
• Rendering equation
• Distributed ray tracing
• Monte Carlo method
• Unbiased rendering
• Path tracing
• Radiosity
• Photon mapping
• Metropolis light transport

Other Topics
• Anti-aliasing
• Ambient occlusion
• Caustics
• Subsurface scattering
• Motion blur
• Beam tracing
• Cone tracing
• Ray tracing hardware

Ray Tracers
• 3Delight
• Amiga Reflections
• Autodesk 3ds Max
• Anim8or
• ASAP
• Blender
• Brazil R/S
• BRL-CAD
• Form-Z
• Holomatix Rendition
• Imagine
• Indigo Renderer
• Kerkythea
• LightWave 3D
• LuxRender
• Manta Interactive Ray Tracer
• Maxwell Render
• Mental ray
• Modo
• OptiX
• PhotoRealistic RenderMan
• Picogen
• Pixie
• POV-Ray
• Radiance
• Real3D
• Realsoft 3D
• Sunflow
• TurboSilver
• V-Ray
• YafaRay

References
• Article Sources and Contributors
• Image Sources, Licenses and Contributors
• Article Licenses: License


High Level View

3D computer graphics


3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D applications may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.


History

William Fetter was credited with coining the term computer graphics in 1961[1][2] to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand, produced by Ed Catmull and Fred Parke at the University of Utah.

Overview

3D computer graphics creation falls into three basic phases:
• 3D modeling – the process of forming a computer model of an object's shape
• Layout and animation – the motion and placement of objects within a scene
• 3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate the image

Modeling

Modeling is the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices (or vertexes) that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A four-point polygon is a quad, and a polygon of more than four points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
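Since the polygon structure matters this much, it helps to see the typical storage layout. Below is a minimal sketch in Python of the common indexed representation: a list of vertex positions plus triangles that index into it. All names and the sample data are illustrative, not taken from any particular modeling package.

    # A cube as an indexed triangle mesh: shared vertices, index triples.
    cube_vertices = [
        (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # four bottom corners
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # four top corners
    ]

    # Each face is split into triangles; each triangle stores three indices
    # into cube_vertices rather than repeating the coordinates themselves.
    cube_triangles = [
        (0, 2, 1), (0, 3, 2),   # bottom face
        (4, 5, 6), (4, 6, 7),   # top face
        # ... remaining four faces omitted for brevity
    ]

    for a, b, c in cube_triangles:
        print(cube_vertices[a], cube_vertices[b], cube_vertices[c])

Sharing vertices between triangles keeps the data compact and, more importantly, keeps the surface connected: moving one vertex deforms every polygon that references it, which is what makes such meshes suitable for animation.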

Layout and animation

Before rendering into an image, objects must be placed (laid out) in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. Popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with modeling, physical simulation is another way of specifying motion.

Rendering

Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying some kind of style, as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions.

Examples of 3D rendering


Left: A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay. Center: A 3D model of a Dunkerque-class battleship rendered with flat shading. Right: During the 3D rendering step, the number of reflections "light rays" can take, as well as various other attributes, can be tailored to achieve a desired visual effect. Rendered with Cobalt.

Communities There are a multitude of websites designed to help educate and support 3D graphic artists. Some are managed by software developers and content providers, but there are standalone sites as well. These communities allow for members to seek advice, post tutorials, provide product reviews or post examples of their own work.

Distinction from photorealistic 2D graphics

Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photorealistic effects without the use of filters. See also still life.

References

[2] Computer Graphics, comphist.org (http://www.comphist.org/computing_history/new_page_6.htm)

External links

• A Critical History of Computer Graphics and Animation (http://accad.osu.edu/~waynec/history/lessons.html)
• How Stuff Works - 3D Graphics (http://computer.howstuffworks.com/3dgraphics.htm)
• History of Computer Graphics series of articles (http://hem.passagen.se/des/hocg/hocg_1960.htm)


Rendering

Rendering is the process of generating an image from a model (or models in what collectively could be called a scene file) by means of computer programs. The results of such a model can also be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.

Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined in the graphics pipeline, along with a rendering device such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing program to produce final video output.

Rendering is one of the major sub-topics of 3D computer graphics, and in practice is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.

Rendering has uses in architecture, video games, simulators, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. As a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development.

A variety of rendering techniques applied to a single 3D scene

An image created by using POV-Ray 3.6.

In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.


Usage

When the pre-image (usually a wireframe sketch) is complete, rendering is used, adding bitmap textures or procedural textures, lights, bump mapping, and relative position to other objects. The result is a completed image the consumer or intended viewer sees. For movie animations, several images (frames) must be rendered and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this.

Features

A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.
• shading — how the color and brightness of a surface varies with lighting
• texture-mapping — a method of applying detail to surfaces
• bump-mapping — a method of simulating small-scale bumpiness on surfaces
• fogging/participating medium — how light dims when passing through non-clear atmosphere or air
• shadows — the effect of obstructing light
• soft shadows — varying darkness caused by partially obscured light sources
• reflection — mirror-like or highly glossy reflection
• transparency (optics), transparency (graphic) or opacity — sharp transmission of light through solid objects
• translucency — highly scattered transmission of light through solid objects
• refraction — bending of light associated with transparency
• diffraction — bending, spreading and interference of light passing by an object or aperture that disrupts the ray
• indirect illumination — surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
• caustics (a form of indirect illumination) — reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
• depth of field — objects appear blurry or out of focus when too far in front of or behind the object in focus
• motion blur — objects appear blurry due to high-speed motion, or the motion of the camera
• non-photorealistic rendering — rendering of scenes in an artistic style, intended to look like a painting or drawing

Image rendered with computer aided design.


Techniques

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image. Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted. Therefore, four loose families of more-efficient light transport modelling techniques have emerged:
• rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;
• ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts;
• ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower; and
• radiosity, the fourth type, is not usually implemented as a rendering technique, but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.

Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost. Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.

Scanline rendering and rasterisation

A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.

If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.

Rendering of the European Extremely Large Telescope.

Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.

The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image used face by face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect.
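To make the object-order loop concrete, here is a small Python sketch of rasterizing a single triangle with vertex-color blending via edge functions. The framebuffer, sizes, and names are invented for illustration; real rasterizers add clipping, depth testing, and sub-pixel precision.

    # Object-order rasterization sketch: one triangle, vertex colors blended.
    WIDTH, HEIGHT = 80, 40
    framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def edge(ax, ay, bx, by, px, py):
        # Twice the signed area of triangle (a, b, p); its sign tells
        # which side of edge a->b the point p lies on.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def draw_triangle(v0, v1, v2, c0, c1, c2):
        (x0, y0), (x1, y1), (x2, y2) = v0, v1, v2
        area = edge(x0, y0, x1, y1, x2, y2)
        if area == 0:
            return  # degenerate triangle covers no pixels
        # Only pixels inside the bounding box can possibly be covered.
        min_x, max_x = max(0, min(x0, x1, x2)), min(WIDTH - 1, max(x0, x1, x2))
        min_y, max_y = max(0, min(y0, y1, y2)), min(HEIGHT - 1, max(y0, y1, y2))
        for y in range(min_y, max_y + 1):
            for x in range(min_x, max_x + 1):
                # Barycentric weights from the three edge functions.
                w0 = edge(x1, y1, x2, y2, x, y) / area
                w1 = edge(x2, y2, x0, y0, x, y) / area
                w2 = edge(x0, y0, x1, y1, x, y) / area
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                    framebuffer[y][x] = tuple(
                        w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

    draw_triangle((5, 5), (70, 10), (30, 35), (255, 0, 0), (0, 255, 0), (0, 0, 255))

The barycentric weights that decide coverage double as the blending factors, which is why this per-vertex-color style comes almost for free once the edge functions are in place.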

Ray casting In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the colour value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged. Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two. Raycasting is primarily used for realtime simulations, such as those used in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake the details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The resulting surfaces have a characteristic 'flat' appearance when no additional tricks are used, as if objects in the scene were all painted with matte finish.
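A sketch of that pixel-by-pixel parse in Python follows. The scene list, `ray_through_pixel`, and each object's `intersect` method (returning a hit distance or None) are assumed stand-ins, not a real API.

    # Ray casting skeleton: one primary ray per pixel, nearest hit wins.
    def cast_all(scene, camera_origin, ray_through_pixel, width, height):
        image = [[(0, 0, 0)] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                direction = ray_through_pixel(x, y)   # unit vector for this pixel
                nearest_t, nearest_obj = float("inf"), None
                for obj in scene:                     # brute force: test every object
                    t = obj.intersect(camera_origin, direction)
                    if t is not None and 0 < t < nearest_t:
                        nearest_t, nearest_obj = t, obj
                if nearest_obj is not None:
                    # Simplest variant: the object's flat color becomes the pixel.
                    image[y][x] = nearest_obj.color
        return image

The more sophisticated shading variants described above only change the last step; the per-pixel nearest-hit search stays the same.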

Ray tracing

Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, and Metropolis light transport, but semi-realistic methods are also in use, such as Whitted-style ray tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects.[1]

In a final, production-quality rendering of a ray-traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness.

Spiral Sphere and Julia, Detail, a computer-generated image created by visual artist Robert W. McGregor using only POV-Ray 3.6 and its built-in scene description language.

Once the ray either encounters a light source, or, more probably, once a set limit on the number of bounces has been reached, the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces are evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel.

In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments.

As a brute-force method, ray tracing has been too slow to consider for real time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high-quality (perhaps even photorealistic) footage is required. However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware-accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing.

Radiosity

Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms. The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.

The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high-quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.

In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model. Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate.

Prior to the standardization of rapid radiosity calculation, some graphic artists used a technique referred to loosely as false radiosity, darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity, or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.

Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting without seriously impacting the overall rendering time per frame. Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning to end to create a large number of well-known recent feature-length animated 3D-cartoon films.
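The "bounce light until some limit is reached" idea can be sketched as repeatedly applying B = E + ρ(FB) over a set of patches, where F is a precomputed form-factor matrix. Everything below, including the tiny three-patch scene and its form factors, is invented for illustration.

    # Radiosity sketch: Jacobi-style iteration of B = E + rho * F @ B.
    import numpy as np

    emission = np.array([10.0, 0.0, 0.0])        # patch 0 is the light source
    reflectance = np.array([0.0, 0.8, 0.5])      # diffuse reflectivity per patch
    # Hypothetical form factors F[i][j]: fraction of light leaving patch j
    # that arrives at patch i (a real solver computes these from geometry).
    form_factors = np.array([[0.0, 0.2, 0.1],
                             [0.2, 0.0, 0.3],
                             [0.1, 0.3, 0.0]])

    radiosity = emission.copy()
    for bounce in range(50):                      # the 'recursion limit' above
        radiosity = emission + reflectance * (form_factors @ radiosity)

    print(radiosity)  # settled patch radiosities, reusable from any viewpoint

Because the result is stored per patch rather than per pixel, the same solution serves every camera position, which is exactly the viewpoint independence the text describes.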


Sampling and filtering

One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.

If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.
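In practice, the low-pass filtering is commonly approximated by supersampling: shading several jittered positions inside each pixel and averaging them (a box filter). A minimal sketch, with `shade` standing in for whatever the renderer evaluates at a point on the image plane:

    # Supersampling sketch: average N jittered samples per pixel (box filter).
    import random

    def antialiased_pixel(shade, px, py, samples=16):
        total = 0.0
        for _ in range(samples):
            # Jittered sample position inside this pixel's unit square.
            sx = px + random.random()
            sy = py + random.random()
            total += shade(sx, sy)
        return total / samples   # averaging acts as a crude low-pass filter

    # Example: a hard vertical edge at x = 10.5 yields a partially covered pixel.
    print(antialiased_pixel(lambda x, y: 1.0 if x < 10.5 else 0.0, 10, 0))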

Optimization

Optimizations used by an artist when a scene is being developed

Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.

Common optimizations for real time rendering

For real-time rendering, it is appropriate to simplify one or more common approximations and tune the renderer to the exact parameters of the scenery in question, to get the most 'bang for the buck'.

Academic core

The implementation of a realistic renderer always has some basic element of physical simulation or emulation — some computation which resembles or abstracts a real physical process. The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community. The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques. Rendering research is concerned with both the adaptation of scientific models and their efficient application.


The rendering equation

This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation. In its usual form it reads

$$L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i$$

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light, the reflected light being the sum of the incoming light (Li) from all directions, multiplied by the surface reflection (fr) and the cosine of the incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport' — all the movement of light — in a scene.
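Renderers typically attack this integral numerically. As a hedged sketch (the sampling, BRDF, and incoming-radiance callables are assumed stand-ins), a Monte Carlo estimator of the reflected term looks like:

    # Estimate L_o ~= L_e + (1/N) * sum of f_r * L_i * cos(theta) / pdf
    def outgoing_radiance(L_e, sample_direction, brdf, incoming_radiance,
                          cos_theta, n_samples=64):
        total = 0.0
        for _ in range(n_samples):
            w_i, pdf = sample_direction()      # random direction and its density
            total += brdf(w_i) * incoming_radiance(w_i) * cos_theta(w_i) / pdf
        return L_e + total / n_samples

Dividing each sample by the probability density it was drawn with is what keeps the estimate unbiased, and evaluating `incoming_radiance` recursively is precisely what turns this one-bounce estimator into path tracing.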

The bidirectional reflectance distribution function

The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:

$$f_r(\mathbf{x}, \omega_i, \omega_o) = \frac{\mathrm{d}L_r(\mathbf{x}, \omega_o)}{L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i}$$

Light interaction is often approximated by the even simpler models diffuse reflection and specular reflection, although both can also be BRDFs.

Geometric optics

Rendering is practically exclusively concerned with the particle aspect of light physics — known as geometric optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.

Visual perception

Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate an almost infinite range of light brightness and color, but current displays — movie screen, computer monitor, etc. — cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest which short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. This related subject is tone mapping.

Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.

Rendering for movies often takes place on a network of tightly connected computers known as a render farm. The current state of the art in 3-D image description for movie creation is the mental ray scene description language designed at mental images and the RenderMan shading language designed at Pixar[2] (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators). Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to miss one or more of the often needed features like good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include IPR and hardware rendering/shading.
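As a concrete instance of the tone mapping mentioned above, the simple Reinhard operator compresses unbounded scene luminance into the displayable range [0, 1):

    def reinhard(luminance):
        # Maps [0, inf) smoothly into [0, 1); highlights compress heavily
        # while dark values stay near-linear.
        return luminance / (1.0 + luminance)

    print(reinhard(0.5), reinhard(100.0))  # ~0.33, ~0.99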


Chronology of important published ideas
• 1968 Ray casting [3]
• 1970 Scanline rendering [4]
• 1971 Gouraud shading [5]
• 1974 Texture mapping []
• 1974 Z-buffering []
• 1975 Phong shading [6]
• 1976 Environment mapping [7]
• 1977 Shadow volumes [8]
• 1978 Shadow buffer [9]
• 1978 Bump mapping [10]
• 1980 BSP trees [11]
• 1980 Ray tracing [12]
• 1981 Cook shader [13]
• 1983 MIP maps [14]
• 1984 Octree ray tracing [15]
• 1984 Alpha compositing [16]
• 1984 Distributed ray tracing [17]
• 1984 Radiosity [18]
• 1985 Hemicube radiosity [19]
• 1986 Light source tracing [20]
• 1986 Rendering equation [21]
• 1987 Reyes rendering [22]
• 1991 Hierarchical radiosity [23]
• 1993 Tone mapping [24]
• 1993 Subsurface scattering [25]
• 1995 Photon mapping [26]
• 1997 Metropolis light transport [27]
• 1997 Instant Radiosity [28]
• 2002 Precomputed Radiance Transfer [29]

Rendering of an ESTCube-1 satellite.

Books and summaries
• Pharr, Matt; Humphreys, Greg (2004). Physically based rendering: from theory to implementation. Amsterdam: Elsevier/Morgan Kaufmann. ISBN 0-12-553180-X.
• Shirley, Peter; Morley, R. Keith (2003). Realistic ray tracing (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-198-5.
• Dutré, Philip; Bekaert, Philippe; Bala, Kavita (2003). Advanced global illumination. Natick, Mass.: A K Peters. ISBN 1-56881-177-2.
• Akenine-Möller, Tomas; Haines, Eric (2004). Real-time rendering (2 ed.). Natick, Mass.: AK Peters. ISBN 1-56881-182-9.
• Strothotte, Thomas; Schlechtweg, Stefan (2002). Non-photorealistic computer graphics: modeling, rendering, and animation (2 ed.). San Francisco, CA: Morgan Kaufmann. ISBN 1-55860-787-0.
• Gooch, Bruce; Gooch, Amy (2001). Non-photorealistic rendering. Natick, Mass.: A K Peters. ISBN 1-56881-133-0.
• Jensen, Henrik Wann (2001). Realistic image synthesis using photon mapping. Natick, Mass.: AK Peters. ISBN 1-56881-147-0.
• Blinn, Jim (1996). Jim Blinn's corner: a trip down the graphics pipeline. San Francisco, Calif.: Morgan Kaufmann Publishers. ISBN 1-55860-387-5.
• Glassner, Andrew S. (2004). Principles of digital image synthesis (2 ed.). San Francisco, Calif.: Kaufmann. ISBN 1-55860-276-3.
• Cohen, Michael F.; Wallace, John R. (1998). Radiosity and realistic image synthesis (3 ed.). Boston, Mass.: Academic Press Professional. ISBN 0-12-178270-0.
• Foley, James D.; Van Dam; Feiner; Hughes (1990). Computer graphics: principles and practice (2 ed.). Reading, Mass.: Addison-Wesley. ISBN 0-201-12110-7.
• Andrew S. Glassner, ed. (1989). An introduction to ray tracing (3 ed.). London: Acad. Press. ISBN 0-12-286160-4.
• Description of the 'Radiance' system [30]

References

[2] A brief introduction to RenderMan (http://portal.acm.org/citation.cfm?id=1185817&jmp=abstract&coll=GUIDE&dl=GUIDE)
[30] http://radsite.lbl.gov/radiance/papers/sg94.1/

External links

• SIGGRAPH (http://www.siggraph.org/) The ACM's special interest group in graphics, the largest academic and professional association and conference.
• List of links to (recent) SIGGRAPH papers (and some others) on the web (http://www.cs.brown.edu/~tor/)

Ray casting

Ray casting is the use of ray-surface intersection tests to solve a variety of problems in computer graphics. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering CSG models.[1]

Usage

Ray casting can refer to:
• the general problem of determining the first object intersected by a ray,[2]
• a technique for hidden surface removal based on finding the first intersection of a ray cast from the eye through each pixel of an image,
• a non-recursive ray tracing rendering algorithm that only casts primary rays, or
• a direct volume rendering method, also called volume ray casting.

Although "ray casting" and "ray tracing" were often used interchangeably in early computer graphics literature,[3] more recent usage tries to distinguish the two.[4] The distinction is that ray casting is a rendering algorithm that never recursively traces secondary rays, whereas other ray tracing-based rendering algorithms may.


Concept

Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of ray tracing. Ray tracing-based rendering algorithms operate in image order to render three dimensional scenes to two dimensional images. Geometric rays are traced from the eye of the observer to sample the light (radiance) travelling toward the observer from the ray direction. The speed and simplicity of ray casting comes from computing the color of the light without recursively tracing additional rays that sample the radiance incident on the point that the ray hit. This eliminates the possibility of accurately rendering reflections, refractions, or the natural falloff of shadows; however all of these elements can be faked to a degree, by creative use of texture maps or other methods. The high speed of calculation made ray casting a handy rendering method in early real-time 3D video games.

In nature, a light source emits a ray of light that travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons travelling along the same path. At this point, any combination of three things might happen with this light ray: absorption, reflection, and refraction. The surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Between absorption, reflection, and refraction, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, and reflective properties are again calculated based on the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image. Attempting to simulate this real-world process of tracing light rays using a computer can be considered extremely wasteful, as only a minuscule fraction of the rays in a scene would actually reach the eye.

The first ray casting algorithm used for rendering was presented by Arthur Appel in 1968.[5] The idea behind ray casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray; think of an image as a screen door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modelling techniques and easily rendered.
An early use of Appel's ray casting rendering algorithm was by Mathematical Applications Group, Inc., (MAGI) of Elmsford, New York.[6]

Ray casting in computer games

Wolfenstein 3-D

The world in Wolfenstein 3-D is built from a square-based grid of uniform-height walls meeting solid-coloured floors and ceilings. In order to draw the world, a single ray is traced for every column of screen pixels, and a vertical slice of wall texture is selected and scaled according to where in the world the ray hits a wall and how far it travels before doing so.[7] The purpose of the grid-based levels is twofold: ray-to-wall collisions can be found more quickly since the potential hits become more predictable, and memory overhead is reduced. However, encoding wide-open areas takes extra space.
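The per-column grid walk is usually implemented as a DDA (digital differential analyzer) traversal, stepping from cell boundary to cell boundary until a wall cell is entered. A sketch, with the tiny map and all names invented for illustration:

    # Grid ray cast sketch (Wolfenstein-style): step cell to cell until a wall.
    import math

    MAP = ["#####",
           "#...#",
           "#.#.#",
           "#...#",
           "#####"]            # '#' = wall cell, '.' = empty cell

    def cast(px, py, angle):
        dx, dy = math.cos(angle), math.sin(angle)
        cell_x, cell_y = int(px), int(py)
        step_x = 1 if dx > 0 else -1
        step_y = 1 if dy > 0 else -1
        # Distance along the ray between two successive x (or y) grid lines.
        delta_x = abs(1.0 / dx) if dx else float("inf")
        delta_y = abs(1.0 / dy) if dy else float("inf")
        # Distance along the ray to the first x / y grid line.
        side_x = ((cell_x + 1 - px) if dx > 0 else (px - cell_x)) * delta_x
        side_y = ((cell_y + 1 - py) if dy > 0 else (py - cell_y)) * delta_y
        while True:
            if side_x < side_y:               # next crossing is a vertical line
                dist, side_x, cell_x = side_x, side_x + delta_x, cell_x + step_x
            else:                             # next crossing is a horizontal line
                dist, side_y, cell_y = side_y, side_y + delta_y, cell_y + step_y
            if MAP[cell_y][cell_x] == "#":
                return dist                   # wall-slice height scales as 1/dist

    print(cast(1.5, 1.5, 0.3))

Because the walk only ever visits cells the ray actually crosses, each screen column costs a handful of steps regardless of how many walls the map contains, which is the speed the text attributes to the grid.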


Comanche series

The so-called "Voxel Space" engine developed by NovaLogic for the Comanche games traces a ray through each column of screen pixels and tests each ray against points in a heightmap. Then it transforms each element of the heightmap into a column of pixels, determines which are visible (that is, have not been occluded by pixels that have been drawn in front), and draws them with the corresponding color from the texture map.[8]
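A sketch of that column walk (terrain functions, constants, and names invented for illustration): march along the ray front to back, project each terrain sample to a screen row, and draw only what rises above everything already drawn.

    # Voxel-space column sketch: nearer terrain first, track the horizon.
    import math

    def render_column(height_at, color_at, cam_x, cam_y, cam_z, dir_x, dir_y,
                      screen_h=120, scale=100.0, max_dist=300):
        column = [None] * screen_h
        horizon = screen_h                 # lowest screen row still unfilled
        z = 1.0
        while z < max_dist:
            wx, wy = cam_x + dir_x * z, cam_y + dir_y * z
            # Perspective: higher / nearer terrain projects higher on screen.
            row = max(0, int((cam_z - height_at(wx, wy)) / z * scale + screen_h / 2))
            if row < horizon:              # visible: pokes above prior samples
                for y in range(row, horizon):
                    column[y] = color_at(wx, wy)
                horizon = row
            z += 1.0
        return column

    # Example with a synthetic rolling terrain and a flat green texture:
    col = render_column(lambda x, y: 20 * math.sin(x * 0.1),
                        lambda x, y: (50, 120, 40), 0, 0, 60, 1, 0)

The per-column horizon variable is what performs the occlusion test the text describes: anything that projects below it has already been hidden by nearer terrain.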

Computational geometry setting

In computational geometry, the ray casting problem is also known as the ray shooting problem and may be stated as the following query problem: given a set of objects in d-dimensional space, preprocess them into a data structure so that for each query ray the first object hit by the ray can be found quickly. The problem has been investigated for various settings: space dimension, types of objects, restrictions on query rays, etc.[9] One technique is to use a sparse voxel octree.

References

[5] "Ray-tracing and other Rendering Approaches" (http://nccastaff.bournemouth.ac.uk/jmacey/CGF/slides/RayTracing4up.pdf) (PDF), lecture notes, MSc Computer Animation and Visual Effects, Jon Macey, University of Bournemouth
[6] Goldstein, R. A., and R. Nagel. 3-D visual simulation. Simulation 16(1), pp. 25–31, 1971.
[7] Wolfenstein-style ray casting tutorial (http://www.permadi.com/tutorial/raycast/) by F. Permadi
[8] Andre LaMothe. Black Art of 3D Game Programming. 1995, ISBN 1-57169-004-2, pp. 14, 398, 935-936, 941-943.
[9] "Ray shooting, depth orders and hidden surface removal", by Mark de Berg, Springer-Verlag, 1993, ISBN 3-540-57020-9, 201 pp.

External links

• Raycasting planes in WebGL with source code (http://adrianboeing.blogspot.com/2011/01/raycasting-two-planes-in-webgl.html)
• Raycasting (http://leftech.com/raycaster.htm)
• Interactive raycaster for the Commodore 64 in 254 bytes (with source code) (http://pouet.net/prod.php?which=61298)


Ray tracing

In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television visual effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).

This recursive ray tracing of a sphere demonstrates the effects of shallow depth of field, area light sources and diffuse interreflection.

Algorithm overview

Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it. Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.

The ray tracing algorithm builds an image by extending rays into a scene

Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.

It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded. Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.

Detailed description of ray tracing computer algorithm and its genesis

What happens in nature

In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength colour in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%.

From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.

Ray casting algorithm

The first ray tracing algorithm used for rendering was presented by Arthur Appel[1] in 1968. This algorithm has since been termed "ray casting". The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.


Recursive ray tracing algorithm

The next important research breakthrough came from Turner Whitted in 1979.[2] Previous algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Whitted continued the process. When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow.[3] A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material. A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it. This recursive ray tracing added more realism to ray traced images.

Ray tracing can create realistic images.
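Whitted's scheme can be outlined in code. The sketch below is not any particular renderer's implementation: `nearest_hit`, `reflect`, and `refract` are assumed stand-ins, a hit is assumed to carry `.point`, `.normal`, and `.material`, and radiance is kept scalar for brevity.

    # Whitted-style recursion: shadow, reflection and refraction rays.
    MAX_DEPTH = 5
    EPS = 1e-4                     # nudge ray origins off the surface they left

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        length = dot(v, v) ** 0.5
        return tuple(x / length for x in v)

    def shade(scene, lights, origin, direction, depth=0):
        hit = nearest_hit(scene, origin, direction)   # assumed stand-in
        if hit is None or depth >= MAX_DEPTH:
            return 0.0                                # background / recursion cap
        point = tuple(p + EPS * n for p, n in zip(hit.point, hit.normal))
        radiance = 0.0
        for light in lights:                          # one shadow ray per light
            to_light = normalize(tuple(l - p for l, p in zip(light.position, point)))
            # Any hit blocks the light (distance check omitted for brevity).
            if nearest_hit(scene, point, to_light) is None:
                radiance += hit.material.diffuse * max(0.0, dot(hit.normal, to_light))
        if hit.material.reflectivity > 0:             # mirror-direction ray
            r = reflect(direction, hit.normal)
            radiance += hit.material.reflectivity * shade(scene, lights, point, r, depth + 1)
        if hit.material.transparency > 0:             # refraction ray (Snell's law)
            t = refract(direction, hit.normal, hit.material.ior)
            if t is not None:                         # None: total internal reflection
                radiance += hit.material.transparency * shade(scene, lights, hit.point, t, depth + 1)
        return radiance

The three recursive branches correspond exactly to Whitted's three new ray types; a renderer calls `shade` once per pixel with the primary ray from the eye.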

Advantages over other rendering methods

Ray tracing's popularity stems from its basis in a realistic simulation of lighting over other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming. The computational independence of each ray makes ray tracing amenable to parallelization.[4]

In addition to the high degree of realism, ray tracing can simulate the effects of a camera due to depth of field and aperture shape (in this case a hexagon).


Disadvantages

A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.

Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented, since the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required. The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays but include additional techniques (photon mapping, path tracing) give a far more accurate simulation of real-world lighting.

The number of reflections a “ray” can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to reflect up to 16 times. Multiple “reflections of reflections” can thus be seen. Created with Cobalt

The number of refractions a “ray” can take and how it is affected each time it encounters a surface is all controlled via software settings during ray tracing. Here, each ray was allowed to refract and reflect up to 9 times. Fresnel reflections were used. Also note the caustics. Created with Vray

It is also possible to approximate the equation using ray casting in a different way than what is traditionally considered to be "ray tracing". For performance, rays can be clustered according to their direction, with rasterization hardware and depth peeling used to efficiently sum the rays.[5]

Reversed direction of traversal of scene by the rays

The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the results. Therefore, it is clearer to distinguish eye-based versus light-based ray tracing.

While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.[6][7]

Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points.[8][9] The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.

An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.[10]

To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions. First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example, if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.

Example

As a demonstration of the principles involved in ray tracing, consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center $\mathbf{c}$ and radius $r$ is

$$\lVert \mathbf{x} - \mathbf{c} \rVert^2 = r^2.$$

Any point on a ray starting from point $\mathbf{s}$ (e.g. the position of a light source) with direction $\mathbf{d}$ (here $\mathbf{d}$ is a unit vector) can be written as

$$\mathbf{x} = \mathbf{s} + t\mathbf{d},$$

where $t$ is the distance between $\mathbf{x}$ and $\mathbf{s}$. In our problem, we know $\mathbf{c}$, $r$, $\mathbf{s}$ and $\mathbf{d}$, and we need to find $t$. Therefore, we substitute for $\mathbf{x}$:

$$\lVert \mathbf{s} + t\mathbf{d} - \mathbf{c} \rVert^2 = r^2.$$

Let $\mathbf{v} = \mathbf{s} - \mathbf{c}$ for simplicity; then

$$t^2 (\mathbf{d} \cdot \mathbf{d}) + 2t (\mathbf{v} \cdot \mathbf{d}) + (\mathbf{v} \cdot \mathbf{v}) - r^2 = 0.$$

Knowing that $\mathbf{d}$ is a unit vector allows us this minor simplification:

$$t^2 + 2t (\mathbf{v} \cdot \mathbf{d}) + (\mathbf{v} \cdot \mathbf{v}) - r^2 = 0.$$

This quadratic equation has solutions

$$t = -(\mathbf{v} \cdot \mathbf{d}) \pm \sqrt{(\mathbf{v} \cdot \mathbf{d})^2 - ((\mathbf{v} \cdot \mathbf{v}) - r^2)}.$$

The two values of $t$ found by solving this equation are the two such that $\mathbf{s} + t\mathbf{d}$ are the points where the ray intersects the sphere. Any value which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from $\mathbf{s}$ with opposite direction). If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least a positive solution, and let $t$ be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere. The normal to the sphere is simply

$$\mathbf{n} = \frac{\mathbf{y} - \mathbf{c}}{\lVert \mathbf{y} - \mathbf{c} \rVert},$$

where $\mathbf{y} = \mathbf{s} + t\mathbf{d}$ is the intersection point found before. The reflection direction can be found by a reflection of $\mathbf{d}$ with respect to $\mathbf{n}$, that is

$$\mathbf{r} = \mathbf{d} - 2 (\mathbf{n} \cdot \mathbf{d}) \mathbf{n}.$$

Thus the reflected ray has equation

$$\mathbf{x} = \mathbf{y} + u\mathbf{r}.$$

Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection. This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of ray tracing, but this demonstrates an example of the algorithms used.
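The derivation above translates almost line for line into code. A self-contained sketch in plain Python, with tuples as vectors:

    # Line-sphere intersection and mirror reflection, following the math above.
    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def intersect_sphere(s, d, c, r):
        # Ray x = s + t*d (d a unit vector), sphere ||x - c||^2 = r^2.
        v = tuple(si - ci for si, ci in zip(s, c))       # v = s - c
        b = dot(v, d)                                    # v . d
        disc = b * b - (dot(v, v) - r * r)               # the discriminant
        if disc < 0:
            return None                                  # ray misses the sphere
        t = -b - math.sqrt(disc)                         # smaller root first
        if t < 0:
            t = -b + math.sqrt(disc)                     # first root was behind s
        return t if t >= 0 else None

    def reflect(d, n):
        # r = d - 2 (n . d) n
        k = 2 * dot(n, d)
        return tuple(di - k * ni for di, ni in zip(d, n))

    # Example: unit sphere at the origin, ray from z = -5 aimed straight at it.
    t = intersect_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
    y = (0, 0, -5 + t)                                   # intersection point, t = 4
    n = tuple(yi / 1.0 for yi in y)                      # normal = (y - c) / r
    print(t, reflect((0, 0, 1), n))                      # 4.0, bounced back (0, 0, -1)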

Adaptive depth control

Adaptive depth control means that the renderer stops generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. A maximum depth must always be set, or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this, the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.

Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 × 0.5 = 0.25, the third: 0.25 × 0.5 = 0.125, the fourth: 0.125 × 0.5 = 0.0625, the fifth: 0.0625 × 0.5 = 0.03125, etc. In addition, we might implement a distance attenuation factor such as 1/D², which would also decrease the intensity contribution. For a transmitted ray we could do something similar, but in that case the distance traveled through the object would cause an even faster intensity decrease. As an example of this, Hall & Greenberg[citation needed] found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
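The scheme can be sketched as follows; scene and the hit object are assumed (hypothetical) interfaces, not part of the original text, and colours are (r, g, b) tuples:

MAX_DEPTH = 15      # hard cap: never recurse deeper than this
MIN_WEIGHT = 0.01   # adaptive cutoff on the running coefficient product
BLACK = (0.0, 0.0, 0.0)

def add_scaled(a, b, k):
    """Componentwise a + k*b, used to blend colour contributions."""
    return tuple(ai + k * bi for ai, bi in zip(a, b))

def trace(ray, scene, depth=0, weight=1.0):
    # weight is the product of reflection coefficients accumulated so far;
    # once it drops below MIN_WEIGHT, deeper bounces cannot contribute enough.
    if depth >= MAX_DEPTH or weight < MIN_WEIGHT:
        return BLACK
    hit = scene.nearest_hit(ray)          # assumed helper: None on a miss
    if hit is None:
        return BLACK
    colour = hit.shade(ray)               # assumed local-illumination helper
    kr = hit.reflectivity                 # e.g. Kr = 0.5 in the example above
    if kr > 0.0:
        colour = add_scaled(colour,
                            trace(hit.reflected_ray(ray), scene,
                                  depth + 1, weight * kr),
                            kr)
    return colour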


Bounding volumes

We enclose groups of objects in sets of hierarchical bounding volumes and first test for intersection with the bounding volume, and then, only if there is an intersection, against the objects enclosed by the volume; a sketch of such a two-level test appears after the list below. Bounding volumes should be easy to test for intersection, for example a sphere or box (slab). The best bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin, then a sphere will enclose mainly empty space and a box is much better. Boxes are also easier to use for hierarchical bounding volumes.

Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and logarithmic dependence. This is because, in the ideal case, each intersection test divides the possibilities by two, giving a binary-tree-type structure. Spatial subdivision methods, discussed below, try to achieve this.

Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:

• Subtrees should contain objects that are near each other, and the further down the tree, the closer the objects should be.
• The volume of each node should be minimal.
• The sum of the volumes of all bounding volumes should be minimal.
• Greater attention should be placed on the nodes near the root, since pruning a branch near the root will remove more potential objects than pruning one farther down the tree.
• The time spent constructing the hierarchy should be much less than the time saved by using it.
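A minimal sketch of such a two-level test, using axis-aligned boxes ("slabs"); the node layout (.bounds, .is_leaf, .primitives, .left, .right) is a hypothetical structure assumed for illustration:

def slab_hit(bounds, origin, inv_dir):
    """Ray versus axis-aligned box; bounds = ((min x,y,z), (max x,y,z)).

    inv_dir holds the reciprocals of the ray direction components
    (axis-parallel rays need special handling before this call)."""
    tmin, tmax = 0.0, float("inf")
    for lo, hi, o, inv in zip(bounds[0], bounds[1], origin, inv_dir):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def bvh_nearest(node, origin, inv_dir, intersect):
    """Nearest hit distance in the subtree, or None; intersect(prim) is an
    assumed per-primitive test returning a distance or None."""
    if not slab_hit(node.bounds, origin, inv_dir):
        return None                      # one cheap test culls the whole subtree
    if node.is_leaf:
        ts = [t for t in map(intersect, node.primitives) if t is not None]
        return min(ts) if ts else None
    hits = [bvh_nearest(child, origin, inv_dir, intersect)
            for child in (node.left, node.right)]
    hits = [t for t in hits if t is not None]
    return min(hits) if hits else None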

In real time

The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer graphics conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel network distributed ray-tracing system that achieved several frames per second in rendering performance.[11] This performance was attained by means of the highly optimized yet platform-independent LIBRT ray-tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared-memory parallel machines over a commodity network. BRL-CAD's ray-tracer, including the REMRT/RT tools, continues to be available and developed today as open source software.[12]

Since then, there have been considerable efforts and research towards implementing ray tracing at real-time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.[13] The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation-based approach for interactive 3D graphics.

Ray tracing hardware, such as the experimental Ray Processing Unit developed at Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage.[14] On June 12, 2008, Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14–29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.[15]

At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive

intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion.[16] Nvidia has shipped over 350,000,000 OptiX-capable GPUs as of April 2013. OptiX-based renderers are used in Adobe After Effects, Bunkspeed Shot, Autodesk Maya, 3ds Max, and many other renderers. Imagination Technologies offers a free API called OpenRL, which accelerates tail-recursive ray tracing-based rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to provide what 3D World calls "real-time raytracing to the everyday artist".[17]

References

[1] Appel A. (1968). Some techniques for shading machine rendering of solids (http://graphics.stanford.edu/courses/Appel.pdf). AFIPS Conference Proc. 32, pp. 37–45.
[2] Whitted T. (1979). An improved illumination model for shaded display (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1534). Proceedings of the 6th annual conference on Computer graphics and interactive techniques.
[4] A. Chalmers, T. Davis, and E. Reinhard. Practical Parallel Rendering. AK Peters, Ltd., 2002. ISBN 1-56881-179-9.
[5] GPU Gems 2, Chapter 38: High-Quality Global Illumination Rendering Using Rasterization. Addison-Wesley. (http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html)
[8] Global Illumination using Photon Maps (http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf)
[9] Photon Mapping – Zack Waters (http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html)
[10] http://graphics.stanford.edu/papers/metro/metro.pdf
[11] See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp. 86–98.
[17] 3D World, April 2013.

External links

• What is ray tracing? (http://www.codermind.com/articles/Raytracer-in-C++-Introduction-What-is-ray-tracing.html)
• Ray Tracing and Gaming – Quake 4: Ray Traced Project (http://www.pcper.com/article.php?aid=334)
• Ray Tracing and Gaming – One Year Later (http://www.pcper.com/article.php?aid=506)
• Interactive Ray Tracing: The replacement of rasterization? (http://www.few.vu.nl/~kielmann/theses/avdploeg.pdf)
• A series of tutorials on implementing a raytracer using C++ (http://devmaster.net/posts/raytracing-theory-implementation-part-1-introduction)
• Tutorial on implementing a raytracer in PHP (http://quaxio.com/raytracer/)
• The Compleat Angler (1978) (http://www.youtube.com/watch?v=WV4qXzM641o)



3D projection

3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.

Orthographic projection

When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic projection ignores this effect to allow the creation of to-scale drawings for construction and engineering. Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a three dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and elevation. If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point $a_x$, $a_y$, $a_z$ onto the 2D point $b_x$, $b_y$ using an orthographic projection parallel to the y axis (profile view), the following equations can be used:

$$b_x = s_x a_x + c_x, \qquad b_y = s_z a_z + c_z,$$

where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:

$$\begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & 0 & s_z \end{bmatrix} \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} + \begin{bmatrix} c_x \\ c_z \end{bmatrix}.$$

While orthographically projected images represent the three dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths near to the viewer are not foreshortened as they would be in a perspective projection.
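A direct Python transcription of these equations (names are illustrative):

def project_orthographic(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection parallel to the y axis (profile view)."""
    ax, _ay, az = a                 # depth along y is simply discarded
    return (s[0] * ax + c[0],       # b_x = s_x * a_x + c_x
            s[1] * az + c[1])       # b_y = s_z * a_z + c_z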


Perspective projection

When the human eye views a scene, objects in the distance appear smaller than objects close by; this is known as perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.

The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:

• $\mathbf{a}_{x,y,z}$ – the 3D position of a point A that is to be projected.
• $\mathbf{c}_{x,y,z}$ – the 3D position of a point C representing the camera.
• $\boldsymbol{\theta}_{x,y,z}$ – the orientation of the camera (represented, for instance, by Tait–Bryan angles).
• $\mathbf{e}_{x,y,z}$ – the viewer's position relative to the display surface.[1]

Which results in:

• $\mathbf{b}_{x,y}$ – the 2D projection of $\mathbf{a}$.

When $\mathbf{c} = \langle 0,0,0 \rangle$ and $\boldsymbol{\theta} = \langle 0,0,0 \rangle$, the 3D vector $\langle 1,2,0 \rangle$ is projected to the 2D vector $\langle 1,2 \rangle$.

Otherwise, to compute $\mathbf{b}$ we first define a vector $\mathbf{d}_{x,y,z}$ as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by $-\boldsymbol{\theta}$ with respect to the initial coordinate system. This is achieved by subtracting $\mathbf{c}$ from $\mathbf{a}$ and then applying a rotation by $-\boldsymbol{\theta}$ to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):[2][3]

$$\begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & \sin\theta_x \\ 0 & -\sin\theta_x & \cos\theta_x \end{bmatrix} \begin{bmatrix} \cos\theta_y & 0 & -\sin\theta_y \\ 0 & 1 & 0 \\ \sin\theta_y & 0 & \cos\theta_y \end{bmatrix} \begin{bmatrix} \cos\theta_z & \sin\theta_z & 0 \\ -\sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} - \begin{bmatrix} c_x \\ c_y \\ c_z \end{bmatrix} \right)$$

This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". Note that if the camera is not rotated ($\boldsymbol{\theta} = \langle 0,0,0 \rangle$), then the matrices drop out (as identities), and this reduces to simply a shift: $\mathbf{d} = \mathbf{a} - \mathbf{c}$.

Alternatively, expanding the matrix product, with $x' = a_x - c_x$, $y' = a_y - c_y$ and $z' = a_z - c_z$:

$$\begin{aligned} d_x &= \cos\theta_y \,(\cos\theta_z\, x' + \sin\theta_z\, y') - \sin\theta_y\, z' \\ d_y &= \cos\theta_x \,(-\sin\theta_z\, x' + \cos\theta_z\, y') + \sin\theta_x \left( \sin\theta_y (\cos\theta_z\, x' + \sin\theta_z\, y') + \cos\theta_y\, z' \right) \\ d_z &= -\sin\theta_x \,(-\sin\theta_z\, x' + \cos\theta_z\, y') + \cos\theta_x \left( \sin\theta_y (\cos\theta_z\, x' + \sin\theta_z\, y') + \cos\theta_y\, z' \right) \end{aligned}$$

This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):[4]

$$b_x = \frac{e_z}{d_z} d_x + e_x, \qquad b_y = \frac{e_z}{d_z} d_y + e_y.$$

Or, in matrix form using homogeneous coordinates, the system

$$\begin{bmatrix} f_x \\ f_y \\ f_w \end{bmatrix} = \begin{bmatrix} 1 & 0 & e_x / e_z \\ 0 & 1 & e_y / e_z \\ 0 & 0 & 1 / e_z \end{bmatrix} \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix}$$

in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving

$$b_x = f_x / f_w, \qquad b_y = f_y / f_w.$$


The distance of the viewer from the display surface, $e_z$, directly relates to the field of view: $\alpha = 2 \tan^{-1}(1/e_z)$ is the viewed angle. (Note: this assumes that you map the points (−1,−1) and (1,1) to the corners of your viewing surface.) The above equations can also be rewritten as:

$$b_x = \frac{d_x s_x}{d_z r_x} r_z, \qquad b_y = \frac{d_y s_y}{d_z r_y} r_z,$$

in which $s_{x,y}$ is the display size, $r_{x,y}$ is the recording surface size (CCD or film), $r_z$ is the distance from the recording surface to the entrance pupil (camera center), and $d_z$ is the distance, from the 3D point being projected, to the entrance pupil.

Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.

Diagram

To determine which screen x-coordinate corresponds to a point at $(a_x, a_z)$, multiply the point coordinates by

$$b_x = a_x \frac{f}{a_z},$$

where $b_x$ is the screen x coordinate, $a_x$ is the model x coordinate, $f$ is the focal length (the axial distance from the camera center to the image plane), and $a_z$ is the subject distance. Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above equation.
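The whole pipeline (camera transform followed by the pinhole division) can be sketched in Python as follows; the rotation order matches the matrices above, and all names are illustrative:

import math

def project_perspective(a, c, theta, e):
    """Project 3D point a through a camera at c with orientation theta;
    e is the viewer's position relative to the display surface."""
    x, y, z = (ai - ci for ai, ci in zip(a, c))      # translate by -c
    sx, sy, sz = (math.sin(t) for t in theta)
    cx, cy, cz = (math.cos(t) for t in theta)
    # rotate about z, then y, then x (the matrix product, right to left)
    x, y = cz * x + sz * y, -sz * x + cz * y
    x, z = cy * x - sy * z, sy * x + cy * z
    y, z = cx * y + sx * z, -sx * y + cx * z
    if z == 0.0:
        return None                                  # point lies in the camera plane
    return ((e[2] / z) * x + e[0],                   # b_x
            (e[2] / z) * y + e[1])                   # b_y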


External links

• A case study in camera projection (http://nccasymposium.bmth.ac.uk/2007/muhittin_bilginer/index.html)
• Creating 3D Environments from Digital Photographs (http://nccasymposium.bmth.ac.uk/2009/McLaughlin_Chris/McLaughlin_C_WebBasedNotes.pdf)

Further reading

• Kenneth C. Finney (2004). 3D Game Programming All in One (http://books.google.com/?id=cknGqaHwPFkC&pg=PA93&dq="3D+projection"). Thomson Course. p. 93. ISBN 978-1-59200-136-1.
• Koehler, Dr. Ralph. 2D/3D Graphics and Splines with Source Code. ISBN 0759611874.


Light

Visible light (commonly referred to simply as light) is electromagnetic radiation that is visible to the human eye, and is responsible for the sense of sight.[1] Visible light has a wavelength in the range of about 380 nanometres (nm), or 380×10−9 m, to about 740 nanometres – between the invisible infrared, with longer wavelengths, and the invisible ultraviolet, with shorter wavelengths. Primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarisation, while its speed in a vacuum, 299,792,458 metres per second, is one of the fundamental constants of nature. Visible light, as with all types of electromagnetic radiation (EMR), is experimentally found to always move at this speed in vacuum.

The Sun is Earth's primary source of light. About 44% of the sun's electromagnetic radiation that reaches the ground is in the visible light range.

In common with all types of EMR, visible light is emitted and absorbed in tiny "packets" called photons, and exhibits properties of both waves and particles. This property is referred to as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not.[2][3] This article focuses on visible light. See the electromagnetic radiation article for the general term.

Speed of light

The speed of light in a vacuum is defined to be exactly 299,792,458 m/s (approximately 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the metre is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum.

Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. An early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 22 minutes to traverse the diameter of Earth's orbit.[4] However, the orbit's size was not known at that time. If Rømer had known the diameter of the Earth's orbit, he would have calculated a speed of 227,000,000 m/s.

Another, more accurate, measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometres away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel, and the rate of rotation, Fizeau was able to calculate the speed of light as 313,000,000 m/s.
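As a check on these figures, take the modern value of about 2 × 1.496×10¹¹ m for the diameter of Earth's orbit, and the commonly cited parameters of Fizeau's apparatus (roughly 8,633 m to the mirror, 720 teeth, 25.2 rotations per second; quoted here only for illustration):

$$v_{\text{R\o mer}} \approx \frac{2 \times 1.496 \times 10^{11}\ \mathrm{m}}{22 \times 60\ \mathrm{s}} \approx 2.27 \times 10^{8}\ \mathrm{m/s}, \qquad c_{\text{Fizeau}} = 2 D n f \approx 2 \times 8633 \times 720 \times 25.2\ \mathrm{m/s} \approx 3.13 \times 10^{8}\ \mathrm{m/s}.$$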


Léon Foucault used an experiment with rotating mirrors to obtain a value of 298,000,000 m/s in 1862. Albert A. Michelson conducted experiments on the speed of light from 1877 until his death in 1931. He refined Foucault's methods in 1926, using improved rotating mirrors to measure the time it took light to make a round trip from Mt. Wilson to Mt. San Antonio in California. The precise measurements yielded a speed of 299,796,000 m/s.

The effective velocity of light in various transparent substances containing ordinary matter is less than in vacuum. For example, the speed of light in water is about 3/4 of that in vacuum. However, the slowing process in matter is thought to result not from actual slowing of particles of light, but rather from their absorption and re-emission by charged particles in matter. As an extreme example of the nature of light-slowing in matter, two independent teams of physicists were able to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium, one team at Harvard University and the Rowland Institute for Science in Cambridge, Mass., and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge.[5] However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrary later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light.

Electromagnetic spectrum and visible light

Generally, EM radiation, or EMR (the designation 'radiation' excludes static electric and magnetic fields and near fields), is classified by wavelength into radio, microwave, infrared, the visible region that we perceive as light, ultraviolet, X-rays and gamma rays. The behaviour of EMR depends on its wavelength. Higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behaviour depends on the amount of energy per quantum it carries.

Electromagnetic spectrum with light highlighted

EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies that are capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. At the lower end of the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in the visual molecule retinal in the human retina, the change which triggers the sensation of vision. There exist animals that are sensitive to various types of infrared, but not by means of quantum absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, and this is how living animals detect it.

Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the tissues of the eye and in particular the lens. Furthermore, the rods and cones located at the back of the human eye cannot detect the short ultraviolet wavelengths, and are in fact damaged by ultraviolet rays, a condition known as snow blindness.[6] Many animals with eyes that do not require lenses (such as insects and shrimp) are able to directly detect ultraviolet visually, by quantum photon-absorption mechanisms, in much the same chemical way that normal humans detect visible light.


Optics

The study of light and the interaction of light and matter is termed optics. The observation and study of optical phenomena such as rainbows and the aurora borealis offer many clues as to the nature of light.

Refraction

Refraction is the bending of light rays when passing through a surface between one transparent material and another. It is described by Snell's law:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2,$$

where $\theta_1$ is the angle between the ray and the surface normal in the first medium, $\theta_2$ is the angle between the ray and the surface normal in the second medium, and $n_1$ and $n_2$ are the indices of refraction; n = 1 in a vacuum and n > 1 in a transparent substance.

When a beam of light crosses the boundary between a vacuum and another medium, or between two different media, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or rather normal) to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction.

The refractive quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation.

An example of refraction of light. The straw appears bent, because of refraction of light as it enters liquid from air.
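Snell's law is easily applied to ray directions, which is how a ray tracer bends a ray at a transparent surface. A small Python sketch (names are illustrative; the normal n is assumed to point against the incident direction):

import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n, going from
    index n1 into index n2; returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -sum(di * ni for di, ni in zip(d, n))    # cos(theta_1)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)       # sin^2(theta_2) via Snell
    if sin2_t > 1.0:
        return None                                  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))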

Light sources

There are many sources of light. The most common light sources are thermal: a body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight: the radiation emitted by the chromosphere of the Sun at around 6,000 kelvin peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units,[7] and roughly 44% of sunlight energy that reaches the ground is visible.[8] Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history is the glowing solid particles in flames, but these also emit most of their radiation in the infrared, and only a fraction in the visible spectrum.

A cloud illuminated by sunlight

The peak of the blackbody spectrum is in the deep infrared, at about 10 micrometre wavelength, for relatively cool objects like human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a red glow, then a white one, and finally a blue-white colour as the peak moves out of the visible part of the spectrum and into the ultraviolet. These colours can


be seen when metal is heated to "red hot" or "white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-blue colour in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH radicals emitting a wavelength band around 425 nm, and is not seen in stars or pure thermal radiation).

Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.), and flames (light from the hot gas itself—so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser.

Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron radiation, synchrotron radiation, and bremsstrahlung radiation are all examples of this. Particles moving through a medium faster than the speed of light in that medium can produce visible Cherenkov radiation. Certain chemicals produce visible radiation by chemoluminescence. In living things, this process is called bioluminescence. For example, fireflies produce light by this means, and boats moving through water can disturb plankton which produce a glowing wake.

Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. Some substances emit light slowly after excitation by more energetic radiation. This is known as phosphorescence. Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example. This mechanism is used in cathode ray tube television sets and computer monitors.

Certain other mechanisms can produce light:

• Bioluminescence
• Cherenkov radiation
• Electroluminescence
• Scintillation
• Sonoluminescence
• Triboluminescence

When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include:

• Particle–antiparticle annihilation
• Radioactive decay

A city illuminated by artificial lighting

Units and measures

Light is measured with two main alternative sets of units: radiometry consists of measurements of light power at all wavelengths, while photometry measures light with wavelength weighted with respect to a standardised model of human brightness perception. Photometry is useful, for example, to quantify illumination (lighting) intended for human use. The SI units for both systems are summarised in the following tables.


SI radiometry units

Quantity (symbol) | SI unit (symbol) [9] | Dimension | Notes
Radiant energy (Qe [10]) | joule (J) | M⋅L2⋅T−2 | energy
Radiant flux (Φe [10]) | watt (W) | M⋅L2⋅T−3 | radiant energy per unit time, also called radiant power
Spectral power (Φeλ [10][11]) | watt per metre (W⋅m−1) | M⋅L⋅T−3 | radiant power per wavelength
Radiant intensity (Ie) | watt per steradian (W⋅sr−1) | M⋅L2⋅T−3 | power per unit solid angle
Spectral intensity (Ieλ [11]) | watt per steradian per metre (W⋅sr−1⋅m−1) | M⋅L⋅T−3 | radiant intensity per wavelength
Radiance (Le) | watt per steradian per square metre (W⋅sr−1⋅m−2) | M⋅T−3 | power per unit solid angle per unit projected source area; confusingly called "intensity" in some other fields of study
Spectral radiance (Leλ [11] or Leν [12]) | watt per steradian per metre3 (W⋅sr−1⋅m−3) or watt per steradian per square metre per hertz (W⋅sr−1⋅m−2⋅Hz−1) | M⋅L−1⋅T−3 or M⋅T−2 | commonly measured in W⋅sr−1⋅m−2⋅nm−1 with surface area and either wavelength or frequency
Irradiance (Ee [10]) | watt per square metre (W⋅m−2) | M⋅T−3 | power incident on a surface, also called radiant flux density; sometimes confusingly called "intensity" as well
Spectral irradiance (Eeλ [11] or Eeν [12]) | watt per metre3 (W⋅m−3) or watt per square metre per hertz (W⋅m−2⋅Hz−1) | M⋅L−1⋅T−3 or M⋅T−2 | commonly measured in W⋅m−2⋅nm−1 or 10−22 W⋅m−2⋅Hz−1, known as the solar flux unit [13]
Radiant exitance / radiant emittance (Me [10]) | watt per square metre (W⋅m−2) | M⋅T−3 | power emitted from a surface
Spectral radiant exitance / spectral radiant emittance (Meλ [11] or Meν [12]) | watt per metre3 (W⋅m−3) or watt per square metre per hertz (W⋅m−2⋅Hz−1) | M⋅L−1⋅T−3 or M⋅T−2 | power emitted from a surface per wavelength or frequency
Radiosity (Je or Jeλ [11]) | watt per square metre (W⋅m−2) | M⋅T−3 | emitted plus reflected power leaving a surface
Radiant exposure (He) | joule per square metre (J⋅m−2) | M⋅T−2 |
Radiant energy density (ωe) | joule per metre3 (J⋅m−3) | M⋅L−1⋅T−2 |

See also: SI · Radiometry · Photometry · (Compare)


SI photometry units

Quantity (symbol) | SI unit (symbol) [14] | Dimension | Notes
Luminous energy (Qv [15]) | lumen second (lm⋅s) | T⋅J [16] | units are sometimes called talbots
Luminous flux (Φv [15]) | lumen (= cd⋅sr) (lm) | J [16] | also called luminous power
Luminous intensity (Iv) | candela (= lm/sr) (cd) | J [16] | an SI base unit; luminous flux per unit solid angle
Luminance (Lv) | candela per square metre (cd/m2) | L−2⋅J | units are sometimes called nits
Illuminance (Ev) | lux (= lm/m2) (lx) | L−2⋅J | used for light incident on a surface
Luminous emittance (Mv) | lux (= lm/m2) (lx) | L−2⋅J | used for light emitted from a surface
Luminous exposure (Hv) | lux second (lx⋅s) | L−2⋅T⋅J |
Luminous energy density (ωv) | lumen second per metre3 (lm⋅s⋅m−3) | L−3⋅T⋅J |
Luminous efficacy (η [15]) | lumen per watt (lm/W) | M−1⋅L−2⋅T3⋅J | ratio of luminous flux to radiant flux
Luminous efficiency (V) | 1 (dimensionless) | | also called luminous coefficient

See also: SI · Photometry · Radiometry · (Compare)

The photometry units are different from most systems of physical units in that they take into account how the human eye responds to light. The cone cells in the human eye are of three types which respond differently across the visible spectrum, and the cumulative response peaks at a wavelength of around 555 nm. Therefore, two sources of light which produce the same intensity (W/m2) of visible light do not necessarily appear equally bright. The photometry units are designed to take this into account, and therefore are a better representation of how "bright" a light appears to be than raw intensity. They relate to raw power by a quantity called luminous efficacy, and are used for purposes like determining how to best achieve sufficient illumination for various tasks in indoor and outdoor settings. The illumination measured by a photocell sensor does not necessarily correspond to what is perceived by the human eye, and without filters, which may be costly, photocells and charge-coupled devices (CCDs) tend to respond to some infrared, ultraviolet, or both.

Light pressure

Light exerts physical pressure on objects in its path, a phenomenon which can be deduced from Maxwell's equations, but can be more easily explained by the particle nature of light: photons strike and transfer their momentum. Light pressure is equal to the power of the light beam divided by c, the speed of light. Due to the magnitude of c, the effect of light pressure is negligible for everyday objects. For example, a one-milliwatt laser pointer exerts a force of about 3.3 piconewtons on the object being illuminated; thus, one could lift a U.S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers.[17] However, in nanometre-scale applications such as NEMS, the effect of light pressure is more significant, and exploiting light pressure to drive NEMS mechanisms and to flip nanometre-scale physical switches in integrated circuits is an active area of research.[18] At larger scales, light pressure can cause asteroids to spin faster,[19] acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation.[20][21]

Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum.[22] This should not be confused with the Nichols radiometer, in which the (slight) motion caused by torque (though not enough for full rotation against friction) is directly caused by light pressure.[23]
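For the laser-pointer figure quoted above:

$$F = \frac{P}{c} = \frac{1 \times 10^{-3}\ \mathrm{W}}{3.00 \times 10^{8}\ \mathrm{m/s}} \approx 3.3 \times 10^{-12}\ \mathrm{N} = 3.3\ \mathrm{pN}.$$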


Historical theories about light, in chronological order

Classical Greece and Hellenism

In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye, making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.

In about 300 BC, Euclid wrote Optica, in which he studied the properties of light. Euclid postulated that light travelled in straight lines, and he described the laws of reflection and studied them mathematically. He questioned whether sight is the result of a beam from the eye, asking how one sees the stars immediately if one closes one's eyes, then opens them at night. Of course, if the beam from the eye travels infinitely fast this is not a problem.

In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote: "The light & heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove." – On the Nature of the Universe. Despite being similar to later particle theories, Lucretius's views were not generally accepted.

Ptolemy (c. 2nd century) wrote about the refraction of light in his book Optics.[24]

Classical India

In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the early centuries CE, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned, and it appears that they were actually taken to be continuous. On the other hand, the Vaisheshika school gives an atomic theory of the physical world on the non-atomic ground of ether, space and time. (See Indian atomism.) The basic atoms are those of earth (prthivi), water (pani), fire (agni), and air (vayu). Light rays are taken to be a stream of high-velocity tejas (fire) atoms. The particles of light can exhibit different characteristics depending on the speed and the arrangements of the tejas atoms.[citation needed] The Vishnu Purana refers to sunlight as "the seven rays of the sun".[citation needed]

The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century, developed a type of atomism that is a philosophy about reality being composed of atomic entities that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent to energy.[citation needed]

Descartes

René Descartes (1596–1650) held that light was a mechanical property of the luminous body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Bacon, Grosseteste, and Kepler.[25] In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves.[citation needed] Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media. Descartes was not the first to use mechanical analogies, but because he clearly asserted that light was only a mechanical property of the luminous body and the transmitting medium, Descartes' theory of light is regarded as the start of modern physical optics.[25]


Particle theory

Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age, and preferred his view to Descartes' theory of the plenum. He stated in his Hypothesis of Light of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether. Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater.

Pierre Gassendi.

Newton published the final version of his theory in his Opticks of 1704. His reputation helped the particle theory of light to hold sway during the 18th century. The particle theory of light led Laplace to argue that a body could be so massive that light could not escape from it. In other words, it would become what is now called a black hole. Laplace withdrew his suggestion later, after a wave theory of light became firmly established as the model for light (as has been explained, neither a particle nor a wave theory is fully correct). A translation of Newton's essay on light appears in The Large Scale Structure of Space-Time, by Stephen Hawking and George F. R. Ellis.

Wave theory

To explain the origin of colours, Robert Hooke (1635–1703) developed a "pulse theory" and compared the spreading of light to that of waves in water in his 1665 Micrographia ("Observation XI"). In 1672 Hooke suggested that light's vibrations could be perpendicular to the direction of propagation. Christiaan Huygens (1629–1695) worked out a mathematical wave theory of light in 1678, and published it in his Treatise on Light in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the luminiferous ether. As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium.[26]

The wave theory predicted that light waves could interfere with each other like sound waves (as noted around 1800 by Thomas Young), and that light could be polarised, if it were a transverse wave. Young showed by means of a diffraction experiment that light behaved as waves. He also proposed that different colours were caused by different wavelengths of light, and explained colour vision in terms of three-coloured receptors in the eye. Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis et colorum (1746) that diffraction could more easily be explained by a wave theory.

Thomas Young's sketch of the two-slit experiment showing the diffraction of light. Young's experiments supported the theory that light consists of waves.

Later, Augustin-Jean Fresnel independently worked out his own wave theory of light, and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favour of the wave theory, helping to overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical methods that polarisation could be explained only by the wave

theory of light and only if light was entirely transverse, with no longitudinal vibration whatsoever. The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. The existence of the hypothetical substance luminiferous aether proposed by Huygens in 1678 was cast into strong doubt in the late nineteenth century by the Michelson–Morley experiment. Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850.[27] His result supported the wave theory, and the classical particle theory was finally abandoned, only to partly re-emerge in the 20th century.

Quantum theory

In 1900 Max Planck, attempting to explain black-body radiation, suggested that although light was a wave, these waves could gain or lose energy only in finite amounts related to their frequency. Planck called these "lumps" of light energy "quanta" (from a Latin word for "how much"). In 1905, Albert Einstein used the idea of light quanta to explain the photoelectric effect, and suggested that these light quanta had a "real" existence. In 1923 Arthur Holly Compton showed that the wavelength shift seen when low-intensity X-rays scattered from electrons (so-called Compton scattering) could be explained by a particle theory of X-rays, but not a wave theory. In 1926 Gilbert N. Lewis named these light quanta particles photons.

Eventually the modern theory of quantum mechanics came to picture light as (in some sense) both a particle and a wave, and (in another sense), as a phenomenon which is neither a particle nor a wave (which actually are macroscopic phenomena, such as baseballs or ocean waves). Instead, modern physics sees light as something that can be described sometimes with mathematics appropriate to one type of macroscopic metaphor (particles), and sometimes another macroscopic metaphor (water waves), but is actually something that cannot be fully imagined. As in the case for radio waves and the X-rays involved in Compton scattering, physicists have noted that electromagnetic radiation tends to behave more like a classical wave at lower frequencies, but more like a classical particle at higher frequencies, but never completely loses all qualities of one or the other. Visible light, which occupies a middle ground in frequency, can easily be shown in experiments to be describable using either a wave or particle model, or sometimes both.

Electromagnetic theory as explanation for all types of visible light and all EM radiation

In 1845, Michael Faraday discovered that the plane of polarisation of linearly polarised light is rotated when the light rays travel along the magnetic field direction in the presence of a transparent dielectric, an effect now known as Faraday rotation. This was the first evidence that light was related to electromagnetism. In 1846 he speculated that light might be some form of disturbance propagating along magnetic field lines. Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether.

A linearly polarised light wave frozen in time and showing the two oscillating components of light; an electric field and a magnetic field perpendicular to each other and to the direction of motion (a transverse wave).


Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light. From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism, which contained a full mathematical description of the behaviour of electric and magnetic fields, still known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory, and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction, and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging, and wireless communications.

In the quantum theory, photons are seen as wave packets of the waves described in the classical theory of Maxwell. The quantum theory was needed to explain effects even with visible light that Maxwell's classical theory could not (such as spectral lines).

Notes

[1] CIE (1987). International Lighting Vocabulary (http://www.cie.co.at/publ/abst/17-4-89.html). Number 17.4. CIE, 4th edition. ISBN 978-3-900734-07-7. By the International Lighting Vocabulary, the definition of light is: "Any radiation capable of causing a visual sensation directly."
[6] http://www.yorku.ca/eye/lambdas.htm
[7] http://thulescientific.com/LYNCH%20&%20Soffer%20OPN%201999.pdf
[9] Standards organizations recommend that radiometric quantities should be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
[10] Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant emittance.
[11] Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, reflectance and responsivity.
[12] Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek), not to be confused with the suffix "v" (for "visual") indicating a photometric quantity.
[13] NOAA / Space Weather Prediction Center (http://www.swpc.noaa.gov/forecast_verification/F10.html) includes a definition of the solar flux unit (SFU).
[14] Standards organizations recommend that photometric quantities be denoted with a suffix "v" (for "visual") to avoid confusion with radiometric or photon quantities.
[15] Alternative symbols sometimes seen: W for luminous energy, P or F for luminous flux, and ρ or K for luminous efficacy.
[16] "J" here is the symbol for the dimension of luminous intensity, not the symbol for the unit joules.
[18] See, for example, nano-opto-mechanical systems research at Yale University (http://www.eng.yale.edu/tanglab/research.htm).
[22] P. Lebedev, Untersuchungen über die Druckkräfte des Lichtes, Ann. Phys. 6, 433 (1901).
[25] Theories of Light, from Descartes to Newton. A. I. Sabra. CUP Archive, 1981, p. 48. ISBN 0-521-28436-8, ISBN 978-0-521-28436-3.
[26] Fokko Jan Dijksterhuis, Lenses and Waves: Christiaan Huygens and the Mathematical Science of Optics in the 17th Century (http://books.google.com/books?id=cPFevyomPUIC), Kluwer Academic Publishers, 2004, ISBN 1-4020-2697-8.



Radiance

Radiance and spectral radiance are measures of the quantity of radiation that passes through or is emitted from a surface and falls within a given solid angle in a specified direction. They are used in radiometry to characterize diffuse emission and reflection of electromagnetic radiation. In astrophysics, radiance is also used to quantify emission of neutrinos and other particles. The SI unit of radiance is watts per steradian per square metre (W·sr−1·m−2), while that of spectral radiance is W·sr−1·m−2·Hz−1 or W·sr−1·m−3, depending on whether the spectrum is a function of frequency or of wavelength.

Description

Radiance characterizes total emission or reflection. Radiance is useful because it indicates how much of the power emitted by an emitting or reflecting surface will be received by an optical system looking at the surface from some angle of view. In this case, the solid angle of interest is the solid angle subtended by the optical system's entrance pupil. Since the eye is an optical system, radiance and its cousin luminance are good indicators of how bright an object will appear. For this reason, radiance and luminance are both sometimes called "brightness". This usage is now discouraged – see Brightness for a discussion. The nonstandard usage of "brightness" for "radiance" persists in some fields, notably laser physics. The radiance divided by the index of refraction squared is invariant in geometric optics. This means that for an ideal optical system in air, the radiance at the output is the same as the input radiance. This is sometimes called conservation of radiance. For real, passive, optical systems, the output radiance is at most equal to the input, unless the index of refraction changes. As an example, if you form a demagnified image with a lens, the optical power is concentrated into a smaller area, so the irradiance is higher at the image. The light at the image plane, however, fills a larger solid angle so the radiance comes out to be the same assuming there is no loss at the lens. Spectral radiance expresses radiance as a function of frequency (Hz) with SI units W·sr−1·m−2·Hz−1 or wavelength (nm) with units of W·sr−1·m−2·nm−1 (more common than W·sr−1·m−3). In some fields spectral radiance is also measured in microflicks.[1][2] Radiance is the integral of the spectral radiance over all wavelengths or frequencies. For radiation emitted by an ideal black body at temperature T, spectral radiance is governed by Planck's law, while the integral of radiance over the hemisphere into which it radiates, in W/m2, is governed by the Stefan-Boltzmann law. There is no need for a separate law for radiance normal to the surface of a black body, in W/m2/sr, since this is simply the Stefan-Boltzmann law divided by π. This factor is obtained from the solid angle 2π steradians of a hemisphere decreased by integration over the cosine of the zenith angle. More generally the radiance at an angle θ to the normal (the zenith angle) is given by the Stefan-Boltzmann law times cos(θ)/π.
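As a worked example of the relation just described, the radiance normal to the surface of a Lambertian black body is the Stefan–Boltzmann exitance divided by π; taking the Sun's surface temperature to be roughly 5,800 K (a round figure assumed here for illustration):

$$L = \frac{\sigma T^4}{\pi} = \frac{5.67 \times 10^{-8} \times (5800)^4}{\pi}\ \mathrm{W\,sr^{-1}\,m^{-2}} \approx 2.0 \times 10^{7}\ \mathrm{W\,sr^{-1}\,m^{-2}}.$$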

Definition

Radiance is defined by

$$L = \frac{\mathrm{d}^2\Phi}{\mathrm{d}A\,\mathrm{d}\Omega\,\cos\theta} \approx \frac{\Phi}{\Omega\,A\,\cos\theta},$$

where L is the observed or measured radiance (W·m−2·sr−1) in the direction θ, d is the differential operator, Φ is the total radiant flux or power (W) emitted, θ is the angle between the surface normal and the specified direction, A is the area of the surface (m2), and Ω is the solid angle (sr) subtended by the observation or measurement.

The approximation only holds for small A and Ω where cos θ is approximately constant.

In general, L is a function of viewing angle through the cos θ term in the denominator, as well as through the θ (and potentially azimuth-angle) dependence of Φ. For the special case of a Lambertian source, L is constant, such that dΦ/(dA dΩ) is proportional to cos θ.

When calculating the radiance emitted by a source, A refers to an area on the surface of the source, and Ω to the solid angle into which the light is emitted. When calculating radiance at a detector, A refers to an area on the surface of the detector and Ω to the solid angle subtended by the source as viewed from that detector. When radiance is conserved, as discussed above, the radiance emitted by a source is the same as that received by a detector observing it. The spectral radiance (radiance per unit wavelength) is written Lλ and the radiance per unit frequency is written Lν.

Intensity

Radiance is often, confusingly, called intensity in other areas of study, especially heat transfer, astrophysics and astronomy. Intensity has many other meanings in physics, with the most common being power per unit area. The distinction lies in the area rather than the subtended angle of the observer, and the relative area of the source.

External links

• International Lighting in Controlled Environments Workshop (http://ncr101.montana.edu/Light1994Conf/4_2_Sliney/Sliney Text.htm)

SI radiometry units

Quantity (symbol) | SI unit (symbol) [1] | Dimension | Notes
Radiant energy (Qe [2]) | joule (J) | M⋅L2⋅T−2 | energy
Radiant flux (Φe [2]) | watt (W) | M⋅L2⋅T−3 | radiant energy per unit time, also called radiant power
Spectral power (Φeλ [2][3]) | watt per metre (W⋅m−1) | M⋅L⋅T−3 | radiant power per wavelength
Radiant intensity (Ie) | watt per steradian (W⋅sr−1) | M⋅L2⋅T−3 | power per unit solid angle
Spectral intensity (Ieλ [3]) | watt per steradian per metre (W⋅sr−1⋅m−1) | M⋅L⋅T−3 | radiant intensity per wavelength
Radiance (Le) | watt per steradian per square metre (W⋅sr−1⋅m−2) | M⋅T−3 | power per unit solid angle per unit projected source area; confusingly called "intensity" in some other fields of study
Spectral radiance (Leλ [3] or Leν [4]) | watt per steradian per metre3 (W⋅sr−1⋅m−3) or watt per steradian per square metre per hertz (W⋅sr−1⋅m−2⋅Hz−1) | M⋅L−1⋅T−3 or M⋅T−2 | commonly measured in W⋅sr−1⋅m−2⋅nm−1 with surface area and either wavelength or frequency
Irradiance (Ee [2]) | watt per square metre (W⋅m−2) | M⋅T−3 | power incident on a surface, also called radiant flux density; sometimes confusingly called "intensity" as well
Spectral irradiance (Eeλ [3] or Eeν [4]) | watt per metre3 (W⋅m−3) or watt per square metre per hertz (W⋅m−2⋅Hz−1) | M⋅L−1⋅T−3 or M⋅T−2 | commonly measured in W⋅m−2⋅nm−1 or 10−22 W⋅m−2⋅Hz−1, known as the solar flux unit [5]
Radiant exitance / radiant emittance (Me [2]) | watt per square metre (W⋅m−2) | M⋅T−3 | power emitted from a surface
Spectral radiant exitance / spectral radiant emittance (Meλ [3] or Meν [4]) | watt per metre3 (W⋅m−3) or watt per square metre per hertz (W⋅m−2⋅Hz−1) | M⋅L−1⋅T−3 or M⋅T−2 | power emitted from a surface per wavelength or frequency
Radiosity (Je or Jeλ [3]) | watt per square metre (W⋅m−2) | M⋅T−3 | emitted plus reflected power leaving a surface
Radiant exposure (He) | joule per square metre (J⋅m−2) | M⋅T−2 |
Radiant energy density (ωe) | joule per metre3 (J⋅m−3) | M⋅L−1⋅T−2 |

See also: SI · Radiometry · Photometry

[1] Standards organizations recommend that radiometric quantities should be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
[2] Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant emittance.
[3] Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, reflectance and responsivity.
[4] Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek), not to be confused with the suffix "v" (for "visual") indicating a photometric quantity.
[5] NOAA / Space Weather Prediction Center (http://www.swpc.noaa.gov/forecast_verification/F10.html) includes a definition of the solar flux unit (SFU).


Photometry

Photometry is the science of the measurement of light, in terms of its perceived brightness to the human eye.[5] It is distinct from radiometry, which is the science of measurement of radiant energy (including light) in terms of absolute power. In photometry, the radiant power at each wavelength is weighted by a luminosity function that models human brightness sensitivity. Typically, this weighting function is the photopic sensitivity function, although the scotopic function or other functions may also be applied in the same way.

Photometry and the eye

Photopic (daytime-adapted, black curve) and scotopic [1] (darkness-adapted, green curve) luminosity functions. The photopic includes the CIE 1931 standard [2] (solid), the Judd-Vos 1978 modified data [3] (dashed), and the Sharpe, Stockman, Jagla & Jägle 2005 data [4] (dotted). The horizontal axis is wavelength in nm.

The human eye is not equally sensitive to all wavelengths of visible light. Photometry attempts to account for this by weighting the measured power at each wavelength with a factor that represents how sensitive the eye is at that wavelength. The standardized model of the eye's response to light as a function of wavelength is given by the luminosity function. Note that the eye has different responses as a function of wavelength when it is adapted to light conditions (photopic vision) and dark conditions (scotopic vision). Photometry is typically based on the eye's photopic response, and so photometric measurements may not accurately indicate the perceived brightness of sources in dim lighting conditions where colours are not discernible, such as under just moonlight or starlight.[5] Photopic vision is characteristic of the eye's response at luminance levels over three candela per square metre. Scotopic vision occurs below 2 × 10−5 cd/m2. Mesopic vision occurs between these limits and is not well characterised for spectral response.[5]

Photometric quantities

Measurement of the effects of electromagnetic radiation became a field of study as early as the end of the 18th century. Measurement techniques varied depending on the effects under study and gave rise to different nomenclature. The total heating effect of infrared radiation as measured by thermometers led to development of radiometric units in terms of total energy and power. Use of the human eye as a detector led to photometric units, weighted by the eye's response characteristic. Study of the chemical effects of ultraviolet radiation led to characterization by the total dose, or actinometric units expressed in photons per second.[5]

Many different units of measure are used for photometric measurements. People sometimes ask why there need to be so many different units, or ask for conversions between units that can't be converted (lumens and candelas, for example). We are familiar with the idea that the adjective "heavy" can refer to weight or density, which are fundamentally different things. Similarly, the adjective "bright" can refer to a light source which delivers a high luminous flux (measured in lumens), or to a light source which concentrates the luminous flux it has into a very narrow beam (candelas), or to a light source that is seen against a dark background. Because of the ways in which light propagates through three-dimensional space — spreading out, becoming concentrated, reflecting off shiny or

Photometry

41

matte surfaces — and because light consists of many different wavelengths, the number of fundamentally different kinds of light measurement that can be made is large, and so are the numbers of quantities and units that represent them. For example, offices are typically "brightly" illuminated by an array of many recessed fluorescent lights for a combined high luminous flux. A laser pointer has very low luminous flux (it could not illuminate a room) but is blindingly bright in one direction (high luminous intensity in that direction).

SI photometry quantities [6]

Quantity                 | Symbol  | Unit                      | Unit symbol | Dimension    | Notes
Luminous energy          | Qv [7]  | lumen second              | lm⋅s        | T⋅J [8]      | units are sometimes called talbots
Luminous flux            | Φv [7]  | lumen (= cd⋅sr)           | lm          | J [8]        | also called luminous power
Luminous intensity       | Iv      | candela (= lm/sr)         | cd          | J [8]        | an SI base unit; luminous flux per unit solid angle
Luminance                | Lv      | candela per square metre  | cd/m2       | L−2⋅J        | units are sometimes called nits
Illuminance              | Ev      | lux (= lm/m2)             | lx          | L−2⋅J        | used for light incident on a surface
Luminous emittance       | Mv      | lux (= lm/m2)             | lx          | L−2⋅J        | used for light emitted from a surface
Luminous exposure        | Hv      | lux second                | lx⋅s        | L−2⋅T⋅J      |
Luminous energy density  | ωv      | lumen second per metre3   | lm⋅s⋅m−3    | L−3⋅T⋅J      |
Luminous efficacy        | η [7]   | lumen per watt            | lm/W        | M−1⋅L−2⋅T3⋅J | ratio of luminous flux to radiant flux
Luminous efficiency      | V       |                           | 1           |              | also called luminous coefficient

See also: SI · Photometry · Radiometry · (Compare)

Photometric versus radiometric quantities

There are two parallel systems of quantities known as photometric and radiometric quantities. Every quantity in one system has an analogous quantity in the other system. Some examples of parallel quantities include:[5]
• Luminance (photometric) and radiance (radiometric)
• Luminous flux (photometric) and radiant flux (radiometric)
• Luminous intensity (photometric) and radiant intensity (radiometric)
In photometric quantities every wavelength is weighted according to how sensitive the human eye is to it, while radiometric quantities use unweighted absolute power. For example, the eye responds much more strongly to green light than to red, so a green source will have greater luminous flux than a red source with the same radiant flux would. Radiant energy outside the visible spectrum does not contribute to photometric quantities at all, so for example a 1000 watt space heater may put out a great deal of radiant flux (1000 watts, in fact), but as a light source it puts out very few lumens (because most of the energy is in the infrared, leaving only a dim red glow in the visible).


SI radiometry quantities [9]

Quantity                              | Symbol               | Unit                                                                  | Unit symbol                    | Dimension           | Notes
Radiant energy                        | Qe [10]              | joule                                                                 | J                              | M⋅L2⋅T−2            | energy
Radiant flux                          | Φe [10]              | watt                                                                  | W                              | M⋅L2⋅T−3            | radiant energy per unit time; also called radiant power
Spectral power                        | Φeλ [10][11]         | watt per metre                                                        | W⋅m−1                          | M⋅L⋅T−3             | radiant power per wavelength
Radiant intensity                     | Ie                   | watt per steradian                                                    | W⋅sr−1                         | M⋅L2⋅T−3            | power per unit solid angle
Spectral intensity                    | Ieλ [11]             | watt per steradian per metre                                          | W⋅sr−1⋅m−1                     | M⋅L⋅T−3             | radiant intensity per wavelength
Radiance                              | Le                   | watt per steradian per square metre                                   | W⋅sr−1⋅m−2                     | M⋅T−3               | power per unit solid angle per unit projected source area; confusingly called "intensity" in some other fields of study
Spectral radiance                     | Leλ [11] or Leν [12] | watt per steradian per metre3, or watt per steradian per square metre per hertz | W⋅sr−1⋅m−3 or W⋅sr−1⋅m−2⋅Hz−1 | M⋅L−1⋅T−3 or M⋅T−2 | commonly measured in W⋅sr−1⋅m−2⋅nm−1 with surface area and either wavelength or frequency
Irradiance                            | Ee [10]              | watt per square metre                                                 | W⋅m−2                          | M⋅T−3               | power incident on a surface; also called radiant flux density; sometimes confusingly called "intensity" as well
Spectral irradiance                   | Eeλ [11] or Eeν [12] | watt per metre3, or watt per square metre per hertz                   | W⋅m−3 or W⋅m−2⋅Hz−1            | M⋅L−1⋅T−3 or M⋅T−2  | commonly measured in W⋅m−2⋅nm−1 or 10−22 W⋅m−2⋅Hz−1, known as the solar flux unit [13]
Radiant exitance / Radiant emittance  | Me [10]              | watt per square metre                                                 | W⋅m−2                          | M⋅T−3               | power emitted from a surface
Spectral radiant exitance / emittance | Meλ [11] or Meν [12] | watt per metre3, or watt per square metre per hertz                   | W⋅m−3 or W⋅m−2⋅Hz−1            | M⋅L−1⋅T−3 or M⋅T−2  | power emitted from a surface per wavelength or frequency
Radiosity                             | Je or Jeλ [11]       | watt per square metre                                                 | W⋅m−2                          | M⋅T−3               | emitted plus reflected power leaving a surface
Radiant exposure                      | He                   | joule per square metre                                                | J⋅m−2                          | M⋅T−2               |
Radiant energy density                | ωe                   | joule per metre3                                                      | J⋅m−3                          | M⋅L−1⋅T−2           |

See also: SI · Radiometry · Photometry · (Compare)


Watts versus lumens

Watts are units of radiant flux while lumens are units of luminous flux. A comparison of the watt and the lumen illustrates the distinction between radiometric and photometric units.

The watt is a unit of power. We are accustomed to thinking of light bulbs in terms of power in watts. This power is not a measure of the amount of light output, but rather indicates how much energy the bulb will use. Because incandescent bulbs sold for "general service" all have fairly similar characteristics (same spectral power distribution), power consumption provides a rough guide to the light output of incandescent bulbs.

Watts can also be a direct measure of output. In a radiometric sense, an incandescent light bulb is about 80% efficient: 20% of the energy is lost (e.g. by conduction through the lamp base). The remainder is emitted as radiation, mostly in the infrared. Thus, a 60 watt light bulb emits a total radiant flux of about 45 watts. Incandescent bulbs are, in fact, sometimes used as heat sources (as in a chick incubator), but usually they are used for the purpose of providing light. As such, they are very inefficient, because most of the radiant energy they emit is invisible infrared. A compact fluorescent lamp can provide light comparable to a 60 watt incandescent while consuming as little as 15 watts of electricity.

The lumen is the photometric unit of light output. Although most consumers still think of light in terms of power consumed by the bulb, in the U.S. it has been a trade requirement for several decades that light bulb packaging give the output in lumens. The package of a 60 watt incandescent bulb indicates that it provides about 900 lumens, as does the package of the 15 watt compact fluorescent.

The lumen is defined as the amount of light delivered into one steradian by a point source of one candela strength, while the candela, a base SI unit, is defined as the luminous intensity of a source of monochromatic radiation of frequency 540 terahertz and radiant intensity of 1/683 watts per steradian. (540 THz corresponds to about 555 nanometres, the wavelength, in the green, to which the human eye is most sensitive. The number 1/683 was chosen to make the candela about equal to the standard candle, the unit which it superseded.) Combining these definitions, we see that 1/683 watt of 555 nanometre green light provides one lumen.

The relation between watts and lumens is not just a simple scaling factor. We know this already, because the 60 watt incandescent bulb and the 15 watt compact fluorescent can both provide 900 lumens. The definition tells us that 1 watt of pure green 555 nm light is "worth" 683 lumens. It does not say anything about other wavelengths. Because lumens are photometric units, their relationship to watts depends on the wavelength according to how visible the wavelength is. Infrared and ultraviolet radiation, for example, are invisible and do not count. One watt of infrared radiation (which is where most of the radiation from an incandescent bulb falls) is worth zero lumens. Within the visible spectrum, wavelengths of light are weighted according to a function called the "photopic spectral luminous efficiency." According to this function, 700 nm red light is only about 0.4% as efficient as 555 nm green light. Thus, one watt of 700 nm red light is "worth" only 2.7 lumens.
Because of the summation over the visual portion of the EM spectrum that is part of this weighting, the unit of "lumen" is color-blind: there is no way to tell what color a lumen will appear. This is equivalent to evaluating groceries by number of bags: there is no information about the specific content, just a number that refers to the total weighted quantity.
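
As a rough illustration of this weighting, the conversion from spectral radiant flux to luminous flux can be written as Φv = 683 lm/W × Σ V(λ) Φe,λ Δλ. The following C sketch assumes a pre-sampled photopic function; the V(λ) values in the example are illustrative stand-ins, not the official CIE table, and the function name is ours.

#include <stdio.h>

/* Luminous flux (lumens) from a sampled spectral power distribution.
 * phi_e[i] is the radiant flux per unit wavelength (W/nm) in bin i,
 * V[i] is the photopic luminous efficiency at that bin's wavelength,
 * dlambda is the bin width in nm. */
double luminous_flux(const double *phi_e, const double *V, int n, double dlambda)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += V[i] * phi_e[i] * dlambda;   /* weight each wavelength */
    return 683.0 * sum;                     /* 683 lm/W at the 555 nm peak */
}

int main(void)
{
    /* One watt concentrated at 555 nm, where V = 1: expect 683 lumens.
     * At 700 nm, where V is about 0.004, the same watt gives ~2.7 lm. */
    double phi_e[1] = { 1.0 };  /* W/nm in a single 1 nm bin */
    double V[1]     = { 1.0 };
    printf("%.0f lm\n", luminous_flux(phi_e, V, 1, 1.0));
    return 0;
}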



Photometric measurement techniques

Photometric measurement is based on photodetectors, devices (of several types) that produce an electric signal when exposed to light. Simple applications of this technology include switching luminaires on and off based on ambient light conditions, and light meters, used to measure the total amount of light incident on a point.

More complex forms of photometric measurement are used frequently within the lighting industry. Spherical photometers can be used to measure the directional luminous flux produced by lamps, and consist of a large-diameter globe with a lamp mounted at its center. A photocell rotates about the lamp in three axes, measuring the output of the lamp from all sides.

Lamps and lighting fixtures are tested using goniophotometers and rotating mirror photometers, which keep the photocell stationary at a sufficient distance that the luminaire can be considered a point source. Rotating mirror photometers use a motorized system of mirrors to reflect light emanating from the luminaire in all directions to the distant photocell; goniophotometers use a rotating 2-axis table to change the orientation of the luminaire with respect to the photocell. In either case, luminous intensity is tabulated from this data and used in lighting design.

Non-SI photometry units

Luminance
• Footlambert
• Millilambert
• Stilb

Illuminance
• Foot-candle
• Phot

Notes
[1] http://www.cvrl.org/database/text/lum/scvl.htm
[2] http://www.cvrl.org/database/text/cmfs/ciexyz31.htm
[3] http://www.cvrl.org/database/text/lum/vljv.htm
[4] http://www.cvrl.org/database/text/lum/ssvl2.htm
[5] Michael Bass (ed.), Handbook of Optics Volume II: Devices, Measurements and Properties, 2nd ed., McGraw-Hill 1995, ISBN 978-0-07-047974-6, pages 24-40 through 24-47.
[6] Standards organizations recommend that photometric quantities be denoted with a suffix "v" (for "visual") to avoid confusion with radiometric or photon quantities.
[7] Alternative symbols sometimes seen: W for luminous energy, P or F for luminous flux, and ρ or K for luminous efficacy.
[8] "J" here is the symbol for the dimension of luminous intensity, not the symbol for the unit joule.
[9] Standards organizations recommend that radiometric quantities be denoted with a suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
[10] Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant emittance.
[11] Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek) to indicate a spectral concentration. Spectral functions of wavelength are indicated by "(λ)" in parentheses instead, for example in spectral transmittance, reflectance and responsivity.
[12] Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek), not to be confused with the suffix "v" (for "visual") indicating a photometric quantity.
[13] NOAA / Space Weather Prediction Center (http://www.swpc.noaa.gov/forecast_verification/F10.html) includes a definition of the solar flux unit (SFU).


External links
• Photometry (http://www.nist.gov/pml/div685/grp03/photometry.cfm) (nist.gov)
• Radiometry and photometry FAQ (http://www.optics.arizona.edu/Palmer/rpfaq/rpfaq.htm): Professor Jim Palmer's Radiometry FAQ page (University of Arizona)

Shadow

A shadow is an area where direct light from a light source cannot reach due to obstruction by an object. It occupies all of the space behind an opaque object with light in front of it. The cross section of a shadow is a two-dimensional silhouette, or reverse projection, of the object blocking the light. The sun causes many objects to have shadows, and at certain times of the day, when the sun is at certain heights, the lengths of shadows change. An astronomical object casts human-visible shadows when its apparent magnitude is equal to or lower than −4.[1] Currently the only astronomical objects able to produce visible shadows on Earth are the sun, the moon and, in the right conditions, Venus or Jupiter.[2]

Variation with time

The length of a shadow cast by the sun changes dramatically throughout the day. The length of a shadow cast on the ground is proportional to the cotangent of the sun's elevation angle, its angle θ relative to the horizon. Near sunrise and sunset, when θ = 0° and cot(θ) is infinite, shadows can be extremely long. If the sun passes directly overhead, then θ = 90°, cot(θ) = 0, and shadows are cast directly underneath objects.
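
A worked version of the cotangent relationship, as a small C sketch; flat ground and a point-like sun are assumed, and the function name is ours:

#include <math.h>
#include <stdio.h>

/* Shadow length on flat ground for an object of the given height,
 * with the sun at elevation angle theta (radians above the horizon):
 * length = height * cot(theta) = height / tan(theta). */
double shadow_length(double height, double theta)
{
    return height / tan(theta);
}

int main(void)
{
    const double PI = 3.14159265358979323846;
    /* A 2 m pole with the sun 30 degrees above the horizon: about 3.46 m. */
    printf("%.2f m\n", shadow_length(2.0, 30.0 * PI / 180.0));
    return 0;
}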

The shadow cast by an old street lamp

Park fence on the snow surface


Non-point source

For a non-point source of light, the shadow is divided into the umbra and penumbra. The wider the light source, the more blurred the shadow. If two penumbras overlap, the shadows appear to attract and merge. This is known as the shadow blister effect. If there are multiple light sources, there are multiple shadows, with the overlapping parts darker, or a combination of colors. For a person or object touching the surface, like a person standing on the ground or a pole in the ground, the shadows converge at the point of contact.

Umbra, penumbra and antumbra

Shadow propagation speed

A steam phase eruption of Castle Geyser in Yellowstone National Park casts a shadow on its own steam. Crepuscular rays are also visible.

The farther the distance from the object blocking the light to the surface of projection, the larger the silhouette (the two are proportional). Also, if the object is moving, the shadow cast by the object will project an image whose dimensions (length) expand proportionally faster than the object's own rate of movement. The increase in size and movement also occurs when the distance between the object of interference and the light source is smaller. This, however, does not mean the shadow may move faster than light, even when projected at vast distances, such as light years. The loss of light, which projects the shadow, will move towards the surface of projection at light speed. The misconception is that the edge of a shadow "moves" along a wall, when in actuality the increase in a shadow's length is part of a new projection, which propagates at the speed of light from the object of interference.

Shadow cast by vapour trail of passing aircraft


Fog shadow of the South Tower of the Golden Gate Bridge

Clouds and shadows over the Mediterranean

Reversed text in shadow

Since there is no actual communication between points in a shadow (except via reflection or interference of light, at the speed of light), a shadow that projects over a surface spanning large distances (light years) cannot convey information between those distances along the shadow's edge.[3]

Color of shadow on Earth

During the daytime, a shadow cast by an opaque object illuminated by sunlight has a bluish tinge. This happens because of Rayleigh scattering, the same property that causes the sky to appear blue. The opaque object is able to block the light of the sun, but not the ambient light of the sky, which is blue because atmospheric molecules scatter blue light more effectively. As a result, the shadow appears bluish.[4]

Fog shadow of Sutro Tower

In photography

In photography, which is essentially recording patterns of light, shade, and colour, "highlights" and "shadows" are the brightest and darkest parts of a scene or image. Photographic exposure must be adjusted (unless special effects are wanted) to allow the film or sensor, which has limited dynamic range, to record detail in the highlights without them being washed out, and in the shadows without their becoming undifferentiated black areas.

Fog shadows

Fog shadows look odd because humans are not used to seeing shadows in three dimensions. The thin fog is just dense enough to be illuminated by the light that passes through the gaps in a structure or in a tree, so the path of an object's shadow through the fog appears darkened. In a sense, these shadow lanes are similar to crepuscular rays, which are caused by cloud shadows, but here they are caused by the shadows of solid objects.

Other notes

A shadow cast by the Earth on the Moon is a lunar eclipse. Conversely, a shadow cast by the Moon on the Earth is a solar eclipse.

On satellite imagery and aerial photographs taken vertically, tall buildings can be recognized as such by their long shadows (if the photographs are not taken in the tropics around noon), and these shadows also show more of the shape of the buildings.

A shadow shows, apart from distortion, the same image as the silhouette when looking at the object from the sun-side, hence the mirror image of the silhouette seen from the other side (see picture).

Jasmine flowers casting soft shadows

Shadow as a term is often used for any occlusion, not just those with respect to light. For example, a rain shadow is a dry area which, with respect to the prevailing wind direction, lies beyond a mountain range; the range "blocks" water from crossing the area. Terrain can likewise create an acoustic shadow, leaving spots where sounds from a distance cannot easily be heard. Sciophobia, or sciaphobia, is the fear of shadows.


Mythological connotations

An unattended shadow or shade was thought by some cultures to be similar to that of a ghost.

Heraldry

In heraldry, when a charge is shown in shadow (the appearance is of the charge merely being outlined in a neutral tint rather than being of one or more tinctures different from the field on which it is placed), it is called umbrated. Supposedly only a limited number of specific charges can be so depicted. A shadow can also be colored if it is cast by a colored transparent object.

References
[1] NASA Science Question of the Week (http://web.archive.org/web/20070627044109/http://www.gsfc.nasa.gov/scienceques2005/20060406.htm). Gsfc.nasa.gov (April 7, 2006). Retrieved on 2013-04-26.
[3] Philip Gibbs (1997), Is Faster-Than-Light Travel or Communication Possible? (http://math.ucr.edu/home/baez/physics/Relativity/SpeedOfLight/FTL.html#3) math.ucr.edu
[4] Question Board – Questions about Light (http://www.pa.uky.edu/sciworks/qlight.htm). Pa.uky.edu. Retrieved on 2013-04-26.

External links
• How the sun casts shadows over the hours of the day (http://www.schoolsobservatory.org.uk/astro/esm/shadows)

Umbra

The umbra, penumbra and antumbra are the names given to three distinct parts of a shadow, created by any light source after impinging on an opaque object. For a point source only the umbra is cast. These names are most often used to refer to the shadows cast by celestial bodies, though they are sometimes used to describe levels of darkness, such as in sunspots.

Umbra, penumbra, and antumbra


Umbra (A) and penumbra (B)

Umbra

The umbra (Latin for "shadow") is the innermost and darkest part of a shadow, where the light source is completely blocked by the occluding body. An observer in the umbra experiences a total eclipse.

Penumbra

The penumbra (from the Latin paene "almost, nearly" and umbra "shadow") is the region in which only a portion of the light source is obscured by the occluding body. An observer in the penumbra experiences a partial eclipse. An alternative definition is that the penumbra is the region where some or all of the light source is obscured (i.e., the umbra is a subset of the penumbra). For example, NASA's Navigation and Ancillary Information Facility defines that a body in the umbra is also within the penumbra.[1]

Example of umbra, penumbra and antumbra outside astronomy

In radiation oncology, the penumbra is the space in the periphery of the main target of radiation therapy, and has been defined as the volume receiving between 80% and 20% of the isodose.[2]

Antumbra

The antumbra (from Latin ante, 'before') is the region from which the occluding body appears entirely contained within the disc of the light source. If an observer in the antumbra moves closer to the light source, the apparent size of the occluding body increases until it causes a full umbra. An observer in this region experiences an annular eclipse, in which a bright ring is visible around the eclipsing body.

Earth's shadow, to scale, showing the extent of the umbral cone beyond the Moon's orbit (yellow dot, also to scale)


References
[1] Event Finding Subsystem Preview (http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/Tutorials/pdf/individual_docs/45_event_finding_preview.pdf) Navigation and Ancillary Information Facility.
[2] Page 55 (http://books.google.com/books?id=4CRZoDk5KWsC&pg=PA55&lpg=PA55) in:

Distance fog

Distance fog is a technique used in 3D computer graphics to enhance the perception of distance by simulating fog. Because many of the shapes in graphical environments are relatively simple, and complex shadows are difficult to render, many graphics engines employ a "fog" gradient so objects further from the camera are progressively more obscured by haze and by aerial perspective.[1] This technique simulates the effect of light scattering, which causes more distant objects to appear lower in contrast, especially in outdoor environments.

Example of distance fog.
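
As a sketch of how an engine might implement this, the usual approach is to compute a fog factor from the fragment's distance and blend the surface color toward the fog color. The linear variant below is illustrative; the type and function names are ours, and exponential falloff (f = e^(−density·d)) is an equally common choice.

/* Linear distance fog: fully clear at fog_start, fully fogged at fog_end. */
typedef struct { float r, g, b; } color3;

static float clampf(float x, float lo, float hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

color3 apply_distance_fog(color3 surface, color3 fog, float dist,
                          float fog_start, float fog_end)
{
    /* f = 1 near the camera, falling to 0 at fog_end and beyond. */
    float f = clampf((fog_end - dist) / (fog_end - fog_start), 0.0f, 1.0f);
    color3 out = {
        f * surface.r + (1.0f - f) * fog.r,
        f * surface.g + (1.0f - f) * fog.g,
        f * surface.b + (1.0f - f) * fog.b,
    };
    return out;
}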

"Fogging" is another use of distance fog in mid-to-late 1990s games, when processing power was not enough to render far viewing distances, and clipping was employed. However, the effect could be very distracting since bits and pieces of polygons would flicker in and out of view instantly, and by applying a medium-ranged fog, the clipped polygons would fade in more realistically from the haze, even though the effect may have been considered unrealistic in some cases (such as dense fog inside of a building). Many early Nintendo 64 and PlayStation games used this effect, as in Turok: Dinosaur Hunter, Bubsy 3D, Star Wars: Rogue Squadron, Tony Hawk's Pro Skater, and Superman. The game Silent Hill uniquely worked fogging into the game's storyline, with the eponymous town being consumed by a dense layer of fog as the result of the player having entered an alternate reality. The application of fogging was so well received as an atmospheric technique that it has appeared in each of the game's sequels, despite improved technology negating it as a graphical necessity.

References


Material Properties

Shading

Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.

Gouraud shading, invented by Henri Gouraud in 1971, was one of the first shading techniques developed in computer graphics.

Drawing

Shading is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading, including cross-hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears. Light patterns, such as objects having light and shaded areas, help when creating the illusion of depth on paper.[1][2]

Example of shading.


Computer graphics

In computer graphics, shading refers to the process of altering the color of an object/surface/polygon in the 3D scene, based on its angle to lights and its distance from lights, to create a photorealistic effect. Shading is performed during the rendering process by a program called a shader.

Angle to light source

Shading alters the colors of faces in a 3D model based on the angle of the surface to a light source or light sources. The first image below has the faces of the box rendered, but all in the same color. Edge lines have been rendered here as well, which makes the image easier to see. The second image is the same model rendered without edge lines. It is difficult to tell where one face of the box ends and the next begins. The third image has shading enabled, which makes the image more realistic and makes it easier to see which face is which.

Rendered image of a box. This image has no shading on its faces, but uses edge lines to separate the faces.

This is the same image with the edge lines removed.

This is the same image rendered with shading of the faces to alter the colors of the 3 faces based on their angle to the light sources.


Lighting

Usually, upon rendering a scene, a number of different lighting techniques will be used to make the rendering look more realistic. For this purpose, a number of different types of light sources exist to provide customization for the shading of objects.

Ambient lighting
Shading is also dependent on lighting. An ambient light source represents a fixed-intensity and fixed-color light source that affects all objects in the scene equally. Upon rendering, all objects in the scene are brightened with the specified intensity and color. This type of light source is mainly used to provide the scene with a basic view of the different objects in it.

Directional lighting
A directional light source illuminates all objects equally from a given direction, like an area light of infinite size and infinite distance from the scene; there is shading, but there cannot be any distance falloff.

Shading effects from a floodlight.

Point lighting
Light originates from a single point and spreads outward in all directions.

Spotlight lighting
Light originates from a single point and spreads outward in a coned direction.

Area lighting
Light originates from a single plane and illuminates all objects in a given direction beginning from that plane.

Volumetric lighting
Light from an enclosed space illuminates the objects within that space.

Shading is interpolated based on the angle at which light from these sources reaches the objects within a scene. Of course, these light sources can be, and often are, combined in a scene. The renderer then interpolates how these lights must be combined, and produces a 2D image to be displayed on the screen accordingly.

Distance falloff

Theoretically, two parallel surfaces are illuminated the same amount by a distant light source, such as the sun. Even though one surface is farther away, your eye sees more of it in the same space, so the illumination appears the same. Notice in the first image that the color on the front faces of the two boxes is exactly the same. It appears that there is a slight difference where the two faces meet, but this is an optical illusion caused by the vertical edge below where the two faces meet. Notice in the second image that the surfaces on the boxes are bright on the front box and darker on the back box. Also, the floor goes from light to dark as it gets farther away. This distance falloff effect produces images which appear more realistic without having to add additional lights to achieve the same effect.


Two boxes rendered with an OpenGL renderer. Note that the colors of the two front faces are the same even though one box is further away.


The same model rendered using ARRIS CAD which implements "Distance Falloff" to make surfaces which are closer to the eye appear brighter.

Distance falloff can be calculated in a number of ways (a small code sketch follows the list):
• None - The light intensity received is the same regardless of the distance between the point and the light source.
• Linear - For a given point at a distance x from the light source, the light intensity received is proportional to 1/x.
• Quadratic - This is how light intensity decreases in reality if the light has a free path (i.e. no fog or any other thing in the air that can absorb or scatter the light). For a given point at a distance x from the light source, the light intensity received is proportional to 1/x².
• Factor of n - For a given point at a distance x from the light source, the light intensity received is proportional to 1/xⁿ.
• Any number of other mathematical functions may also be used.
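
A compact C sketch of these falloff modes; the names are ours, and a real renderer would also clamp x away from zero to avoid division by zero at the light's position:

#include <math.h>

typedef enum { FALLOFF_NONE, FALLOFF_LINEAR, FALLOFF_QUADRATIC, FALLOFF_POWER } falloff_mode;

/* Intensity received at distance x from a source of intensity I0. */
double attenuate(double I0, double x, falloff_mode mode, double n)
{
    switch (mode) {
    case FALLOFF_NONE:      return I0;
    case FALLOFF_LINEAR:    return I0 / x;
    case FALLOFF_QUADRATIC: return I0 / (x * x);   /* physically correct in free space */
    case FALLOFF_POWER:     return I0 / pow(x, n); /* "factor of n" */
    }
    return I0;
}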

Flat shading

Flat shading is a lighting technique used in 3D computer graphics to shade each polygon of an object based on the angle between the polygon's surface normal and the direction of the light source, their respective colors, and the intensity of the light source. It is usually used for high-speed rendering where more advanced shading techniques are too computationally expensive. As a result of flat shading, all of the polygon's vertices are colored with one color, allowing differentiation between adjacent polygons.

Example of flat shading vs. interpolation

Specular highlights are rendered poorly with flat shading: if there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face. If a specular highlight doesn't fall on the representative point, it is missed entirely. Consequently, the specular reflection component is usually not included in flat shading computation.


Smooth shading

Smooth shading of a polygon displays the points in a polygon with smoothly changing colors across the surface of the polygon. This requires defining a separate color for each vertex of the polygon, because the smooth color change is computed by interpolating the vertex colors across the interior of the polygon with the standard kind of interpolation used in the graphics pipeline. Computing the color for each vertex is done with the usual computation of a standard lighting model, but in order to compute the color for each vertex separately you must define a separate normal vector for each vertex of the polygon. This allows the color of the vertex to be determined by the lighting model that includes this unique normal.

Types of smooth shading include:
• Gouraud shading
• Phong shading

Gouraud shading
1. Determine the normal at each polygon vertex.
2. Apply an illumination model to each vertex to calculate the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface polygon (see the code sketch after this section).

Data structures
• Sometimes vertex normals can be computed directly (e.g. a height field with a uniform mesh)
• More generally, a data structure for the mesh is needed
• Key: which polygons meet at each vertex

Advantages
• Polygons more complex than triangles can also have different colors specified for each vertex. In these instances, the underlying logic for shading can become more intricate.

Problems
• Even the smoothness introduced by Gouraud shading may not prevent the appearance of shading differences between adjacent polygons.
• Gouraud shading is more CPU intensive and can become a problem when rendering real-time environments with many polygons.
• T-junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-junctions should be avoided.
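
A minimal sketch of the three steps for one triangle, assuming precomputed vertex normals and a simple Lambert illumination model; the vec3 type and helper names are ours:

typedef struct { double x, y, z; } vec3;

static double dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Step 2: illumination model (here plain Lambert) at one vertex.
 * Both vectors are assumed to be unit length. */
double vertex_intensity(vec3 normal, vec3 to_light)
{
    double d = dot3(normal, to_light);
    return d > 0.0 ? d : 0.0;
}

/* Step 3: linear interpolation of the three vertex intensities at a
 * pixel with barycentric coordinates (b0, b1, b2), b0 + b1 + b2 = 1. */
double gouraud_pixel(const double I[3], double b0, double b1, double b2)
{
    return b0 * I[0] + b1 * I[1] + b2 * I[2];
}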

Phong shading
Phong shading is similar to Gouraud shading, except that the normals are interpolated. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model:
1. Compute a normal N for each vertex of the polygon.
2. From bilinear interpolation compute a normal, Ni, for each pixel. (This must be renormalized each time.)
3. From Ni compute an intensity Ii for each pixel of the polygon.
4. Paint each pixel with the shade corresponding to Ii.
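
The per-pixel normal of step 2 can be sketched the same way; the essential difference from Gouraud shading is that the interpolated quantity is the normal, which must be renormalized before lighting. The vec3 type is as in the Gouraud sketch above; names are ours:

#include <math.h>

typedef struct { double x, y, z; } vec3;   /* as in the Gouraud sketch */

/* Step 2: interpolate the three vertex normals at a pixel with
 * barycentric weights (b0, b1, b2), then renormalize the result. */
vec3 phong_pixel_normal(const vec3 n[3], double b0, double b1, double b2)
{
    vec3 ni = {
        b0*n[0].x + b1*n[1].x + b2*n[2].x,
        b0*n[0].y + b1*n[1].y + b2*n[2].y,
        b0*n[0].z + b1*n[1].z + b2*n[2].z,
    };
    double len = sqrt(ni.x*ni.x + ni.y*ni.y + ni.z*ni.z);
    ni.x /= len; ni.y /= len; ni.z /= len;  /* must be renormalized each time */
    return ni;  /* Ni is then fed to the reflection model to get Ii */
}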


Flat vs. smooth shading

Flat | Smooth
Uses the same color for every pixel in a face, usually the color of the first vertex. | Uses linear interpolation of colors between vertices.
Edges appear more pronounced than they would on a real object because of a phenomenon in the eye known as lateral inhibition. | Edges disappear with this technique.
Same color for any point of the face. | Each point of the face has its own color.
Individual faces are visualized. | The underlying surface is visualized.
Not suitable for smooth objects. | Suitable for any object.
Less expensive. | More expensive.

References

Diffuse reflection

Diffuse reflection is the reflection of light from a surface such that an incident ray is reflected at many angles, rather than at just one angle as in the case of specular reflection. An illuminated ideal diffuse reflecting surface will have equal luminance from all directions which lie in the half-space adjacent to the surface (Lambertian reflectance). A surface built from a non-absorbing powder such as plaster, or from fibers such as paper, or from a polycrystalline material such as white marble, reflects light diffusely with great efficiency. Many common materials exhibit a mixture of specular and diffuse reflection. The visibility of objects, excluding light-emitting ones, is primarily caused by diffuse reflection of light: it is diffusely scattered light that forms the image of the object in the observer's eye.

Diffuse and specular reflection from a glossy surface.[1] The rays represent luminous intensity, which varies according to Lambert's cosine law for an ideal diffuse reflector.


Mechanism

Diffuse reflection from solids is generally not due to surface roughness. A flat surface is indeed required to give specular reflection, but it does not prevent diffuse reflection. A piece of highly polished white marble remains white; no amount of polishing will turn it into a mirror. Polishing produces some specular reflection, but the remaining light continues to be diffusely reflected.

The most general mechanism by which a surface gives diffuse reflection does not strictly involve the surface: most of the light is contributed by scattering centers beneath the surface,[2][3] as illustrated in Figure 1 at right. If one imagines that the figure represents snow, and that the polygons are its (transparent) ice crystallites, an impinging ray is partially reflected (a few percent) by the first particle, enters it, is again reflected by the interface with the second particle, enters it, impinges on the third, and so on, generating a series of "primary" scattered rays in random directions, which, in turn, through the same mechanism, generate a large number of "secondary" scattered rays, which generate "tertiary" rays, and so on.[4] All these rays walk through the snow crystallites, which do not absorb light, until they arrive at the surface and exit in random directions.[5] The result is that the light that was sent out is returned in all directions, so that snow is white despite being made of transparent material (ice crystals).

For simplicity, "reflections" are spoken of here, but more generally the interface between the small particles that constitute many materials is irregular on a scale comparable with the light wavelength, so diffuse light is generated at each interface rather than a single reflected ray, but the story can be told the same way.

Figure 1 – General mechanism of diffuse reflection by a solid surface (refraction phenomena not represented)

Figure 2 – Diffuse reflection from an irregular surface

This mechanism is very general, because almost all common materials are made of "small things" held together. Mineral materials are generally polycrystalline: one can describe them as made of a 3D mosaic of small, irregularly shaped, defective crystals. Organic materials are usually composed of fibers or cells, with their membranes and their complex internal structure. And each interface, inhomogeneity or imperfection can deviate, reflect or scatter light, reproducing the above mechanism. Few materials don't follow it: among them are metals, which do not allow light to enter; gases; liquids; glass and transparent plastics (which have a liquid-like amorphous microscopic structure); single crystals, such as some gems or a salt crystal; and some very special materials, such as the tissues which make up the cornea and the lens of an eye. These materials can reflect diffusely, however, if their surface is microscopically rough, as in frosted glass (Figure 2), or, of course, if their homogeneous structure deteriorates, as in the eye lens.

A surface may also exhibit both specular and diffuse reflection, as is the case, for example, with glossy paints as used in home painting, which give a fraction of specular reflection, while matte paints give almost exclusively diffuse reflection.

Specular vs. diffuse reflection

Virtually all materials can give specular reflection, provided that their surface can be polished to eliminate irregularities comparable with the light wavelength (a fraction of a micrometre). A few materials, like liquids and glasses, lack the internal subdivisions which produce the subsurface scattering mechanism described above, so they can be clear and give only specular reflection (not great, however), while, among common materials, only polished metals can reflect light specularly with great efficiency (the reflecting material of mirrors usually is aluminum or silver). All other common materials, even when perfectly polished, usually give not more than a few percent specular reflection, except in particular cases, such as grazing angle reflection by a lake, or the total internal reflection of a glass prism, or when structured in certain complex configurations such as the silvery skin of many fish species or the reflective surface of a dielectric mirror. Diffuse reflection from white materials, instead, can be highly efficient in giving back all the light they receive, due to the summing up of the many subsurface reflections.

Colored objects

Up to now white objects have been discussed, which do not absorb light. But the above scheme continues to be valid in the case that the material is absorbent. In this case, diffused rays will lose some wavelengths during their walk in the material, and will emerge colored. Moreover, diffusion affects the color of objects in a substantial manner, because it determines the average path of light in the material, and hence the extent to which the various wavelengths are absorbed.[6] Red ink looks black when it stays in its bottle. Its vivid color is only perceived when it is placed on a scattering material (e.g. paper). This is so because light's path through the paper fibers (and through the ink) is only a fraction of a millimetre long. Light coming from the bottle, instead, has crossed centimetres of ink and has been heavily absorbed, even in its red wavelengths.

When a colored object has both diffuse and specular reflection, usually only the diffuse component is colored. A cherry reflects diffusely red light, absorbs all other colors and has a specular reflection which is essentially white. This is quite general, because, except for metals, the reflectivity of most materials depends on their refractive index, which varies little with the wavelength (though it is this variation that causes the chromatic dispersion in a prism), so that all colors are reflected nearly with the same intensity. Reflections of different origin, instead, may be colored: metallic reflections, such as in gold or copper, or interferential reflections: iridescences, peacock feathers, butterfly wings, beetle elytra, or the antireflection coating of a lens.

Importance for vision

Looking at one's surrounding environment, the vast majority of visible objects are seen primarily by diffuse reflection from their surface. This holds with few exceptions, such as glass, reflective liquids, polished or smooth metals, glossy objects, and objects that themselves emit light: the Sun, lamps, and computer screens (which, however, emit diffuse light). Outdoors it is the same, with perhaps the exception of a transparent water stream or of the iridescent colors of a beetle. Additionally, Rayleigh scattering is responsible for the blue color of the sky, and Mie scattering for the white color of the water droplets of clouds. Light scattered from the surfaces of objects is by far the primary light which humans visually observe.


Interreflection

Diffuse interreflection is a process whereby light reflected from an object strikes other objects in the surrounding area, illuminating them. Diffuse interreflection specifically describes light reflected from objects which are not shiny or specular. In real-life terms, this means that light is reflected off non-shiny surfaces such as the ground, walls, or fabric, to reach areas not directly in view of a light source. If the diffuse surface is colored, the reflected light is also colored, resulting in similar coloration of surrounding objects.

In 3D computer graphics, diffuse interreflection is an important component of global illumination. There are a number of ways to model diffuse interreflection when rendering a scene. Radiosity and photon mapping are two commonly used methods.

References
[2] P. Hanrahan and W. Krueger (1993), Reflection from layered surfaces due to subsurface scattering, in SIGGRAPH '93 Proceedings, J. T. Kajiya, Ed., vol. 27, pp. 165–174 (http://www.cs.berkeley.edu/~ravir/6998/papers/p165-hanrahan.pdf).
[3] H. W. Jensen et al. (2001), A practical model for subsurface light transport, in Proceedings of ACM SIGGRAPH 2001, pp. 511–518 (http://www.cs.berkeley.edu/~ravir/6998/papers/p511-jensen.pdf).
[4] Only primary and secondary rays are represented in the figure.
[5] Or, if the object is thin, it can exit from the opposite surface, giving diffuse transmitted light.
[6] Paul Kubelka, Franz Munk (1931), Ein Beitrag zur Optik der Farbanstriche [A contribution to the optics of paints], Zeits. f. Techn. Physik, 12, 593–601; see The Kubelka-Munk Theory of Reflectance (http://web.eng.fiu.edu/~godavart/BME-Optics/Kubelka-Munk-Theory.pdf).

Lambertian reflectance

Lambertian reflectance is the property that defines an ideal diffusely reflecting surface. The apparent brightness of such a surface to an observer is the same regardless of the observer's angle of view. More technically, the surface's luminance is isotropic, and the luminous intensity obeys Lambert's cosine law. Lambertian reflectance is named after Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book Photometria.

Examples

Unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane does not, since the glossy coating creates specular highlights. Not all rough surfaces are Lambertian reflectors, but this is often a good approximation when the characteristics of the surface are unknown. Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance.

Use in computer graphics

In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all closed polygons (such as a triangle within a 3D mesh) to reflect light equally in all directions when rendered. In effect, a point rotated around its normal vector will not change the way it reflects light. However, the point will change the way it reflects light if it is tilted away from its initial normal vector.[1]

The reflection is calculated by taking the dot product of the surface's normal vector, $\mathbf{N}$, and a normalized light-direction vector, $\mathbf{L}$, pointing from the surface to the light source. This number is then multiplied by the color of the surface and the intensity of the light hitting the surface:

$$I_D = (\mathbf{L} \cdot \mathbf{N})\, C\, I_L,$$

where $I_D$ is the intensity of the diffusely reflected light (surface brightness), $C$ is the color and $I_L$ is the intensity of the incoming light. Because

$$\mathbf{L} \cdot \mathbf{N} = |\mathbf{L}|\,|\mathbf{N}| \cos\alpha = \cos\alpha,$$

where $\alpha$ is the angle between the directions of the two vectors, the intensity will be the highest if the normal vector points in the same direction as the light vector ($\alpha = 0°$, the surface is perpendicular to the direction of the light), and the lowest if the normal vector is perpendicular to the light vector ($\alpha = 90°$, the surface runs parallel with the direction of the light).

Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply. This is simulated in computer graphics with various specular reflection models such as Phong, Cook–Torrance, etc.
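
The formula above translates almost directly into code. A minimal C sketch for one color channel, with the clamp that renderers typically add so that surfaces facing away from the light receive none; the type and names are ours:

typedef struct { double x, y, z; } vec3;

static double dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* I_D = (L . N) * C * I_L for one color channel.
 * N: unit surface normal; L: unit vector from surface to light;
 * C: surface color; IL: intensity of the incoming light. */
double lambert_diffuse(vec3 N, vec3 L, double C, double IL)
{
    double cos_a = dot3(L, N);        /* equals cos(alpha) for unit vectors */
    if (cos_a < 0.0) cos_a = 0.0;     /* light from behind contributes nothing */
    return cos_a * C * IL;
}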

Other waves

While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian reflectance.

References

Gouraud shading

Gouraud shading, named after Henri Gouraud, is an interpolation method used in computer graphics to produce continuous shading of surfaces represented by polygon meshes. In practice, Gouraud shading is most often used to achieve continuous lighting on triangle surfaces by computing the lighting at the corners of each triangle and linearly interpolating the resulting colours for each pixel covered by the triangle. Gouraud first published the technique in 1971.[1][2][3]

Gouraud-shaded triangle mesh using the Phong reflection model

Description

Gouraud shading works as follows: an estimate of the surface normal at each vertex in a polygonal 3D model is either specified for each vertex or found by averaging the surface normals of the polygons that meet at each vertex. Using these estimates, lighting computations based on a reflection model, e.g. the Phong reflection model, are then performed to produce colour intensities at the vertices. For each screen pixel that is covered by the polygonal mesh, colour intensities can then be interpolated from the colour values calculated at the vertices.


Comparison with other shading techniques

Gouraud shading is considered superior to flat shading and requires significantly less processing than Phong shading, but usually results in a faceted look.

Comparison of flat shading and Gouraud shading.

In comparison to Phong shading, Gouraud shading's strength and weakness lies in its interpolation. If a mesh covers more pixels in screen space than it has vertices, interpolating colour values from samples of expensive lighting calculations at vertices is less processor intensive than performing the lighting calculation for each pixel as in Phong shading. However, highly localized lighting effects (such as specular highlights, e.g. the glint of reflected light on the surface of an apple) will not be rendered correctly, and if a highlight lies in the middle of a polygon but does not spread to the polygon's vertex, it will not be apparent in a Gouraud rendering; conversely, if a highlight occurs at the vertex of a polygon, it will be rendered correctly at this vertex (as this is where the lighting model is applied), but will be spread unnaturally across all neighboring polygons via the interpolation method.

The problem is easily spotted in a rendering which ought to have a specular highlight moving smoothly across the surface of a model as it rotates. Gouraud shading will instead produce a highlight continuously fading in and out across neighboring portions of the model, peaking in intensity when the intended specular highlight passes over a vertex of the model. The problem can be reduced by increasing the density of vertices in the object (or just near the problem area), but of course this solution applies to any shading paradigm whatsoever; indeed, with an extremely large number of vertices there would be no need for shading concepts at all.

Gouraud-shaded sphere - note the poor behaviour of the specular highlight.

The same sphere rendered with a very high polygon count.


References

Oren–Nayar reflectance model

The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar, is a reflectance model for diffuse reflection from rough surfaces. It has been shown to accurately predict the appearance of a wide range of natural surfaces, such as concrete, plaster, sand, etc.

Introduction

Reflectance is a physical property of a material that describes how it reflects incident light. The appearance of various materials is determined to a large extent by their reflectance properties. Most reflectance models can be broadly classified into two categories: diffuse and specular. In computer vision and computer graphics, the diffuse component is often assumed to be Lambertian. A surface that obeys Lambert's law appears equally bright from all viewing directions. This model for diffuse reflection was proposed by Johann Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision and graphics.

Comparison of a matte vase with the rendering based on the Lambertian model. Illumination is from the viewing direction.

For a large number of real-world surfaces, such as concrete, plaster, sand, etc., however, the Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the Lambertian model does not take the roughness of the surface into account.

Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch. Since photoreceptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic (much larger than the wavelength of incident light) surface roughness is often projected onto a single detection element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert's law may hold well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to violate Lambert's law. The primary reason for this is that the foreshortened facet areas will change for different viewing directions, and thus the surface appearance will be view-dependent.


Analysis of this phenomenon has a long history and can be traced back almost a century. Past work has resulted in empirical models designed to fit experimental data as well as theoretical results derived from first principles. Much of this work was motivated by the non-Lambertian reflectance of the moon. The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar in 1993,[1] predicts reflectance from rough diffuse surfaces for the entire hemisphere of source and sensor directions. The model takes into account complex physical phenomena such as masking, shadowing and interreflections between points on the surface facets. It can be viewed as a generalization of Lambert's law. Today, it is widely used in computer graphics and animation for rendering rough surfaces. It also has important implications for human vision and computer vision problems, such as shape from shading, photometric stereo, etc.

Aggregation of the reflection from rough surfaces

Formulation

The surface roughness model used in the derivation of the Oren–Nayar model is the microfacet model, proposed by Torrance and Sparrow,[2] which assumes the surface to be composed of long symmetric V-cavities. Each cavity consists of two planar facets. The roughness of the surface is specified using a probability function for the distribution of facet slopes. In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, $\sigma^2$, is a measure of the roughness of the surface. The standard deviation of the facet slopes (gradient of the surface elevation), $\sigma$, ranges in $[0, \infty)$.

Diagram of surface reflection

In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. As shown in the image at right, given the radiance of the incoming light $E_0$, the radiance of the reflected light $L_r$, according to the Oren–Nayar model, is

$$L_r = \frac{\rho}{\pi} \cos\theta_i \left( A + B \max\left[0, \cos(\phi_i - \phi_r)\right] \sin\alpha \tan\beta \right) E_0$$

where

$$A = 1 - 0.5\,\frac{\sigma^2}{\sigma^2 + 0.33},$$
$$B = 0.45\,\frac{\sigma^2}{\sigma^2 + 0.09},$$
$$\alpha = \max(\theta_i, \theta_r),$$
$$\beta = \min(\theta_i, \theta_r),$$

and $\rho$ is the albedo of the surface, and $\sigma$ is the roughness of the surface. In the case of $\sigma = 0$ (i.e., all facets in the same plane), we have $A = 1$ and $B = 0$, and thus the Oren–Nayar model simplifies to the Lambertian model:

$$L_r = \frac{\rho}{\pi} \cos\theta_i\, E_0$$
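
A direct transcription of the model into C (angles in radians; the function name is ours, and a production implementation would usually precompute A and B for a fixed roughness):

#include <math.h>

/* Oren-Nayar reflected radiance. theta_i/phi_i: light direction,
 * theta_r/phi_r: view direction, sigma: roughness, rho: albedo,
 * E0: radiance of the incoming light. */
double oren_nayar(double theta_i, double phi_i,
                  double theta_r, double phi_r,
                  double sigma, double rho, double E0)
{
    const double PI = 3.14159265358979323846;
    double s2 = sigma * sigma;
    double A = 1.0 - 0.5 * s2 / (s2 + 0.33);
    double B = 0.45 * s2 / (s2 + 0.09);
    double alpha = fmax(theta_i, theta_r);
    double beta  = fmin(theta_i, theta_r);
    double c = cos(phi_i - phi_r);
    if (c < 0.0) c = 0.0;             /* max[0, cos(phi_i - phi_r)] */
    return (rho / PI) * cos(theta_i)
         * (A + B * c * sin(alpha) * tan(beta)) * E0;
}

Setting sigma to 0 makes A = 1 and B = 0, recovering the Lambertian case above.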

Results

Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the Lambertian and Oren–Nayar models. It shows that the Oren–Nayar model predicts the diffuse reflectance for rough surfaces more accurately than the Lambertian model. Here are rendered images of a sphere using the Oren–Nayar model, corresponding to different surface roughnesses (i.e. different values of $\sigma$):

Plot of the brightness of the rendered images, compared with the measurements on a cross section of the real vase.


Connection with other microfacet reflectance models

Model                               | Surfaces                                 | Facets
Oren–Nayar model                    | rough opaque diffuse surfaces            | each facet is Lambertian (diffuse)
Torrance–Sparrow model              | rough opaque specular (glossy) surfaces  | each facet is a mirror (specular)
Microfacet model for refraction [3] | rough transparent surfaces               | each facet is made of glass (transparent)

References
[1] M. Oren and S. K. Nayar, "Generalization of Lambert's Reflectance Model (http://www1.cs.columbia.edu/CAVE/publications/pdfs/Oren_SIGGRAPH94.pdf)". SIGGRAPH, pp. 239–246, Jul 1994.
[2] Torrance, K. E. and Sparrow, E. M. Theory for off-specular reflection from roughened surfaces. J. Opt. Soc. Am. 57, 9 (Sep 1967), 1105–1114.
[3] B. Walter, et al. "Microfacet Models for Refraction through Rough Surfaces (http://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.html)". EGSR 2007.

External links
• The official project page for the Oren–Nayar model (http://www1.cs.columbia.edu/CAVE/projects/oren/) at Shree Nayar's CAVE research group webpage (http://www.cs.columbia.edu/CAVE/)

Phong shading

Phong shading refers to an interpolation technique for surface shading in 3D computer graphics. It is also called Phong interpolation or normal-vector interpolation shading. Specifically, it interpolates surface normals across rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model.

History

Phong shading and the Phong reflection model were developed at the University of Utah by Bui Tuong Phong, who published them in his 1973 Ph.D. dissertation.[1][2] Phong's methods were considered radical at the time of their introduction, but have since evolved into a baseline shading method for many rendering applications. Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel.

Phong interpolation

Phong shading improves upon Gouraud shading and provides a better approximation of the shading of a smooth surface. Phong shading assumes a smoothly varying surface normal vector. The Phong interpolation method works better than Gouraud shading when applied to a reflection model that has small specular highlights, such as the Phong reflection model.

Phong shading interpolation example

The most serious problem with Gouraud shading occurs when specular highlights are found in the middle of a large polygon. Since these specular highlights are absent from the polygon's vertices and Gouraud shading interpolates based on the vertex colors, the specular highlight will be missing from the polygon's interior. This problem is fixed by Phong shading. Unlike Gouraud shading, which interpolates colors across polygons, in Phong shading a normal vector is linearly interpolated across the surface of the polygon from the polygon's vertex normals. The surface normal is interpolated and normalized at each pixel and then used in a reflection model, e.g. the Phong reflection model, to obtain the final pixel color. Phong shading is more computationally expensive than Gouraud shading since the reflection model must be computed at each pixel instead of at each vertex. In modern graphics hardware, variants of this algorithm are implemented using pixel or fragment shaders.

Phong reflection model

Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model, which is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Bui Tuong Phong's informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large highlights that fall off more gradually. The reflection model also includes an ambient term to account for the small amount of light that is scattered about the entire scene.

Visual illustration of the Phong equation: here the light is white, the ambient and diffuse colors are both blue, and the specular color is white, reflecting a small part of the light hitting the surface, but only in very narrow highlights. The intensity of the diffuse component varies with the direction of the surface, and the ambient component is uniform (independent of direction).

References

[1] B. T. Phong, Illumination for computer generated pictures, Communications of the ACM 18 (1975), no. 6, 311–317.
[2] University of Utah School of Computing, http://www.cs.utah.edu/school/history/#phong-ref

Blinn–Phong shading model

The Blinn–Phong shading model (also called the Blinn–Phong reflection model or modified Phong reflection model) is a modification to the Phong reflection model developed by Jim Blinn.[1] Blinn–Phong is the default shading model used in OpenGL[2] and Direct3D's fixed-function pipeline (before Direct3D 10 and OpenGL 3.1), and is carried out on each vertex as it passes down the graphics pipeline; pixel values between vertices are interpolated by Gouraud shading by default, rather than the more computationally-expensive Phong shading.

Description

In Phong shading, one must continually recalculate the scalar product R · V between a viewer (V) and the beam from a light source (L) reflected (R) on a surface. If, instead, one calculates a halfway vector between the viewer and light-source vectors,

    H = (L + V) / |L + V|

we can replace R · V with N · H, where N is the normalized surface normal. In the above equation, L and V are both normalized vectors, and H is a solution to the equation V = P_h · L, where

    P_h = 2 (H Hᵀ) / (Hᵀ H) − I

is the Householder matrix that reflects a point in the hyperplane that contains the origin and has the normal H.

Vectors for calculating Phong and Blinn–Phong shading

This dot product represents the cosine of an angle that is half of the angle represented by Phong's dot product if V, L, N and R all lie in the same plane. This relation between the angles remains approximately true when the vectors don't lie in the same plane, especially when the angles are small. The angle between N and H is therefore sometimes called the halfway angle.

Considering that the angle between the halfway vector and the surface normal is likely to be smaller than the angle between R and V used in Phong's model (unless the surface is viewed from a very steep angle, for which it is likely to be larger), and since Phong's model uses (R · V)^α, an exponent α′ can be set such that (N · H)^α′ is closer to the former expression.

For front-lit surfaces (specular reflections on surfaces facing the viewer), (N · H)^α′ will result in specular highlights that very closely match the corresponding Phong reflections. However, while the Phong reflections are always round for a flat surface, the Blinn–Phong reflections become elliptical when the surface is viewed from a steep angle. This can be compared to the case where the sun is reflected in the sea close to the horizon, or where a far away street light is reflected in wet pavement, where the reflection will always be much more extended vertically than horizontally.
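The substitution is easy to check numerically. The following Python sketch (illustrative, not from the article; the sample vectors are arbitrary assumptions) computes both dot products for coplanar vectors and shows the halfway angle coming out at half of Phong's angle:

import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c/n for c in v)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

N = (0.0, 0.0, 1.0)                      # surface normal
L = normalize((0.3, 0.0, 1.0))           # direction to the light
V = normalize((-0.5, 0.0, 1.0))          # direction to the viewer

# Phong: reflect L about N, then compare with V
R = tuple(2*dot(N, L)*n - l for n, l in zip(N, L))
# Blinn-Phong: halfway vector between L and V
H = normalize(tuple(l + v for l, v in zip(L, V)))

print(math.degrees(math.acos(dot(R, V))))   # angle between R and V
print(math.degrees(math.acos(dot(N, H))))   # halfway angle, half as large here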

Additionally, while it can be seen as an approximation to the Phong model, it produces more accurate models of empirically determined bidirectional reflectance distribution functions than Phong for many types of surfaces. (See: Experimental Validation of Analytical BRDF Models, Siggraph 2004 [3])

Efficiency

This rendering model is less efficient than pure Phong shading in most cases, since it contains a square root calculation: while the original Phong model only needs a simple vector reflection, this modified form must also compute and normalize the halfway vector. However, as many CPUs and GPUs contain single and double precision square root functions (as standard features) and other instructions that can be used to speed up rendering, the time penalty for this kind of shader will not be noticed in most implementations.

However, Blinn–Phong will be faster in the case where the viewer and light are treated as being at infinity. This is the case for directional lights. In this case, the half-angle vector is independent of position and surface curvature. It can be computed once for each light and then used for the entire frame, or indeed for as long as the light and viewpoint remain in the same relative position. The same is not true with Phong's original reflected light vector, which depends on the surface curvature and must be recalculated for each pixel of the image (or for each vertex of the model in the case of vertex lighting). In most cases where lights are not treated as being at infinity, for instance when using point lights, the original Phong model will be faster.

Code sample

This sample in High Level Shader Language is a method of determining the diffuse and specular light from a point light. The light structure, the position in space of the surface, the view direction vector and the normal of the surface are passed through, and a Lighting structure is returned:

struct Lighting
{
    float3 Diffuse;
    float3 Specular;
};

struct PointLight
{
    float3 position;
    float3 diffuseColor;
    float  diffusePower;
    float3 specularColor;
    float  specularPower;
    float  specularHardness; // specular exponent (declared here; the original sample left it undeclared)
};

Lighting GetPointLight( PointLight light, float3 pos3D, float3 viewDir, float3 normal )
{
    Lighting OUT;
    OUT.Diffuse  = float3( 0, 0, 0 );
    OUT.Specular = float3( 0, 0, 0 );
    if( light.diffusePower > 0 )
    {
        float3 lightDir = light.position - pos3D; // vector from the surface point to the light
        float distance = length( lightDir );
        lightDir = lightDir / distance; // = normalize( lightDir );
        distance = distance * distance; // squared-distance attenuation; may be optimised using inverse square root

        // Intensity of the diffuse light. Saturate to keep within the 0-1 range.
        float NdotL = dot( normal, lightDir );
        float intensity = saturate( NdotL );

        // Calculate the diffuse light factoring in light color, power and the attenuation
        OUT.Diffuse = intensity * light.diffuseColor * light.diffusePower / distance;

        // Calculate the half vector between the light vector and the view vector.
        // This is faster than calculating the actual reflective vector.
        float3 H = normalize( lightDir + viewDir );

        // Intensity of the specular light
        float NdotH = dot( normal, H );
        intensity = pow( saturate( NdotH ), light.specularHardness );

        // Sum up the specular light factoring in color, power and the attenuation
        OUT.Specular = intensity * light.specularColor * light.specularPower / distance;
    }
    return OUT;
}

References

[3] http://people.csail.mit.edu/wojciech/BRDFValidation/index.html

Specular reflection

Specular reflection is the mirror-like reflection of light (or of other kinds of wave) from a surface, in which light from a single incoming direction (a ray) is reflected into a single outgoing direction. Such behavior is described by the law of reflection, which states that the direction of incoming light (the incident ray) and the direction of outgoing light reflected (the reflected ray) make the same angle with respect to the surface normal; thus the angle of incidence equals the angle of reflection (θi = θr in the figure), and the incident, normal, and reflected directions are coplanar. This behavior was first discovered through careful observation and measurement by Hero of Alexandria (AD c. 10–70).[1]

Explanation

Specular reflection is distinct from diffuse reflection, where incoming light is reflected in a broad range of directions. An example of the distinction between specular and diffuse reflection would be glossy and matte paints. Matte paints have almost exclusively diffuse reflection, while glossy paints have both specular and diffuse reflection. A surface built from a non-absorbing powder, such as plaster, can be a nearly perfect diffuser, whereas polished metallic objects can specularly reflect light very efficiently. The reflecting material of mirrors is usually aluminum or silver.

Diagram of specular reflection

Reflections on still water are an example of specular reflection.

Even when a surface exhibits only specular reflection with no diffuse reflection, not all of the light is necessarily reflected. Some of the light may be absorbed by the materials. Additionally, depending on the type of material behind the surface, some of the light may be transmitted through the surface. For most interfaces between materials, the fraction of the light that is reflected increases with increasing angle of incidence θi. If the light is propagating in a material with a higher index of refraction than the material whose surface it strikes, then total internal reflection may occur if the angle of incidence is greater than a certain critical angle. Specular reflection from a dielectric such as water can affect polarization, and at Brewster's angle reflected light is completely linearly polarized parallel to the interface.

The law of reflection arises from diffraction of a plane wave with small wavelength on a flat boundary: when the boundary size is much larger than the wavelength, the electrons of the boundary are seen oscillating exactly in phase only from one direction – the specular direction. If a mirror becomes very small compared to the wavelength, the law of reflection no longer holds and the behavior of light is more complicated.

Waves other than visible light can also exhibit specular reflection. This includes other electromagnetic waves, as well as non-electromagnetic waves. Examples include ionospheric reflection of radiowaves, reflection of radio- or microwave radar signals by flying objects, acoustic mirrors, which reflect sound, and atomic mirrors, which reflect neutral atoms. For the efficient reflection of atoms from a solid-state mirror, very cold atoms and/or grazing incidence are used in order to provide significant quantum reflection; ridged mirrors are used to enhance the specular reflection of atoms.

The reflectivity of a surface is the ratio of reflected power to incident power. The reflectivity is a material characteristic, depends on the wavelength, and is related to the refractive index of the material through Fresnel's equations. In absorbing materials, like metals, it is related to the electronic absorption spectrum through the imaginary component of the complex refractive index. Measurements of specular reflection are performed with normal or varying incidence reflectometers using a scanning variable-wavelength light source. Lower quality measurements using a glossmeter quantify the glossy appearance of a surface in gloss units.

The image in a flat mirror has these features:
• It is the same distance behind the mirror as the object is in front.
• It is the same size as the object.
• It is the right way up (erect).
• It appears to be laterally inverted, in other words left and right reversed.
• It is virtual, meaning that the image appears to be behind the mirror, and cannot be projected onto a screen.

Direction of reflection

The direction of a reflected ray is determined by the vector of incidence and the surface normal vector. Given an incident direction d_i from the surface to the light source and the surface normal direction d_n, the specularly reflected direction d_s (all unit vectors) is:[2][3]

    d_s = 2 (d_n · d_i) d_n − d_i

where d_n · d_i is a scalar obtained with the dot product. Different authors may define the incident and reflection directions with different signs. Assuming these Euclidean vectors are represented in column form, the equation can be equivalently expressed as a matrix-vector multiplication:

    d_s = R d_i

where R is the so-called Householder transformation matrix, defined as:

    R = 2 d_n d_nᵀ − I

Here ᵀ denotes transposition and I is the identity matrix.
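Both forms are easy to verify in a few lines of Python (an illustrative sketch; the vector values are arbitrary assumptions):

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def reflect(d_i, d_n):
    # d_s = 2 (d_n . d_i) d_n - d_i, all unit vectors
    k = 2 * dot(d_n, d_i)
    return tuple(k*n - i for n, i in zip(d_n, d_i))

def householder(d_n):
    # R = 2 d_n d_n^T - I, as a 3x3 nested list
    eye = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
    return [[2*d_n[r]*d_n[c] - eye[r][c] for c in range(3)] for r in range(3)]

d_n = (0.0, 0.0, 1.0)
d_i = (0.6, 0.0, 0.8)                     # unit vector toward the light
print(reflect(d_i, d_n))                  # (-0.6, 0.0, 0.8)
R = householder(d_n)
print(tuple(dot(row, d_i) for row in R))  # same result via R d_i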

Specular highlight

A specular highlight is the bright spot of light that appears on shiny objects when illuminated (for example, see the image at right). Specular highlights are important in 3D computer graphics, as they provide a strong visual cue for the shape of an object and its location with respect to light sources in the scene.

Microfacets

Specular highlights on a pair of spheres.

The term specular means that light is perfectly reflected in a mirror-like way from the light source to the viewer. Specular reflection is visible only where the surface normal is oriented precisely halfway between the direction of incoming light and the direction of the viewer; this is called the half-angle direction because it bisects (divides into halves) the angle between the incoming light and the viewer. Thus, a specularly reflecting surface would show a specular highlight as the perfectly sharp reflected image of a light source. However, many shiny objects show blurred specular highlights.

This can be explained by the existence of microfacets. We assume that surfaces that are not perfectly smooth are composed of many very tiny facets, each of which is a perfect specular reflector. These microfacets have normals that are distributed about the normal of the approximating smooth surface. The degree to which microfacet normals differ from the smooth surface normal is determined by the roughness of the surface. At points on the object where the smooth normal is close to the half-angle direction, many of the microfacets point in the half-angle direction and so the specular highlight is bright. As one moves away from the center of the highlight, the smooth normal and the half-angle direction get farther apart; the number of microfacets oriented in the half-angle direction falls, and so the intensity of the highlight falls off to zero.

The specular highlight often reflects the color of the light source, not the color of the reflecting object. This is because many materials have a thin layer of clear material above the surface of the pigmented material. For example, plastic is made up of tiny beads of color suspended in a clear polymer, and human skin often has a thin layer of oil or sweat above the pigmented cells. Such materials will show specular highlights in which all parts of the color spectrum are reflected equally. On metallic materials such as gold, the color of the specular highlight will reflect the color of the material.

Models of microfacets

A number of different models exist to predict the distribution of microfacets. Most assume that the microfacet normals are distributed evenly around the normal; these models are called isotropic. If microfacets are distributed with a preference for a certain direction along the surface, the distribution is anisotropic.

NOTE: In most equations, when it says (N · H) it means max(0, N · H).

Phong distribution

In the Phong reflection model, the intensity of the specular highlight is calculated as:

    k_spec = (R · V)^n

where R is the mirror reflection of the light vector off the surface, and V is the viewpoint vector. In the Blinn–Phong shading model, the intensity of a specular highlight is calculated as:

    k_spec = (N · H)^n

where N is the smooth surface normal and H is the half-angle direction (the direction vector midway between L, the vector to the light, and V, the viewpoint vector). The number n is called the Phong exponent, and is a user-chosen value that controls the apparent smoothness of the surface. These equations imply that the distribution of microfacet normals is an approximately Gaussian distribution (for large n), or approximately Pearson type II distribution, of the corresponding angle.[1] While this is a useful heuristic and produces believable results, it is not a physically based model.

Another similar formula computes the same quantity differently, by reflecting the eye vector instead of the light vector:

    k_spec = (L · R)^n, with R = 2 (N · E) N − E

where R is an eye reflection vector, E is an eye vector (view vector), N is the surface normal vector, and L is a light vector; all vectors are normalized (unit length). An approximate formula uses the halfway vector instead of a reflection vector: if the vector H = (L + E) / |L + E| is normalized, then

    k_spec ≈ (N · H)^n′

for a suitably chosen (larger) exponent n′, as in the Blinn–Phong model above.

Gaussian distribution

A slightly better model of microfacet distribution can be created using a Gaussian distribution.[citation needed] The usual function calculates specular highlight intensity as:

    k_spec = exp(−(α / m)²)

where α is the angle between N and H, and m is a constant between 0 and 1 that controls the apparent smoothness of the surface.[2]
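For comparison, a small Python sketch (not from the article; the geometry and constants are assumptions) evaluates the Phong, Blinn–Phong, and Gaussian terms for one configuration of unit vectors:

import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v)); return tuple(c/n for c in v)

def dot(a, b): return sum(x*y for x, y in zip(a, b))

N = (0.0, 0.0, 1.0)
L = normalize((0.2, 0.0, 1.0))
V = normalize((-0.3, 0.0, 1.0))
R = tuple(2*dot(N, L)*c - l for c, l in zip(N, L))   # mirror reflection of L
H = normalize(tuple(l + v for l, v in zip(L, V)))    # half-angle direction

n, m = 50, 0.35                                      # assumed exponent and smoothness
phong       = max(0.0, dot(R, V)) ** n
blinn_phong = max(0.0, dot(N, H)) ** n
alpha       = math.acos(max(-1.0, min(1.0, dot(N, H))))  # angle between N and H
gauss       = math.exp(-(alpha / m) ** 2)
print(phong, blinn_phong, gauss)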

Beckmann distribution

A physically based model of microfacet distribution is the Beckmann distribution:[3]

    k_spec = exp(−tan²α / m²) / (π m² cos⁴α),  where α is the angle between N and H

where m is the rms slope of the surface microfacets (the roughness of the material).[4] Compared to the empirical models above, this function "gives the absolute magnitude of the reflectance without introducing arbitrary constants; the disadvantage is that it requires more computation".[5] However, this model can be simplified since tan²α = (1 − cos²α) / cos²α. Also note that the product of cos α and a surface distribution function is normalized over the half-sphere, which is obeyed by this function.
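A direct transcription of the Beckmann term as a Python sketch, using the simplification just noted (the roughness value is an arbitrary assumption):

import math

def beckmann(cos_alpha, m):
    # k_spec = exp(-tan^2(alpha) / m^2) / (pi * m^2 * cos^4(alpha))
    tan2 = (1.0 - cos_alpha**2) / cos_alpha**2   # tan^2 via the simplification above
    return math.exp(-tan2 / m**2) / (math.pi * m**2 * cos_alpha**4)

print(beckmann(math.cos(math.radians(5.0)), 0.35))   # near the highlight center
print(beckmann(math.cos(math.radians(45.0)), 0.35))  # far from it: much smaller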

Heidrich–Seidel anisotropic distribution

The Heidrich–Seidel distribution is a simple anisotropic distribution, based on the Phong model. It can be used to model surfaces that have small parallel grooves or fibers, such as brushed metal, satin, and hair. The specular highlight intensity for this distribution is:

    k_spec = (sin(L, T) sin(V, T) − (L · T)(V · T))^n

where n is the anisotropic exponent, V is the viewing direction, L is the direction of incoming light, and T is the direction parallel to the grooves or fibers at this point on the surface. If you have a unit vector D which specifies the global direction of the anisotropic distribution, you can compute the vector T at a given point by the following:

    T = normalize(D − (D · N) N)

where N is the unit normal vector at that point on the surface. The cosine of the angle between two unit vectors is simply their dot product, and the sine can then be obtained from the trigonometric identity sin θ = √(1 − cos²θ).

The anisotropic k_spec should be used in conjunction with a non-anisotropic distribution like a Phong distribution to produce the correct specular highlight.
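A sketch of this distribution in Python, under the same naming as above (the groove direction D and the sample vectors are assumptions):

import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v)); return tuple(c/n for c in v)

def dot(a, b): return sum(x*y for x, y in zip(a, b))

def heidrich_seidel(L, V, N, D, n):
    # T: global anisotropy direction D projected into the tangent plane
    T = normalize(tuple(d - dot(D, N)*c for d, c in zip(D, N)))
    lt, vt = dot(L, T), dot(V, T)
    sin_lt = math.sqrt(max(0.0, 1.0 - lt*lt))   # sine from the identity above
    sin_vt = math.sqrt(max(0.0, 1.0 - vt*vt))
    return max(0.0, sin_lt*sin_vt - lt*vt) ** n

N = (0.0, 0.0, 1.0)
D = (1.0, 0.0, 0.0)                              # grooves run along x
print(heidrich_seidel(normalize((0.2, 0.1, 1.0)),
                      normalize((-0.4, 0.2, 1.0)), N, D, 16))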

Ward anisotropic distribution

The Ward anisotropic distribution[6] uses two user-controllable parameters α_x and α_y to control the anisotropy. If the two parameters are equal, an isotropic highlight results. The specular term in the distribution is:

    k_spec = (N · L) / (4π α_x α_y √((N · L)(N · R))) · exp(−2 ((H · X / α_x)² + (H · Y / α_y)²) / (1 + H · N))

The specular term is zero if N · L < 0 or N · R < 0. All vectors are unit vectors. The vector R is the mirror reflection of the light vector off the surface, L is the direction from the surface point to the light, H is the half-angle direction, N is the surface normal, and X and Y are two orthogonal vectors in the normal plane which specify the anisotropic directions.
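The same term transcribed as a Python sketch (the vector values and the α parameters are arbitrary assumptions; the term is returned as zero in the excluded cases):

import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v)); return tuple(c/n for c in v)

def dot(a, b): return sum(x*y for x, y in zip(a, b))

def ward(L, V, N, X, Y, ax, ay):
    nl = dot(N, L)
    R = tuple(2*nl*c - l for c, l in zip(N, L))      # mirror reflection of L
    nr = dot(N, R)
    if nl < 0 or nr < 0:
        return 0.0                                   # term is defined as zero here
    H = normalize(tuple(l + v for l, v in zip(L, V)))
    e = -2.0 * ((dot(H, X)/ax)**2 + (dot(H, Y)/ay)**2) / (1.0 + dot(H, N))
    return nl / (4.0*math.pi*ax*ay) / math.sqrt(nl*nr) * math.exp(e)

N, X, Y = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(ward(normalize((0.2, 0.0, 1.0)), normalize((-0.2, 0.0, 1.0)),
           N, X, Y, 0.15, 0.5))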

Cook–Torrance model

The Cook–Torrance model[5] uses a specular term of the form

    k_spec = (D F G) / (π (E · N)(N · L))

Here D is the Beckmann distribution factor as above and F is the Fresnel term. For performance reasons, in real-time 3D graphics Schlick's approximation is often used to approximate the Fresnel term:

    F ≈ F₀ + (1 − F₀)(1 − E · H)⁵

G is the geometric attenuation term, describing self-shadowing due to the microfacets, and is of the form

    G = min(1, 2 (H · N)(E · N) / (E · H), 2 (H · N)(L · N) / (E · H))

In these formulas, E is the vector to the camera or eye, H is the half-angle vector, L is the vector to the light source, N is the normal vector, and α is the angle between H and N.
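Putting the three factors together, a minimal Python sketch (illustrative only; it reuses the Beckmann D from above, Schlick's approximation for F with an assumed F₀, and the G term as given):

import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v)); return tuple(c/n for c in v)

def dot(a, b): return sum(x*y for x, y in zip(a, b))

def cook_torrance(L, E, N, m, f0):
    H = normalize(tuple(l + e for l, e in zip(L, E)))
    nh, ne, nl, eh = dot(N, H), dot(N, E), dot(N, L), dot(E, H)
    # D: Beckmann distribution from the section above
    tan2 = (1.0 - nh*nh) / (nh*nh)
    D = math.exp(-tan2 / m**2) / (math.pi * m**2 * nh**4)
    # F: Schlick's approximation to the Fresnel term (f0 is an assumed base reflectance)
    F = f0 + (1.0 - f0) * (1.0 - eh)**5
    # G: geometric attenuation (self-shadowing by microfacets)
    G = min(1.0, 2*nh*ne/eh, 2*nh*nl/eh)
    return D * F * G / (math.pi * ne * nl)

N = (0.0, 0.0, 1.0)
print(cook_torrance(normalize((0.3, 0.0, 1.0)),
                    normalize((-0.3, 0.0, 1.0)), N, 0.3, 0.04))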

Using multiple distributions

If desired, different distributions (usually, using the same distribution function with different values of m or n) can be combined using a weighted average. This is useful for modelling, for example, surfaces that have small smooth and rough patches rather than uniform roughness.

References

[1] Richard Lyon, "Phong Shading Reformulation for Hardware Renderer Simplification", Apple Technical Report #43, Apple Computer, Inc., 1993. PDF: http://dicklyon.com/tech/Graphics/Phong_TR-Lyon.pdf
[2] Glassner, Andrew S. (ed.). An Introduction to Ray Tracing. San Diego: Academic Press Ltd, 1989. p. 148.
[3] Petr Beckmann, André Spizzichino, The Scattering of Electromagnetic Waves from Rough Surfaces, Pergamon Press, 1963, 503 pp. (Republished by Artech House, 1987, ISBN 978-0-89006-238-8.)
[4] Foley et al. Computer Graphics: Principles and Practice. Menlo Park: Addison-Wesley, 1997. p. 764.
[5] R. Cook and K. Torrance. "A reflectance model for computer graphics" (http://inst.eecs.berkeley.edu/~cs283/sp13/lectures/cookpaper.pdf). Computer Graphics (SIGGRAPH '81 Proceedings), Vol. 15, No. 3, July 1981, pp. 301–316.
[6] http://radsite.lbl.gov/radiance/papers/

Retroreflector

A gold corner cube retroreflector. Uses: distance measurement by optical delay line.

A retroreflector (sometimes called a retroflector or cataphote) is a device or surface that reflects light back to its source with a minimum of scattering. An electromagnetic wave front is reflected back along a vector that is parallel to, but opposite in direction from, the wave's source. This holds even when the device or surface's angle of incidence is greater than zero, unlike a planar mirror, which reflects light back to its source only if the mirror is exactly perpendicular to the wave front, that is, at a zero angle of incidence.

Types of retroreflectors

There are several ways to obtain retroreflection:[1]

Corner reflector

A set of three mutually perpendicular reflective surfaces, placed to form the corner of a cube, works as a retroreflector. The three corresponding normal vectors of the corner's sides form a basis (x, y, z) in which to represent the direction of an arbitrary incoming ray, [a, b, c]. When the ray reflects from the first side, say x, the ray's x component, a, is reversed to −a, while the y and z components are unchanged. Therefore, as the ray reflects first from side x, then side y, and finally from side z, the ray direction goes from [a, b, c] to [−a, b, c] to [−a, −b, c] to [−a, −b, −c], and it leaves the corner with all three components of motion exactly reversed.
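The sign flips can be demonstrated in a few lines of Python (an illustrative sketch; the incoming direction is arbitrary):

def reflect_component(ray, axis):
    # A mirror normal to one coordinate axis negates that component only.
    return tuple(-c if i == axis else c for i, c in enumerate(ray))

ray = (0.5, -0.3, 0.8)            # arbitrary incoming direction [a, b, c]
for axis in (0, 1, 2):            # bounce off the x, y, then z face
    ray = reflect_component(ray, axis)
print(ray)                        # (-0.5, 0.3, -0.8): exactly reversed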

Working principle of a corner reflector

Corner reflectors occur in two varieties. In the more common form, the corner is literally the truncated corner of a cube of transparent material such as conventional optical glass. In this structure, the reflection is achieved either by total internal reflection or silvering of the outer cube surfaces. The second form uses mutually perpendicular flat mirrors bracketing an air space. These two types have similar optical properties.

A large relatively thin retroreflector can be formed by combining many small corner reflectors, using the standard triangular tiling.

Comparison of the effect of corner (1) and spherical (2) retroreflectors on three light rays. Reflective surfaces are drawn in dark blue.

Cat's eye

Another common type of retroreflector consists of refracting optical elements with a reflective surface, arranged so that the focal surface of the refractive element coincides with the reflective surface, typically a transparent sphere and a spherical mirror. This same effect can be optimally achieved with a single transparent sphere when the refractive index of the material is exactly two times the refractive index of the medium from which the radiation is incident. In that case, the sphere surface behaves as a concave spherical mirror with the required curvature for retroreflection. The refractive index need not be twice the ambient but can be anything exceeding 1.5 times as high; due to spherical aberration, there exists a radius from the centerline at which incident rays are focused at the center of the rear surface of the sphere.

Eyeshine from retroreflectors of the transparent sphere type is clearly visible in this cat's eyes

The term cat's eye derives from the resemblance of the cat's eye retroreflector to the optical system that produces the well-known phenomenon of "glowing eyes" or eyeshine in cats and other vertebrates (which are only reflecting light, rather than actually glowing). The combination of the eye's lens and the cornea form the refractive converging system, while the tapetum lucidum behind the retina forms the spherical concave mirror. Because the function of the eye is to form an image on the retina, an eye focused on a distant object has a focal surface that approximately follows the reflective tapetum lucidum structure,[citation needed] which is the condition required to form a good retroreflection. This type of retroreflector can consist of many small versions of these structures incorporated in a thin sheet or in paint. In the case of paint containing glass beads, the paint glues the beads to the surface where retroreflection is required and the beads protrude, their diameter being about twice the thickness of the paint.

Phase-conjugate mirror

A third, much less common way of producing a retroreflector is to use the nonlinear optical phenomenon of phase conjugation. This technique is used in advanced optical systems such as high-power lasers and optical transmission lines. Phase-conjugate mirrors require a comparatively expensive and complex apparatus, as well as large quantities of power (as nonlinear optical processes can be efficient only at high enough intensities). However, phase-conjugate mirrors have an inherently much greater accuracy in the direction of the retroreflection, which in passive elements is limited by the mechanical accuracy of the construction.

Operation

Retroreflectors are devices that operate by returning light back to the light source along the same light direction. The coefficient of luminous intensity, RI, is the measure of a reflector's performance, defined as the ratio of the strength of the reflected light (luminous intensity) to the amount of light that falls on the reflector (normal illuminance). A reflector appears brighter as its RI value increases.[1] The RI value of the reflector is a function of the color, size, and condition of the reflector. Clear or white reflectors are the most efficient, and appear brighter than other colors. The RI value is proportional to the surface area of the reflector, increasing as the reflective surface increases.[1]

Figure 1 - Observation angle

Figure 2 - Entrance angle

Bicycle retroreflectors

The RI value is also a function of the spatial geometry between the observer, light source, and reflector. Figures 1 and 2 show the observation angle and entrance angle between the automobile's headlights, the bicycle, and the driver. The observation angle is the angle formed by the light beam and the driver's line of sight. Observation angle is a function of the distance between the headlights and the driver's eye, and the distance to the reflector. Traffic engineers use an observation angle of 0.2 degrees to simulate a reflector target about 800 feet in front of a passenger automobile. As the observation angle increases, the reflector performance decreases. For example, a truck has a large separation between the headlight and the driver's eye compared to a passenger vehicle. A bicycle reflector appears brighter to the passenger car driver than to the truck driver at the same distance from the vehicle to the reflector.[1]

The light beam and the normal axis of the reflector, as shown in Figure 2, form the entrance angle. The entrance angle is a function of the orientation of the reflector to the light source. For example, the entrance angle for an automobile approaching a bicycle at an intersection 90 degrees apart is larger than the entrance angle for a bicycle directly in front of an automobile on a straight road. The reflector appears brightest to the observer when it is directly in line with the light source.[1]

The brightness of a reflector is also a function of the distance between the light source and the reflector. At a given observation angle, as the distance between the light source and the reflector decreases, the light that falls on the reflector increases. This increases the amount of light returned to the observer and the reflector appears brighter.[1]

Applications

Retroreflectors on roads

Retroreflectors are clearly visible in a pair of bicycle shoes. The light source is a flash a few centimeters above the camera lens.

Retroreflection (sometimes called retroflection) is used on road surfaces, road signs, vehicles, and clothing (large parts of the surface of special safety clothing, less on regular coats). When the headlights of a car illuminate a retroreflective surface, the reflected light is directed towards the car and its driver (rather than in all directions as with diffuse reflection). However, a pedestrian can see retroreflective surfaces in the dark only if there is a light source directly between them and the reflector (e.g., via a flashlight they carry) or directly behind them (e.g., via a car approaching from behind). "Cat's eyes" are a particular type of retroreflector embedded in the road surface and are used mostly in the UK and parts of the United States.

Corner reflectors are better at sending the light back to the source over long distances, while spheres are better at sending the light to a receiver somewhat off-axis from the source, as when the light from headlights is reflected into the driver's eyes.

Retroreflectors can be embedded in the road (level with the road surface), or they can be raised above the road surface. Raised reflectors are visible for very long distances (typically 0.5-1 kilometer or more), while sunken reflectors are visible only at very close ranges due to the higher angle required to properly reflect the light. Raised reflectors are generally not used in areas that regularly experience snow during winter, as passing snowplows can tear them off the roadways. Stress on roadways caused by cars running over embedded objects also contributes to accelerated wear and pothole formation.

Retroreflective road paint is thus very popular in Canada and parts of the United States, as it is not affected by the passage of snowplows and does not affect the interior of the roadway. Where weather permits, embedded or raised retroreflectors are preferred, as they last much longer than road paint, which is weathered by the elements, can be obscured by sediment or rain, and is ground away by the passage of vehicles.

Retroreflectivity for Road Signs

Reflectivity is light reflected from a source to a surface and returned to its original source. For traffic signs and vehicle operators, the light source is a vehicle's headlights, where the light is sent to the traffic sign face and then returned to the vehicle operator. Traffic signs are manufactured with retroreflective sheeting so that the traffic sign is visible at night. Reflective sign faces are manufactured with glass beads or prismatic reflectors embedded in the sheeting so that the face reflects light, making the sign appear brighter and more visible to the vehicle operator.

According to the National Highway Traffic Safety Administration (NHTSA), the Traffic Safety Facts 2000 publication states that the fatal crash rate is 3-4 times higher during nighttime crashes than daytime incidents. A common misconception is that retroreflectivity is only important during night-time travel; however, in recent years, more states and agencies have required headlights to be used during inclement weather, such as rain and snow. According to the Federal Highway Administration (FHWA), approximately 24% of all vehicle accidents occur during adverse weather (rain, sleet, snow and fog). Rain conditions account for 47% of weather-related

accidents. These statistics are based on 14-year averages from 1995 to 2008. The MUTCD requires signs to be either illuminated or made with retroreflective sheeting materials, and although most signs in the U.S. are made with retroreflective sheeting materials, they degrade over time, resulting in a shorter life span. Until now, there has been little information available to determine how long the retroreflectivity lasts. The MUTCD now requires that agencies maintain traffic signs to a set of minimum levels, but it provides a variety of maintenance methods that agencies can use for compliance. The minimum retroreflectivity requirements do not imply that an agency must measure every sign. Rather, the new MUTCD language describes methods that agencies can use to maintain traffic sign retroreflectivity at or above the minimum levels. There is a visual assessment program which can be used in conjunction with an inspector, with age limitations on the inspector, who will use comparison panels and specific types of vehicles during testing. There has been a great deal of debate as to whether this type of assessment and management will hold up against liability and litigation. Many agencies are reluctant to use this method because the data is open to interpretation, and different observers may judge the same sign differently.

Retroreflectors on the Moon

Astronauts on the Apollo 11, 14, and 15 missions left retroreflectors on the Moon as part of the Lunar Laser Ranging Experiment. They are considered to prove conclusively that man-made equipment is present on the Moon[2] and thus disprove some Moon landing hoax accusations. Additionally, the Soviet Lunokhod 1 and Lunokhod 2 rovers carried smaller arrays. Reflected signals were initially received from Lunokhod 1, but no return signals were detected from 1971 until 2010, at least in part due to some uncertainty in its location on the Moon. In 2010, it was found in Lunar Reconnaissance Orbiter photographs and the retroreflectors have been used again. Lunokhod 2's array continues to return signals to Earth.[3] Even under good viewing conditions, only a single reflected photon is received every few seconds. This makes the job of filtering laser-generated photons from naturally occurring photons challenging.[4]

The Apollo 11 Lunar Laser Ranging Experiment

Retroreflectors in Earth orbit

LAGEOS and STARSHINE

LAGEOS, or Laser Geodynamics Satellites, are a series of scientific research satellites designed to provide an orbiting laser ranging benchmark for geodynamical studies of the Earth. There are two LAGEOS spacecraft: LAGEOS-1 (launched in 1976) and LAGEOS-2 (launched in 1992). They use cube-corner retroreflectors made of fused silica glass. As of 2004, both LAGEOS spacecraft are still in service. Three STARSHINE satellites equipped with retroreflectors were launched beginning in 1999. The LARES satellite was launched on February 13, 2012. (See also List of passive satellites.)

BLITS

BLITS (Ball Lens In The Space), a spherical retroreflector satellite, was placed into orbit as part of a September 2009 Soyuz launch[5] by the Federal Space Agency of Russia with the assistance of the International Laser Ranging Service, an independent body originally organized by the International Association of Geodesy, the International Astronomical Union, and international committees.[6] The ILRS central bureau is located at the United States' Goddard Space Flight Center. The reflector, a type of Luneburg lens, was developed and manufactured by the Institute for Precision Instrument Engineering (IPIE) in Moscow. The purpose of the mission was to validate the spherical glass retroreflector satellite concept and obtain SLR (Satellite Laser Ranging) data for solution of scientific problems in geophysics, geodynamics, and relativity. BLITS allows millimeter and submillimeter accuracy SLR measurements, as its "target error" (uncertainty of the reflection center relative to its center of mass) is less than 0.1 mm. An additional advantage is that the Earth's magnetic field does not affect the satellite orbit and spin parameters, unlike retroreflectors incorporated into active satellites. BLITS allows the most accurate measurements of any SLR satellite, with the same accuracy level as a ground target.[7]

The actual satellite is a solid sphere around 17 cm in diameter, weighing 7.63 kg. It is made with two hemispherical shells (outer radius 85.16 mm) of low-refractive-index glass (n=1.47) and an inner sphere or ball lens (radius 53.52 mm) made of a high-refractive-index glass (n=1.76). The hemispheres are glued over the ball lens with all spherical surfaces concentric; the external surface of one hemisphere is coated with aluminum and protected by a varnish layer. It was designed for ranging with a green (532 nm) laser. When used for ranging, the phase center is 85.16 mm behind the sphere center, with a range correction of +196.94 mm taking into account the indices of refraction.[] A smaller spherical retroreflector of the same type but 6 cm in diameter was fastened to the Meteor-3M spacecraft and tested during its space flight of 2001–2006. Before a collision with space debris, the satellite was in a sun-synchronous circular orbit 832 km high, with an inclination of 98.77 degrees, an orbital period of 101.3 min, and its own spin period of 5.6 seconds.[]

In early 2013, the satellite was found to have a new orbit 120 m lower, a faster spin period of 2.1 seconds, and a different spin axis.[8] The change was traced back to an event that occurred 22 Jan 2013 at 07:57 UTC; data from the United States' Space Surveillance Network showed that within 10 seconds of that time BLITS was close to the predicted path of a fragment of the former Chinese Fengyun-1C satellite, with a relative velocity of 9.6 km/s between them. The Chinese government destroyed the Fengyun-1C, at an altitude of 865 km, on 11 Jan 2007 as a test of an anti-satellite missile, which resulted in 2,300 to 15,000 debris pieces.

Retroreflectors and communications

Modulated retroreflectors, in which the reflectance is changed over time by some means, are the subject of research and development for free-space optical communications networks. The basic concept of such systems is that a low-power remote system, such as a sensor mote, can receive an optical signal from a base station and reflect the modulated signal back to the base station. Since the base station supplies the optical power, this allows the remote system to communicate without excessive power consumption. Modulated retroreflectors also exist in the form of modulated phase-conjugate mirrors (PCMs). In the latter case, a "time-reversed" wave is generated by the PCM with temporal encoding of the phase-conjugate wave (see, e.g., SciAm, Oct. 1990, "The Photorefractive Effect," David M. Pepper, et al.).

Inexpensive corner-aiming retroreflectors are used in user-controlled technology as optical datalink devices. Aiming is done at night, and the necessary retroreflector area depends on aiming distance and ambient lighting from street lamps. The optical receiver itself behaves as a weak retroreflector because it contains a large, precisely focused lens that detects illuminated objects in its focal plane. This allows aiming without a retroreflector for short ranges. A single biological instance of this is known: in flashlight fish of the family Anomalopidae (see Tapetum lucidum).

Retroreflectors and ships, boats, emergency gear

Retroreflective tape is recognized and recommended by the International Convention for the Safety of Life at Sea (SOLAS) because of its high reflectivity of both light and radar signals. Application to life rafts, personal flotation devices, and other safety gear makes it easy to locate people and objects in the water at night. When applied to boat surfaces it creates a much larger radar signature, particularly for fiberglass boats, which produce very little radar reflection on their own. It conforms to International Maritime Organization regulation IMO Res. A.658 (16) and meets U.S. Coast Guard specification 46 CFR Part 164, Subpart 164.018/5/0. Examples of commercially available products are 3M part numbers 3150A and 6750I.

Other uses

Retroreflectors are used in the following example applications:
• In surveying with a total station or robot, the instrument man or robot aims a laser beam at a corner cube retroreflector held by the rodman. The instrument measures the propagation time of the light and converts it to a distance.
• In Canada, aerodrome lighting can be replaced by appropriately coloured retroreflectors, the most important of which are the white retroreflectors that delineate the runway edges; these must be visible to aircraft equipped with landing lights up to 2 nautical miles away.[9]
• In common (non-SLR) digital cameras, where the sensor system is retroreflective. Researchers have used this property to demonstrate a system to prevent unauthorized photographs by detecting digital cameras and beaming a highly focused beam of light into the lens.[10]
• In movie screens to allow for high brilliance under dark conditions.[11]
• Digital compositing programs and chroma environments use retroreflection to replace traditional lit backdrops in composite work, as they provide a more solid colour without requiring the backdrop to be lit separately.[12]

Notes

[1] U.S. Consumer Product Safety Commission Bicycle Reflector Project report (http://www.cpsc.gov/volstd/bike/BikeReport.pdf)
[3] http://ilrs.gsfc.nasa.gov/docs/williams_lw13.pdf
[9] Transport Canada CARs 301.07 (http://www.tc.gc.ca/eng/civilaviation/regserv/cars/part3-301-155.htm#301_07)
[10] ABC News: Device Seeks to Jam Covert Digital Photographers (http://abcnews.go.com/Technology/FutureTech/story?id=1139800&page=1)
[11] Howstuffworks "Altered Reality" (http://science.howstuffworks.com/invisibility-cloak1.htm)

References

• Optics Letters, Vol. 4, pp. 190–192 (1979), "Retroreflective Arrays as Approximate Phase Conjugators," by H.H. Barrett and S.F. Jacobs.
• Optical Engineering, Vol. 21, pp. 281–283 (March/April 1982), "Experiments with Retrodirective Arrays," by Stephen F. Jacobs.
• Scientific American, December 1985, "Phase Conjugation," by Vladimir Shkunov and Boris Zel'dovich.
• Scientific American, January 1986, "Applications of Optical Phase Conjugation," by David M. Pepper.
• Scientific American, April 1986, "The Amateur Scientist" ('Wonders with the Retroreflector'), by Jearl Walker.
• Scientific American, October 1990, "The Photorefractive Effect," by David M. Pepper, Jack Feinberg, and Nicolai V. Kukhtarev.

External links

• Apollo 15 Laser Ranging Retroreflector Experiment (http://www.lpi.usra.edu/expmoon/Apollo15/A15_Experiments_LRRR.html)
• Manual of Traffic Signs - Retroreflective Sheetings Used for Sign Faces (http://www.trafficsign.us/signsheet.html)
• Motorcycle retroreflective Sheeting (http://dr650.zenseeker.net/ReflectiveTape.htm)
• Lunar retroflectors (http://physics.ucsd.edu/~tmurphy/apollo/lrrr.html)
• Howstuffworks article on retroreflector-based invisibility cloaks (http://science.howstuffworks.com/invisibility-cloak.htm)
• Retroreflective measurement tool for roadway safety (http://pppcatalog.com/922)

Texture mapping

Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.

1 = 3D model without textures 2 = 3D model with textures

Texture mapping

Examples of multitexturing: 1: Untextured sphere, 2: Texture and bump maps, 3: Texture map only, 4: Opacity and texture maps.

A texture map is applied (mapped) to the surface of a shape or polygon.[1] This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.

Multitexturing is the use of more than one texture at a time on a polygon.[2] For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real-time.

The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is to use the nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped.

Perspective correctness

Because affine texture mapping does not take into account the depth information about a polygon's vertices, where the polygon is not perpendicular to the viewer it produces a noticeable defect.

Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen (see figure at right – the textures (the checker boxes) appear bent).

Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate u_α between two endpoints u_0 and u_1:

    u_α = (1 − α) u_0 + α u_1,  where 0 ≤ α ≤ 1

Perspective correct mapping interpolates after dividing by depth z, then uses its interpolated reciprocal to recover the correct coordinate:

    u_α = ((1 − α) u_0/z_0 + α u_1/z_1) / ((1 − α)/z_0 + α/z_1)

All modern 3D graphics hardware implements perspective correct texturing.
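The difference is easy to see numerically. This Python sketch (illustrative, not from the article) interpolates the midpoint of a span whose endpoints lie at different depths:

# Interpolate a texture coordinate u across a span with endpoint depths z0, z1.
def affine(u0, u1, a):
    return (1 - a)*u0 + a*u1

def perspective_correct(u0, z0, u1, z1, a):
    num = (1 - a)*u0/z0 + a*u1/z1      # interpolate u/z
    den = (1 - a)/z0 + a/z1            # interpolate 1/z
    return num / den

u0, z0, u1, z1 = 0.0, 1.0, 1.0, 4.0    # far endpoint is four times deeper
print(affine(u0, u1, 0.5))                       # 0.5: ignores depth
print(perspective_correct(u0, z0, u1, z1, 0.5))  # 0.2: most of the texture squeezes into the far half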

Development

Classic texture mappers generally did only simple mapping with at most one lighting effect, and the perspective correctness was about 16 times more expensive. To achieve two goals – faster arithmetic results, and keeping the arithmetic mill busy at all times – every triangle is further subdivided into groups of about 16 pixels. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which improves details in non-architectural applications. Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again the overhead (also, affine texture mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much more suited). For instance, Doom restricted the world to vertical walls and horizontal floors/ceilings. This meant the walls would be a constant distance along a vertical line and the floors/ceilings would be a constant distance along a horizontal line. A fast affine mapping could

be used along those lines because it would be correct. A different approach was taken for Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor.[3] The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.

Screen space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z

Another technique was subdividing the polygons into smaller polygons, like triangles in 3D space or squares in screen space, and using an affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided,[4] but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.
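A sketch of the Quake-style compromise described above, in Python (the helper and its parameters are hypothetical, for illustration only): perspective-correct coordinates are computed at every 16th column, and affine interpolation fills the pixels in between.

def quake_style_span(u0, z0, u1, z1, width, step=16):
    # Perspective-correct texture coordinate at parameter a in [0, 1]
    def correct(a):
        return ((1 - a)*u0/z0 + a*u1/z1) / ((1 - a)/z0 + a/z1)

    us = []
    x = 0
    while x < width:
        x2 = min(x + step, width)
        ua, ub = correct(x / width), correct(x2 / width)
        n = x2 - x
        # Affine (linear) steps between the two anchor columns
        us.extend(ua + (ub - ua) * i / n for i in range(n))
        x = x2
    return us

span = quake_style_span(0.0, 1.0, 1.0, 4.0, 64)
print(span[32])   # 0.2: exact at an anchor column (compare the formula above)
print(span[40])   # affine between anchors, close to the exact value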

References

[1] Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/
[2] Blythe, David. Advanced Graphics Programming Techniques Using OpenGL (http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/notes.html). Siggraph 1999. (See: Multitexture, http://www.opengl.org/resources/code/samples/sig99/advanced99/notes/node60.html)
[3] Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale, Arizona, 1997. ISBN 1-57610-174-6. (PDF: http://www.gamedev.net/reference/articles/article1698.asp) (Chapter 70, p. 1282)

External links

• Introduction into texture mapping using C and SDL (http://www.happy-werner.de/howtos/isw/parts/3d/chapter_2/chapter_2_texture_mapping.pdf)
• Programming a textured terrain (http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series4/Textured_terrain.php) using XNA/DirectX, from www.riemers.net
• Perspective correct texturing (http://www.gamers.org/dEngine/quake/papers/checker_texmap.html)
• Time Texturing (http://www.fawzma.com/time-texturing-texture-mapping-with-bezier-lines/) Texture mapping with bezier lines
• Polynomial Texture Mapping (http://www.hpl.hp.com/research/ptm/) Interactive Relighting for Photos
• 3 Métodos de interpolación a partir de puntos (in Spanish) (http://www.um.es/geograf/sigmur/temariohtml/node43_ct.html) Methods that can be used to interpolate a texture knowing the texture coords at the vertices of a polygon

Bump mapping

Bump mapping is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface, although the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978.[1]

A sphere without bump mapping (left). A bump map to be applied to the sphere (middle). The sphere with the bump map applied (right) appears to have a mottled surface resembling an orange. Bump maps achieve this effect by changing how an illuminated surface reacts to light without actually modifying the size or shape of the surface

Normal mapping is the most common variation of bump mapping used.[2]

Bump mapping basics

Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by simulating small displacements of the surface. However, unlike traditional displacement mapping, the surface geometry is not modified. Instead only the surface normal is modified as if the surface had been displaced. The modified surface normal is then used for lighting calculations as usual, typically using the Phong reflection model or similar, giving the appearance of detail instead of a smooth surface.

Bump mapping is limited in that it does not actually modify the shape of the underlying object. On the left, a mathematical function defining a bump map simulates a crumbling surface on a sphere, but the object's outline and shadow remain those of a perfect sphere. On the right, the same function is used to modify the surface of a sphere by generating an isosurface. This actually models a sphere with a bumpy surface with the result that both its outline and its shadow are rendered realistically.

Bump mapping is much faster and consumes less resources for the same level of detail compared to displacement mapping because the geometry remains unchanged.

There are primarily two methods to perform bump mapping. The first uses a height map for simulating the surface displacement, yielding the modified normal. This is the method invented by Blinn[1] and is usually what is referred to as bump mapping unless otherwise specified. The steps of this method are summarized as follows.

Before lighting, a calculation is performed for each visible point (or pixel) on the object's surface:
1. Look up the height in the heightmap that corresponds to the position on the surface.
2. Calculate the surface normal of the heightmap, typically using the finite difference method.
3. Combine the surface normal from step two with the true ("geometric") surface normal so that the combined normal points in a new direction.
4. Calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong reflection model.

The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as lights in the scene are moved around; a sketch of these steps follows below.

The other method is to specify a normal map which contains the modified normal for each point on the surface directly. Since the normal is specified directly instead of derived from a height map, this method usually leads to more predictable results. This makes it easier for artists to work with, making it the most common method of bump mapping today.[2]

There are also extensions which modify other surface features in addition to increasing the sense of depth. Parallax mapping is one such extension.

The primary limitation with bump mapping is that it perturbs only the surface normals without changing the underlying surface itself.[3] Silhouettes and shadows therefore remain unaffected, which is especially noticeable for larger simulated displacements. This limitation can be overcome by techniques including displacement mapping, where bumps are actually applied to the surface, or using an isosurface.
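A minimal sketch of steps 1–4 in Python (illustrative only; the height function, light direction, and the Lambertian stand-in for the reflection model are assumptions, not from the article):

import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v)); return tuple(c/n for c in v)

def bump_shade(height, x, y, L, strength=1.0):
    # Steps 1-2: sample the height map and take central finite differences
    dhdx = (height(x + 1, y) - height(x - 1, y)) / 2.0
    dhdy = (height(x, y + 1) - height(x, y - 1)) / 2.0
    # Step 3: perturb the geometric normal (here (0, 0, 1) for a flat patch)
    n = normalize((-strength*dhdx, -strength*dhdy, 1.0))
    # Step 4: lighting with the perturbed normal (Lambertian for brevity)
    return max(0.0, sum(a*b for a, b in zip(n, normalize(L))))

ripple = lambda x, y: math.sin(x*0.5) * 2.0   # assumed height map
print(bump_shade(ripple, 3, 4, (0.4, 0.2, 1.0)))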

Realtime bump mapping techniques

Realtime 3D graphics programmers often use variations of the technique in order to simulate bump mapping at a lower computational cost. One typical way was to use a fixed geometry, which allows one to use the heightmap surface normal almost directly. Combined with a precomputed lookup table for the lighting calculations, the method could be implemented with a very simple and fast loop, allowing for a full-screen effect. This method was a common visual effect when bump mapping was first introduced.

References

[1] Blinn, James F. "Simulation of Wrinkled Surfaces" (http://portal.acm.org/citation.cfm?id=507101), Computer Graphics, Vol. 12 (3), pp. 286-292, SIGGRAPH-ACM (August 1978)
[2] Mikkelsen, Morten. Simulation of Wrinkled Surfaces Revisited (http://image.diku.dk/projects/media/morten.mikkelsen.08.pdf), 2008 (PDF)
[3] Real-Time Bump Map Synthesis (http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/rtbumpmapHWWS01.pdf), Jan Kautz, Wolfgang Heidrich and Hans-Peter Seidel (Max-Planck-Institut für Informatik; University of British Columbia)

External links

• Bump shading for volume textures (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=291525), Max, N.L., Becker, B.G., Computer Graphics and Applications, IEEE, Jul 1994, Volume 14, Issue 4, pages 18–20, ISSN 0272-1716
• Bump Mapping tutorial using CG and C++ (http://www.blacksmith-studios.dk/projects/downloads/bumpmapping_using_cg.php)
• Simple creating vectors per pixel of a grayscale for a bump map to work and more (http://freespace.virgin.net/hugo.elias/graphics/x_polybm.htm)
• Bump Mapping example (http://www.neilwallis.com/java/bump2.htm) (Java applet)

Bidirectional reflectance distribution function The bidirectional reflectance distribution function (BRDF;  ) is a four-dimensional function that defines how light is reflected at an opaque surface. The function takes a negative incoming light direction, , and outgoing direction, , both defined with respect to the surface normal Wikipedia:Please clarify, and returns the ratio of reflected radiance exiting along to the irradiance incident on the surface from direction . Each direction is itself parameterized by azimuth angle and zenith angle , therefore the BRDF as a whole is 4-dimensional. The BRDF has units sr−1, with steradians (sr) being a unit of solid angle.

[Diagram: the vectors used to define the BRDF. All vectors are unit length; ωi points toward the light source, ωr points toward the viewer (camera), and n is the surface normal.]

Definition

The BRDF was first defined by Fred Nicodemus around 1965.[1] The definition is:

fr(ωi, ωr) = dLr(ωr) / dEi(ωi) = dLr(ωr) / (Li(ωi) cos θi dωi)

where L is radiance, or power per unit solid-angle-in-the-direction-of-a-ray per unit projected-area-perpendicular-to-the-ray, E is irradiance, or power per unit surface area, and θi is the angle between ωi and the surface normal. The index i indicates incident light, whereas the index r indicates reflected light.

The reason the function is defined as a quotient of two differentials, and not directly as a quotient of the undifferentiated quantities, is that irradiating light other than dEi(ωi), which is of no interest for fr(ωi, ωr), might illuminate the surface and unintentionally affect Lr(ωr); dLr(ωr), in contrast, is affected only by dEi(ωi).
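To make the definition concrete: integrating fr · Li · cos θi over the hemisphere of incoming directions yields the reflected radiance. Below is a minimal Monte Carlo sketch, assuming a constant (Lambertian) BRDF fr = albedo/π and uniform hemisphere sampling; the `incident_radiance` callback is a stand-in for whatever illumination surrounds the surface.

import math, random

def sample_hemisphere():
    # Uniform direction on the unit hemisphere about +z; pdf = 1/(2*pi).
    z = random.random()                        # cos(theta)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * random.random()
    return (r * math.cos(phi), r * math.sin(phi), z)

def reflected_radiance(incident_radiance, albedo=0.8, samples=20000):
    # Estimates L_r = integral of f_r * L_i * cos(theta_i) d(omega_i).
    # Dividing each sample by the pdf 1/(2*pi) turns the average into the integral.
    f_r = albedo / math.pi
    total = 0.0
    for _ in range(samples):
        w = sample_hemisphere()
        total += f_r * incident_radiance(w) * w[2] * 2.0 * math.pi
    return total / samples

# Under a uniform sky of radiance 1, L_r converges to the albedo, 0.8.
print(reflected_radiance(lambda w: 1.0))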

Related functions

The Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) is a 6-dimensional function, fr(x, ωi, ωr), where x describes a 2D location over an object's surface.

The Bidirectional Texture Function (BTF) is appropriate for modeling non-flat surfaces, and has the same parameterization as the SVBRDF; in contrast, however, the BTF includes non-local scattering effects like shadowing, masking, interreflections or subsurface scattering. The functions defined by the BTF at each point on the surface are thus called Apparent BRDFs.

The Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) is a further generalized 8-dimensional function in which light entering the surface may scatter internally and exit at another location.

In all these cases, the dependence on the wavelength of light has been ignored and binned into RGB channels. In reality, the BRDF is wavelength dependent, and to account for effects such as iridescence or luminescence the dependence on wavelength must be made explicit: fr(λ, ωi, ωr).
Physically based BRDFs

Physically based BRDFs have additional properties, including:
• positivity: fr(ωi, ωr) ≥ 0
• obeying Helmholtz reciprocity: fr(ωi, ωr) = fr(ωr, ωi)
• conserving energy: for every ωi, ∫Ω fr(ωi, ωr) cos θr dωr ≤ 1

A numerical check of the last property is sketched below.
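The sketch uses a normalized Phong lobe, an illustrative model chosen here rather than one prescribed by the text, and estimates its directional albedo by Monte Carlo integration; the result should stay below 1 for every incident direction.

import math, random

def phong_lobe_brdf(wi, wo, ks=0.2, exponent=10.0):
    # A reciprocal Phong lobe about the normal n = +z; the factor
    # (exponent + 2) / (2*pi) keeps the lobe roughly energy conserving.
    r = (-wi[0], -wi[1], wi[2])        # mirror reflection of wi about +z
    cos_a = max(0.0, sum(a * b for a, b in zip(r, wo)))
    return ks * (exponent + 2.0) / (2.0 * math.pi) * cos_a ** exponent

def directional_albedo(brdf, wi, samples=100000):
    # Monte Carlo estimate of: integral of f_r(wi, wo) cos(theta_o) d(omega_o).
    total = 0.0
    for _ in range(samples):
        z = random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * random.random()
        wo = (r * math.cos(phi), r * math.sin(phi), z)
        total += brdf(wi, wo) * z * 2.0 * math.pi   # divide by pdf 1/(2*pi)
    return total / samples

wi = (0.0, 0.5, math.sqrt(0.75))
print(directional_albedo(phong_lobe_brdf, wi))      # stays well below 1 here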

Applications

The BRDF is a fundamental radiometric concept, and accordingly is used in computer graphics for photorealistic rendering of synthetic scenes (see the Rendering equation), as well as in computer vision for many inverse problems such as object recognition.

Models

BRDFs can be measured directly from real objects using calibrated cameras and light sources;[2] however, many phenomenological and analytic models have been proposed, including the Lambertian reflectance model frequently assumed in computer graphics. Some useful features of recent models include:
• accommodating anisotropic reflection
• editable using a small number of intuitive parameters
• accounting for Fresnel effects at grazing angles
• being well-suited to Monte Carlo methods.

Matusik et al. found that interpolating between measured samples produced realistic results and was easy to understand.[3]

Some examples

• Lambertian model, representing perfectly diffuse (matte) surfaces by a constant BRDF.
• Lommel–Seeliger, lunar and Martian reflection.
• Phong reflectance model, a phenomenological model akin to plastic-like specularity.[4]
• Blinn–Phong model, resembling Phong, but allowing for certain quantities to be interpolated, reducing computational overhead.[5]
• Torrance–Sparrow model, a general model representing surfaces as distributions of perfectly specular microfacets.[6]
• Cook–Torrance model, a specular-microfacet model (Torrance–Sparrow) accounting for wavelength and thus color shifting.[7]
• Ward's anisotropic model, a specular-microfacet model with an elliptical-Gaussian distribution function dependent on surface tangent orientation (in addition to surface normal).
• Oren–Nayar model, a "directed-diffuse" microfacet model, with perfectly diffuse (rather than specular) microfacets.[8]
• Ashikhmin-Shirley model, allowing for anisotropic reflectance, along with a diffuse substrate under a specular surface.[9]
• HTSG (He, Torrance, Sillion, Greenberg), a comprehensive physically based model.[10]
• Fitted Lafortune model, a generalization of Phong with multiple specular lobes, intended for parametric fits of measured data.[11]
• Lebedev model for analytical-grid BRDF approximation.[12]

Acquisition

Traditionally, BRDF measurements were taken for one specific lighting and viewing direction at a time using gonioreflectometers. Unfortunately, using such a device to densely measure the BRDF is very time consuming. One of the first improvements on these techniques used a half-silvered mirror and a digital camera to take many BRDF samples of a planar target at once. Since this work, many researchers have developed other devices for efficiently acquiring BRDFs from real-world samples, and it remains an active area of research.

There is an alternative way to measure BRDF based on HDR images. The standard algorithm is to measure the BRDF point cloud from images and optimize it by fitting one of the BRDF models.[13]

References

[3] Wojciech Matusik, Hanspeter Pfister, Matt Brand, and Leonard McMillan. A Data-Driven Reflectance Model (http://people.csail.mit.edu/wojciech/DDRM/index.html). ACM Transactions on Graphics. 22(3) 2002.
[4] B. T. Phong, Illumination for computer generated pictures, Communications of ACM 18 (1975), no. 6, 311–317.
[6] K. Torrance and E. Sparrow. Theory for Off-Specular Reflection from Roughened Surfaces. J. Optical Soc. America, vol. 57. 1976. pp. 1105–1114.
[7] R. Cook and K. Torrance. "A reflectance model for computer graphics". Computer Graphics (SIGGRAPH '81 Proceedings), Vol. 15, No. 3, July 1981, pp. 301–316.
[8] S. K. Nayar and M. Oren, "Generalization of the Lambertian Model and Implications for Machine Vision (http://www1.cs.columbia.edu/CAVE/publications/pdfs/Nayar_IJCV95.pdf)". International Journal on Computer Vision, Vol. 14, No. 3, pp. 227–251, Apr. 1995.
[9] Michael Ashikhmin, Peter Shirley, An Anisotropic Phong BRDF Model, Journal of Graphics Tools, 2000.
[10] X. He, K. Torrance, F. Sillion, and D. Greenberg, A comprehensive physical model for light reflection, Computer Graphics 25 (1991), no. Annual Conference Series, 175–186.
[11] E. Lafortune, S. Foo, K. Torrance, and D. Greenberg, Non-linear approximation of reflectance functions. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pp. 117–126. ACM SIGGRAPH, Addison Wesley, August 1997.
[12] Ilyin A., Lebedev A., Sinyavsky V., Ignatenko A., Image-based modelling of material reflective properties of flat objects (in Russian) (http://data.lebedev.as/LebedevGraphicon2009.pdf). In: GraphiCon'2009; 2009. pp. 198–201.
[13] BRDFRecon project (http://lebedev.as/index.php?p=1_7_BRDFRecon)

Further reading

• Lubin, Dan; Robert Massom (2006-02-10). Polar Remote Sensing. Volume I: Atmosphere and Oceans (1st ed.). Springer. p. 756. ISBN 3-540-43097-0.
• Pharr, Matt; Greg Humphreys (2004). Physically Based Rendering (1st ed.). Morgan Kaufmann. p. 1019. ISBN 0-12-553180-X.
• Schaepman-Strub, G.; M. E. Schaepman, T. H. Painter, S. Dangel, J. V. Martonchik (2006-07-15). "Reflectance quantities in optical remote sensing: definitions and case studies" (http://www.sciencedirect.com/science/article/B6V6V-4K427VX-1/2/d8f9855bc59ae8233e2ee9b111252701). Remote Sensing of Environment 103 (1): 27–42. doi: 10.1016/j.rse.2006.03.002 (http://dx.doi.org/10.1016/j.rse.2006.03.002). Retrieved 2007-10-18.

Physics of reflection

Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection the angle at which the wave is incident on the surface equals the angle at which it is reflected. Mirrors exhibit specular reflection.

In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.

Reflection of light

Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. Furthermore, if the interface is between a dielectric and a conductor, the phase of the reflected wave is retained; if the interface is between two dielectrics, the phase may be retained or inverted, depending on the indices of refraction.

A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the reflection actually occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass.

Double reflection: The sun is reflected in the water, which is reflected in the paddle.

In the diagram at left, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi, and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angle of incidence equals the angle of reflection.

[Diagram of specular reflection]

In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. Total internal reflection of light from a denser medium occurs if the angle of incidence is above the critical angle.

Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle

with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector. When light reflects off a material denser (with higher refractive index) than the external medium, it undergoes a polarity inversion. In contrast, a less dense, lower refractive index material will reflect light in phase. This is an important principle in the field of thin-film optics. Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.

Refraction of light at the interface between two media.

Laws of reflection

If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:
1. The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.
2. The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes to the same normal.
3. The reflected ray and the incident ray are on the opposite sides of the normal.
These three laws can all be derived from the reflection equation.

An example of the law of reflection
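In vector form these laws reduce to a one-liner: for an incoming direction d and unit surface normal n, the reflected direction is r = d - 2(d·n)n, which keeps the reflected ray in the plane of incidence and the two angles equal. A small sketch:

import numpy as np

def reflect(d, n):
    # Law of reflection in vector form: r = d - 2 (d . n) n.
    return d - 2.0 * np.dot(d, n) * n

d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # incoming at 45 degrees
n = np.array([0.0, 1.0, 0.0])                   # surface normal
print(reflect(d, n))                            # leaves at 45 degrees: [0.707..., 0.707..., 0.]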

Mechanism

In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillations of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens-Fresnel principle.

In the case of a dielectric such as glass, the electric field of the light acts on the electrons in the glass, and the moving electrons generate fields, becoming new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light; the backward radiation is what we see reflected from the surface of transparent materials. This radiation comes from everywhere in the glass, but the total effect turns out to be equivalent to a reflection from the surface.

In metals, the electrons with no binding energy are called free electrons, and their number density is very large. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π, so the forward radiation cancels the incident light within a skin depth, and the backward radiation is just the reflected light.

Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.

Diffuse reflection

When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law. The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.

General scattering mechanism which gives diffuse reflection by a solid surface

Retroreflection

Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came. When flying over clouds illuminated by sunlight, the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retroreflection is created by the refractive properties of the curved droplet surface and the reflective properties at the backside of the droplet. Some animals' retinas act as retroreflectors, as this effectively improves the animals' night vision. Since the lenses of their eyes reciprocally modify the paths of the incoming and outgoing light, the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight.

Working principle of a corner reflector

A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror. A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid-like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes.

Multiple reflections

When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie on a circle.[1] The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face gives the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembling a pyramid, in which each pair of mirrors sits at an angle to each other, lie on a sphere. If the base of the pyramid is rectangular, the images spread over a section of a torus.[2]

Complex conjugate reflection

Light bounces exactly back in the direction from which it came due to a nonlinear optical process. In this type of reflection, not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time.

Other types of reflection

Neutron reflection

Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. In the physical and biological sciences, the reflection of neutrons off of atoms within a material is commonly used to determine the material's internal structure.

Sound reflection

When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions, scattering the energy rather than reflecting it coherently.

[Image: Sound diffusion panel for high frequencies]

This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space. In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound into the opposite direction.

Seismic reflection

Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits.

External links

• Acoustic reflection (http://www.acoustics.salford.ac.uk/feschools/waves/reflect.htm)
• Diffraction Grating Equations (http://www.jobinyvon.com/SiteResources/Data/Templates/1divisional.asp?DocID=616&v1ID=&lang=)
• Animations demonstrating optical reflection (http://qed.wikina.org/reflection/) by QED
• Simulation on Laws of Reflection of Sound (http://amrita.olabs.co.in/?sub=1&brch=1&sim=1&cnt=1&id=0) by Amrita University

Rendering reflection

Reflection in computer graphics is used to emulate reflective objects like mirrors and shiny surfaces. Reflection is accomplished in a ray trace renderer by following a ray from the eye to the mirror, calculating where it bounces to, and continuing the process until no surface is found or a non-reflective surface is found. Reflection on a shiny surface like wood or tile can add to the photorealistic effects of a 3D rendering. A minimal recursive sketch of this process is given below.

[Image: Ray traced model demonstrating specular reflection.]

• Polished - A polished reflection is an undisturbed reflection, like a mirror or chrome.
• Blurry - A blurry reflection means that tiny random bumps on the surface of the material cause the reflection to be blurry.
• Metallic - A reflection is metallic if the highlights and reflections retain the color of the reflective object.
• Glossy - This term can be misused. Sometimes it is a setting which is the opposite of blurry (when "glossiness" has a low value, the reflection is blurry). However, some people use the term "glossy reflection" as a synonym for "blurred reflection"; glossy used in this context means that the reflection is actually blurred.
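The recursive process described above can be sketched in a few lines. Everything here, the two-sphere "scene", the fixed overhead light and the blend by a per-material reflectivity, is a simplifying assumption for illustration, not a specific renderer's interface.

import numpy as np

def reflect(d, n):
    return d - 2.0 * np.dot(d, n) * n

def intersect_sphere(origin, direction, center, radius):
    # Distance t along a unit-direction ray to the sphere, or None.
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-4 else None    # the small bias avoids self-hits

def trace(origin, direction, spheres, depth=0, max_depth=5):
    # Follow mirror bounces until nothing is hit, a non-reflective
    # surface is hit, or the depth cap stops runaway inter-reflections.
    if depth > max_depth:
        return np.zeros(3)
    hits = [(intersect_sphere(origin, direction, c, r), c, r, col, refl)
            for (c, r, col, refl) in spheres]
    hits = [h for h in hits if h[0] is not None]
    if not hits:
        return np.array([0.2, 0.3, 0.5])       # background colour
    t, c, r, col, refl = min(hits, key=lambda h: h[0])
    point = origin + t * direction
    normal = (point - c) / r
    local = col * max(np.dot(normal, np.array([0.0, 1.0, 0.0])), 0.0)
    if refl > 0.0:
        bounced = trace(point, reflect(direction, normal), spheres, depth + 1)
        return (1.0 - refl) * local + refl * bounced
    return local

spheres = [(np.array([0.0, 0.0, 4.0]), 1.0, np.array([1.0, 0.0, 0.0]), 0.4),
           (np.array([2.5, 0.0, 5.0]), 1.0, np.array([0.0, 1.0, 0.0]), 0.0)]
print(trace(np.zeros(3), np.array([0.0, 0.0, 1.0]), spheres))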

Examples

Polished or Mirror reflection

Mirrors are usually almost 100% reflective.

Mirror on wall rendered with 100% reflection.

Metallic Reflection

Normal (nonmetallic) objects reflect light and colors in the original color of the object being reflected. Metallic objects reflect lights and colors altered by the color of the metallic object itself.

The large sphere on the left is blue with its reflection marked as metallic. The large sphere on the right is the same color but does not have the metallic property selected.

Blurry Reflection

Many materials are imperfect reflectors, where the reflections are blurred to various degrees due to surface roughness that scatters the rays of the reflections. A common way to implement the effect is sketched after the example below.

The large sphere on the left has sharpness set to 100%. The sphere on the right has sharpness set to 50% which creates a blurry reflection.
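One common implementation is to jitter the mirror direction and average several such rays per hit point; the Gaussian jitter and the `roughness` parameter below are illustrative choices, not a standard API.

import numpy as np

def glossy_reflect(d, n, roughness, rng=None):
    # Perturb the mirror ray; roughness = 0 gives a polished mirror.
    rng = np.random.default_rng() if rng is None else rng
    r = d - 2.0 * np.dot(d, n) * n
    r = r + roughness * rng.normal(size=3)      # random surface-bump offset
    r = r / np.linalg.norm(r)
    if np.dot(r, n) < 0.0:                      # keep the ray above the surface
        r = r - 2.0 * np.dot(r, n) * n
    return r

Averaging the colours returned along, say, 16 such rays produces the blur; the higher the roughness, the blurrier the reflection.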

Glossy Reflection

Fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.

The sphere on the left has normal, metallic reflection. The sphere on the right has the same parameters, except that the reflection is marked as "glossy".

Reflectivity

Reflectivity and reflectance generally refer to the fraction of incident electromagnetic power that is reflected at an interface, while the term "reflection coefficient" is used for the fraction of electric field reflected.[1]

Spectral reflectance curves for aluminium (Al), silver (Ag), and gold (Au) metal mirrors at normal incidence.

Reflectance

The reflectance or reflectivity is thus the square of the magnitude of the reflection coefficient.[2] The reflection coefficient can be expressed as a complex number as determined by the Fresnel equations for a single layer, whereas the reflectance (or reflectivity) is always a positive real number.

[Plot: Fresnel reflection coefficients for a boundary surface between air and a variable material, as a function of the complex refractive index and the angle of incidence]

According to the CIE (the International Commission on Illumination),[3] reflectivity is distinguished from reflectance by the fact that reflectivity is a value that applies to thick reflecting objects.[4] When reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with surface thickness. Reflectivity is the limit value of reflectance as the sample becomes thick; it is the intrinsic reflectance of the surface, hence irrespective of other parameters such as the reflectance of the rear surface. Another way to interpret this is that the reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space.[5] The reflectance spectrum or spectral reflectance curve is the plot of the reflectance as a function of wavelength.

Surface type

Going back to the fact that reflectivity is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection.
• For specular surfaces, such as glass or polished metal, reflectivity will be nearly zero at all angles except at the appropriate reflected angle - that is, reflected radiation will follow a different path from incident radiation for all cases other than radiation normal to the surface.
• For diffuse surfaces, such as matte white paint, reflectivity is uniform; radiation is reflected in all angles equally or near-equally. Such surfaces are said to be Lambertian.
Most real objects have some mixture of diffuse and specular reflective properties.

Water reflectivity

Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction. Specular reflection from a body of water is calculated by the Fresnel equations.[6] Fresnel reflection is directional and therefore does not contribute significantly to albedo, which is primarily diffuse reflection. A real water surface may be wavy. Reflectivity assuming a flat surface as given by the Fresnel equations can be adjusted to account for waviness.

[Plot: Reflectivity of smooth water at 20°C (refractive index = 1.333)]

Grating efficiency

The generalization of reflectance to a diffraction grating, which disperses light by wavelength, is called diffraction efficiency.

Applications

Reflectivity is an important concept in the fields of optics, solar thermal energy, telecommunication and radar.

References

[1] Klein and Furtak, Optics (http://www.amazon.com/dp/0471872970)
[2] E. Hecht (2001). Optics (4th ed.). Pearson Education. ISBN 0-8053-8566-5.
[3] CIE (the International Commission on Illumination) (http://www.cie.co.at/)
[4] CIE International Lighting Vocabulary (http://www.cie.co.at/index.php/index.php?i_ca_id=306)
[5] Palmer and Grant, The Art of Radiometry (http://www.amazon.com/dp/081947245X)
[6] Ottaviani, M., Stamnes, K., Koskulics, J., Eide, H., Long, S.R., Su, W. and Wiscombe, W., 2008: "Light Reflection from Water Waves: Suitable Setup for a Polarimetric Investigation under Controlled Laboratory Conditions". Journal of Atmospheric and Oceanic Technology, 25 (5), 715–728.

External links

• Reflectivity of metals (chart) (http://www.tvu.com/metalreflectivityLR.jpg)
• Reflectance Data (http://www.graphics.cornell.edu/online/measurements/reflectance/index.html) - painted surfaces etc.
• Grating efficiency (http://www.gratinglab.com/information/handbook/chapter9.asp)

Fresnel equations

The Fresnel equations (or Fresnel conditions), deduced by Augustin-Jean Fresnel /frɛˈnɛl/, describe the behaviour of light when moving between media of differing refractive indices. The reflection of light that the equations predict is known as Fresnel reflection.

Overview

[Plot: Partial transmission and reflection amplitudes of a wave travelling from a low to high refractive index medium.]

When light moves from a medium of a given refractive index n1 into a second medium with refractive index n2, both reflection and refraction of the light may occur. The Fresnel equations describe what fraction of the light is reflected and what fraction is refracted (i.e., transmitted). They also describe the phase shift of the reflected light. The equations assume the interface is flat, planar, and homogeneous, and that the light is a plane wave.

Definitions and power equations

In the diagram on the right, an incident light ray IO strikes the interface between two media of refractive indices n1 and n2 at point O. Part of the ray is reflected as ray OR and part refracted as ray OT. The angles that the incident, reflected and refracted rays make to the normal of the interface are given as θi, θr and θt, respectively. The relationship between these angles is given by the law of reflection:

θi = θr

and Snell's law:

n1 sin θi = n2 sin θt

[Diagram: Variables used in the Fresnel equations.]

The fraction of the incident power that is reflected from the interface is given by the reflectance R and the fraction that is refracted is given by the transmittance T.[1] The media are assumed to be non-magnetic.

The calculations of R and T depend on polarisation of the incident ray. Two cases are analyzed:

1. The incident light is polarized with its electric field parallel to the plane containing the incident, reflected, and refracted rays, i.e. in the plane of the diagram above. Such light is described as p-polarized.

2. The incident light is polarized with its electric field perpendicular to the plane described above. The light is said to be s-polarized, from the German senkrecht (perpendicular).

For the s-polarized light, the reflection coefficient is given by

Rs = |(n1 cos θi − n2 cos θt) / (n1 cos θi + n2 cos θt)|²
   = |(n1 cos θi − n2 √(1 − ((n1/n2) sin θi)²)) / (n1 cos θi + n2 √(1 − ((n1/n2) sin θi)²))|²,

where the second form is derived from the first by eliminating θt using Snell's law and trigonometric identities.

For the p-polarized light, R is given by

Rp = |(n1 cos θt − n2 cos θi) / (n1 cos θt + n2 cos θi)|²
   = |(n1 √(1 − ((n1/n2) sin θi)²) − n2 cos θi) / (n1 √(1 − ((n1/n2) sin θi)²) + n2 cos θi)|².

As a consequence of the conservation of energy, the transmission coefficients are given by[2]

Ts = 1 − Rs   and   Tp = 1 − Rp.

These relationships hold only for power coefficients, not for amplitude coefficients as defined below.

If the incident light is unpolarised (containing an equal mix of s- and p-polarisations), the reflection coefficient is

R = (Rs + Rp) / 2.

For common glass, the reflection coefficient at normal incidence is about 4%. Note that reflection by a window is from the front side as well as the back side, and that some of the light bounces back and forth a number of times between the two sides. The combined reflection coefficient for this case is 2R/(1 + R), when interference can be neglected (see below).

The discussion given here assumes that the permeability μ is equal to the vacuum permeability μ0 in both media. This is approximately true for most dielectric materials, but not for some other types of material. The completely general Fresnel equations are more complicated.
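The power equations above translate directly into code; in this sketch the function and variable names are my own, and angles are in radians.

import math

def fresnel_reflectance(n1, n2, theta_i):
    # Returns (Rs, Rp, unpolarised mean) for a planar dielectric interface.
    sin_t = n1 / n2 * math.sin(theta_i)         # Snell's law
    if abs(sin_t) >= 1.0:
        return 1.0, 1.0, 1.0                    # total internal reflection
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)
    return r_s ** 2, r_p ** 2, 0.5 * (r_s ** 2 + r_p ** 2)

# Air to glass at normal incidence: ((1 - 1.5) / (1 + 1.5))^2 = 0.04, the 4% above.
print(fresnel_reflectance(1.0, 1.5, 0.0))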

Special angles

At one particular angle for a given n1 and n2, the value of Rp goes to zero and a p-polarised incident ray is purely refracted. This angle is known as Brewster's angle, and is around 56° for a glass medium in air or vacuum. Note that this statement is only true when the refractive indices of both materials are real numbers, as is the case for materials like air and glass. For materials that absorb light, like metals and semiconductors, n is complex, and Rp does not generally go to zero.

When moving from a denser medium into a less dense one (i.e., n1 > n2), above an incidence angle known as the critical angle, all light is reflected and Rs = Rp = 1. This phenomenon is known as total internal reflection. The critical angle is approximately 41° for glass in air.
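Both special angles follow from Snell's law; a small sketch reproducing the figures quoted above:

import math

def brewster_angle(n1, n2):
    # Rp vanishes at theta_B = arctan(n2 / n1).
    return math.degrees(math.atan(n2 / n1))

def critical_angle(n1, n2):
    # Total internal reflection (n1 > n2) begins at arcsin(n2 / n1).
    return math.degrees(math.asin(n2 / n1))

print(brewster_angle(1.0, 1.5))    # ~56.3 degrees, air to glass
print(critical_angle(1.5, 1.0))    # ~41.8 degrees, glass to air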

Amplitude equations

Equations for coefficients corresponding to ratios of the electric field complex-valued amplitudes of the waves (not necessarily real-valued magnitudes) are also called "Fresnel equations". These take several different forms, depending on the choice of formalism and sign convention used. The amplitude coefficients are usually represented by lower case r and t.

Conventions used here

In this treatment, the coefficient r is the ratio of the reflected wave's complex electric field amplitude to that of the incident wave. The coefficient t is the ratio of the transmitted wave's electric field amplitude to that of the incident wave. The light is split into s and p polarizations as defined above. For s-polarization, a positive r or t means that the electric fields of the incoming and reflected or transmitted wave are parallel, while negative means anti-parallel. For p-polarization, a positive r or t means that the magnetic fields of the waves are parallel, while negative means anti-parallel.[3] It is also assumed that the magnetic permeability µ of both media is equal to the permeability of free space µ0.

[Plot: Amplitude ratios, air to glass]

[Plot: Amplitude ratios, glass to air]

Formulas

Using the conventions above,[3]

rs = (n1 cos θi − n2 cos θt) / (n1 cos θi + n2 cos θt)
ts = 2 n1 cos θi / (n1 cos θi + n2 cos θt)
rp = (n2 cos θi − n1 cos θt) / (n2 cos θi + n1 cos θt)
tp = 2 n1 cos θi / (n2 cos θi + n1 cos θt)

Notice that ts = 1 + rs, but tp = (n1/n2)(1 + rp).[4]

Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the amplitude reflection coefficient is related to the reflectance R by[5]

R = |r|².

The transmittance T is generally not equal to |t|², since the light travels with different direction and speed in the two media. The transmittance is related to t by[6]

T = (n2 cos θt / (n1 cos θi)) |t|².

The factor of n2/n1 occurs from the ratio of intensities (closely related to irradiance). The factor of cos θt/cos θi represents the change in area m of the pencil of rays, needed since T, the ratio of powers, is equal to the ratio of (intensity × area). In terms of the ratio of refractive indices ρ = n2/n1 and of the magnification m = cos θt/cos θi of the beam cross section occurring at the interface,

T = ρ m |t|².

Multiple surfaces

When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is a few micrometers; it can be much larger for light from a laser.

An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference. The transfer-matrix method, or the recursive Rouard method,[7] can be used to solve multiple-surface problems.

References

[1] Hecht (1987), p. 100.
[2] Hecht (1987), p. 102.
[3] Lecture notes by Bo Sernelius, main site (http://www.ifm.liu.se/courses/TFYY67/), see especially Lecture 12 (http://www.ifm.liu.se/courses/TFYY67/Lect12.pdf).
[4] Hecht (2003), p. 116, eq. (4.49)–(4.50).
[5] Hecht (2003), p. 120, eq. (4.56).
[6] Hecht (2003), p. 120, eq. (4.57).
[7] See, e.g., O. S. Heavens, Optical Properties of Thin Films, Academic Press, 1955, chapt. 4.

• Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X.
• Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. ISBN 0-321-18878-0.

Further reading • The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2. • Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3 • Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers, Y.B. Band, John Wiley & Sons, 2010, ISBN 978-0-471-89931-0 • The Light Fantastic – Introduction to Classic and Quantum Optics, I.R. Kenyon, Oxford University Press, 2008, ISBN 978-0-19-856646-5 • Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3 • McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3

External links

• Fresnel Equations (http://scienceworld.wolfram.com/physics/FresnelEquations.html) – Wolfram
• FreeSnell (http://people.csail.mit.edu/jaffer/FreeSnell) – Free software that computes the optical properties of multilayer materials
• Thinfilm (http://thinfilm.hansteen.net/) – Web interface for calculating optical properties of thin films and multilayer materials (reflection & transmission coefficients, ellipsometric parameters Psi & Delta)
• Simple web interface for calculating single-interface reflection and refraction angles and strengths (http://www.calctool.org/CALC/phys/optics/reflec_refrac)
• Reflection and transmittance for two dielectrics (http://wm.eecs.umich.edu/webMathematica/eecs434/f08/ideliz/final.jsp) – Mathematica interactive webpage that shows the relations between index of refraction and reflection
• A self-contained first-principles derivation (http://www.jedsoft.org/physics/notes/multilayer.pdf) of the transmission and reflection probabilities from a multilayer with complex indices of refraction

Transparency and translucency

In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through a material without being scattered. On a macroscopic scale (one where the dimensions investigated are much, much larger than the wavelength of the photons in question), the photons can be said to follow Snell's law. Translucency (also called translucence or translucidity) is a superset of transparency: it allows light to pass through, but does not necessarily (again, on the macroscopic scale) follow Snell's law; the photons can be scattered at either of the two interfaces where there is a change in index of refraction, or internally. In other words, a translucent medium allows the transport of light while a transparent medium not only allows the transport of light but allows for image formation. The opposite property of translucency is opacity.

[Image: Dichroic filters are created using optically transparent materials.]

Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color.

When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission. Some materials, such as plate glass and clean water, allow much of the light that falls on them to be transmitted, with little being reflected; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. Absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are mostly responsible for excellent optical transmission.

Materials which do not allow the transmission of light are called opaque. Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies. They absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. This is what gives rise to color. The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering.[1]

Introduction

With regard to the absorption of light, primary material considerations include:
• At the electronic level, absorption in the ultraviolet and visible (UV-Vis) portions of the spectrum depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific frequency without violating selection rules. For example, in most glasses, electrons have no available energy levels above them in the range of that associated with visible light, or if they do, they violate selection rules, meaning there is no appreciable absorption in pure (undoped) glasses, making them ideal transparent materials for windows in buildings.
• At the atomic or molecular level, physical absorption in the infrared portion of the spectrum depends on the frequencies of atomic or molecular vibrations or chemical bonds, and on selection rules. Nitrogen and oxygen are not greenhouse gases because the absorption is forbidden by the lack of a molecular dipole moment.

Simulated comparisons of (from top to bottom): decreasing levels of opacity; increasing levels of translucency; and increasing levels of transparency; behind each panel is a left-right gradiented grey star

With regard to the scattering of light, the most critical factor is the length scale of any or all of these structural features relative to the wavelength of the light being scattered. Primary material considerations include:

• Crystalline structure: whether or not the atoms or molecules exhibit the 'long-range order' evidenced in crystalline solids.
• Glassy structure: scattering centers include fluctuations in density or composition.
• Microstructure: scattering centers include internal surfaces such as grain boundaries, crystallographic defects and microscopic pores.
• Organic materials: scattering centers include fiber and cell structures and boundaries.

Light scattering in solids

Diffuse reflection - Generally, when light strikes the surface of a (non-metallic and non-glassy) solid material, it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g., the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material), and by its surface, if it is rough. Diffuse reflection is typically characterized by omni-directional reflection angles. Most of the objects visible to the naked eye are identified via diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation.

[Diagram: General mechanism of diffuse reflection]

Light scattering in liquids and solids depends on the wavelength of the light being scattered. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension (or spatial scale) of the scattering center. Visible light has a wavelength scale on the order of half a micrometer (a micrometer being one millionth of a meter). Scattering centers (or particles) as small as one micrometer have been observed directly in the light microscope (e.g., Brownian motion).[2][3]

Applications

Optical transparency in polycrystalline materials is limited by the amount of light which is scattered by their microstructural features. Light scattering depends on the wavelength of the light. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension of the scattering center. For example, since visible light has a wavelength scale on the order of a micrometer, scattering centers will have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent.

[Image caption: Large laser elements made from transparent ceramics can be produced at relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers.]

In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus a reduction of the original particle size well below the wavelength of visible light (about 1/15 of the light wavelength or roughly 600/15 = 40 nm) eliminates much of light scattering, resulting in a translucent or even transparent material.

Soldiers pictured during the 2003 Iraq War seen through Night Vision Goggles

Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1% for high-quality optical transmission (99.99 percent of theoretical density). This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by the methods of sol-gel chemistry and nanotechnology.[4]

Transparent ceramics have created interest in their applications for high energy lasers, transparent armor windows, nose cones for heat seeking missiles, radiation detectors for non-destructive testing, high energy physics, space exploration, security and medical imaging applications. The development of transparent panel products will have other potential advanced applications including high strength, impact-resistant materials that can be used for domestic windows and skylights. Perhaps more important is that walls and other applications will have improved overall strength, especially for high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits seen on glazing areas in today's building codes could quickly become outdated if the window area actually contributes to the shear resistance of the wall.

[Image caption: Translucency of a material being used to highlight the structure of a photographic subject]

Currently available infrared transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina) is very strong, but it is expensive and lacks full transparency throughout the 3–5 micrometer mid-infrared range. Yttria is fully transparent from 3–5 micrometers, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. Not surprisingly, a combination of these two materials in the form of the yttrium aluminium garnet (YAG) is one of the top performers in the field.

Absorption of light in solids

When light strikes an object, it usually has not just a single frequency (or wavelength) but many. Objects have a tendency to selectively absorb, reflect or transmit light of certain frequencies. That is, one object might reflect green light while absorbing all other frequencies of visible light. Another object might selectively transmit blue light while absorbing all other frequencies of visible light. The manner in which visible light interacts with an object is dependent upon the frequency of the light, the nature of the atoms in the object, and often the nature of the electrons in the atoms of the object.

Some materials allow much of the light that falls on them to be transmitted through the material without being reflected. Materials that allow the transmission of light waves through them are called optically transparent. Chemically pure (undoped) window glass and clean river or spring water are prime examples of this. Materials which do not allow the transmission of any light wave frequencies are called opaque. Such substances may have a chemical composition which includes what are referred to as absorption centers. Most materials are composed of materials which are selective in their absorption of light frequencies. Thus they absorb only certain portions of the visible spectrum. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. In the visible portion of the spectrum, this is what gives rise to color.

Color centers are largely responsible for the appearance of specific wavelengths of visible light all around us. Moving from longer (0.7 micrometer) to shorter (0.4 micrometer) wavelengths: red, orange, yellow, green and blue (ROYGB) can all be identified by our senses in the appearance of color by the selective absorption of specific light wave frequencies (or wavelengths). Mechanisms of selective light wave absorption include:

• Electronic: Transitions in electron energy levels within the atom (e.g., pigments). These transitions are typically in the ultraviolet (UV) and/or visible portions of the spectrum.

Meiningen Catholic Church, 20th century glass

• Vibrational: Resonance in atomic/molecular vibrational modes. These transitions are typically in the infrared portion of the spectrum.

UV-Vis: Electronic transitions

In electronic absorption, the frequency of the incoming light wave is at or near the energy levels of the electrons within the atoms which compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital.

The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic table). Recall that all light waves are electromagnetic in origin. Thus they are affected strongly when coming into contact with negatively charged electrons in matter. When photons (individual packets of light energy) come in contact with the valence electrons of an atom, one of several things can occur:
• An electron absorbs all of the energy of the photon and re-emits it with a different color. This gives rise to luminescence, fluorescence and phosphorescence.
• An electron absorbs the energy of the photon and sends it back out the way it came in. This results in reflection or scattering.
• An electron cannot absorb the energy of the photon and the photon continues on its path. This results in transmission (provided no other absorption mechanisms are active).

• An electron selectively absorbs a portion of the photon, and the remaining frequencies are transmitted in the form of spectral color.

Most of the time, it is a combination of the above that happens to the light that hits an object. The electrons in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light. What happens is that the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum. Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level. The photon is destroyed in the process and the absorbed radiant energy is transformed to electric potential energy. Several things can then happen to the absorbed energy: it may be re-emitted by the electron as radiant energy (in this case the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e. transformed into heat), or the electron can be freed from the atom (as in the photoelectric and Compton effects).

Infrared: Bond stretching

The primary physical mechanism for storing mechanical energy of motion in condensed matter is through heat, or thermal energy. Thermal energy manifests itself as energy of motion. Thus, heat is motion at the atomic and molecular levels. The primary mode of motion in crystalline substances is vibration. Any given atom will vibrate around some mean or average position within a crystalline structure, surrounded by its nearest neighbors. This vibration in two dimensions is equivalent to the oscillation of a clock's pendulum. It swings back and forth symmetrically about some mean or average (vertical) position. Atomic and molecular vibrational frequencies may average on the order of 10^12 cycles per second (hertz).

[Diagram: Normal modes of vibration in a crystalline solid]

When a light wave of a given frequency strikes a material with particles having the same or (resonant) vibrational frequencies, then those particles will absorb the energy of the light wave and transform it into thermal energy of vibrational motion. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When infrared light of these frequencies strikes an object, the energy is reflected or transmitted.

If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted.[5][6]

Transparency in insulators

An object may not be transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light.

When light falls onto a block of metal, it encounters atoms that are tightly packed in a regular lattice and a "sea of electrons" moving randomly between the atoms. In metals, most of these are non-bonding electrons (or free electrons), as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids. In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons". As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface.

Most insulators (or dielectric materials) are held together by ionic bonds. Thus, these materials do not have free conduction electrons, and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses. If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or "dopants") in a dielectric absorb a portion of the incoming light wave. The remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced.

Most liquids and aqueous solutions are highly transparent. For example, water, cooking oil, rubbing alcohol, air, and natural gas are all clear. Absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted: the liquid fills up numerous voids, making the material more structurally homogeneous.

Light scattering in an ideal defect-free crystalline (non-metallic) solid which provides no scattering centers for incoming light waves will be due primarily to any effects of anharmonicity within the ordered lattice. Light wave transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice. For example, the seven different crystalline forms of quartz silica (silicon dioxide, SiO2) are all clear, transparent materials.[7]

Optical waveguides

The study of optically transparent materials focuses on the response of a material to incoming light waves of a range of wavelengths. Guided light wave transmission via frequency-selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless.

Propagation of light through a multi-mode optical fiber

An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium. The refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for core and cladding of an optical fiber are 1.48 and 1.46, respectively.


A laser beam bouncing down an acrylic rod, illustrating the total internal reflection of light in a multimode optical fiber

When light traveling in a dense medium hits a boundary at a steep angle, the light will be completely reflected. This effect, called total internal reflection, is used in optical fibers to confine light in the core. Light travels along the fiber bouncing back and forth off of the boundary. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles will be propagated. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g. combined with lasers or light-emitting diodes, LEDs) or as the transmission medium in local and long haul optical communication systems.
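To make the dependence of the acceptance cone on the core/cladding index difference concrete, here is a minimal Python sketch of our own (not from this article), assuming the standard fiber-optics relations NA = sqrt(n_core² − n_clad²) and sin(θ_max) = NA for a fiber in air, applied to the typical index values quoted above:

```python
import math

def acceptance_half_angle(n_core, n_clad, n_outside=1.0):
    """Acceptance half-angle (degrees) and numerical aperture of a
    step-index fiber, using NA = sqrt(n_core^2 - n_clad^2) and
    sin(theta_max) = NA / n_outside (n_outside = 1 for air)."""
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return math.degrees(math.asin(na / n_outside)), na

theta, na = acceptance_half_angle(1.48, 1.46)
print(f"NA = {na:.3f}, acceptance half-angle = {theta:.1f} degrees")
# NA = 0.242, acceptance half-angle about 14 degrees
```

Even the small index difference 1.48/1.46 yields a usable acceptance cone of roughly 14° about the fiber axis.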

Mechanisms of attenuation

Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance traveled through a transmission medium. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the very high quality of transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside. Attenuation is an important factor limiting the transmission of a signal across large distances.

Light attenuation by ZBLAN and silica fibers

In optical fibers, the main attenuation source is scattering from molecular-level irregularities (Rayleigh scattering)[8] due to structural disorder and compositional fluctuations of the glass structure. This same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes.[citation needed] Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces is another factor resulting in attenuation.[9][10]
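Because attenuation is quoted in dB/km, the power remaining after a given distance follows directly from the definition of the decibel. A small sketch (our own example figures; a loss of about 0.2 dB/km is typical of modern silica fiber):

```python
def remaining_fraction(alpha_db_per_km, distance_km):
    """Fraction of optical power left after distance_km,
    given an attenuation coefficient in dB/km."""
    total_loss_db = alpha_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# 0.2 dB/km over 50 km gives 10 dB of total loss,
# i.e. 10% of the launched power remains.
print(remaining_fraction(0.2, 50))  # 0.1
```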


References

[8] I. P. Kaminow, T. Li (2002), Optical fiber telecommunications IV, Vol. 1, p. 223 (http://books.google.com/books?id=GlxnCiQlNwEC&pg=PA223)

Further reading

• Electrodynamics of Continuous Media, Landau, L. D., Lifshits, E. M. and Pitaevskii, L. P. (Pergamon Press, Oxford, 1984)
• Laser Light Scattering: Basic Principles and Practice, Chu, B., 2nd Edn. (Academic Press, New York, 1992)
• Solid State Laser Engineering, W. Koechner (Springer-Verlag, New York, 1999)
• Introduction to Chemical Physics, J. C. Slater (McGraw-Hill, New York, 1939)
• Modern Theory of Solids, F. Seitz (McGraw-Hill, New York, 1940)
• Modern Aspects of the Vitreous State, J. D. MacKenzie, Ed. (Butterworths, London, 1960)

External links

• Properties of Light (http://sol.sci.uop.edu/~jfalward/physics17/chapter12/chapter12.html)
• UV-Vis Absorption (http://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/uvvisab1.htm)
• Infrared Spectroscopy (http://www.cem.msu.edu/~reusch/VirtualText/Spectrpy/InfraRed/infrared.htm)
• Brillouin Scattering (http://www.soest.hawaii.edu/~zinin/Zi-Brillouin.html)
• Transparent Ceramics (http://www.ikts.fhg.de/business/strukturkeramik/basiswerkstoffe/oxidkeramik/transparentkeramik_en.html)
• Bulletproof Glass (http://science.howstuffworks.com/question476.htm)
• Transparent ALON Armor (http://science.howstuffworks.com/transparent-aluminum-armor.htm)
• Properties of Optical Materials (http://www.harricksci.com/infoserver/Optical Materials.cfm)
• What makes glass transparent? (http://science.howstuffworks.com/question404.htm)
• Brillouin scattering in optical fiber (http://www.rp-photonics.com/brillouin_scattering.html)
• Thermal IR Radiation and Missile Guidance (http://www.ausairpower.net/TE-IR-Guidance.html)


Rendering transparency

Transparency is possible in a number of graphics file formats. The term transparency is used in various ways by different people, but at its simplest there is "full transparency", i.e. something that is completely invisible. Of course, only part of a graphic should be fully transparent, or there would be nothing to see. More complex is "partial transparency" or "translucency", where the effect is achieved that a graphic is partially transparent in the same way as colored glass. Since ultimately a printed page or computer or television screen can only be one color at a point, partial transparency is always simulated at some level by mixing colors. There are many different ways to mix colors, so in some cases transparency is ambiguous.

GIF animation with transparent background

In addition, transparency is often an "extra" for a graphics format, and some graphics programs will ignore the transparency. Raster file formats that support transparency include GIF, PNG, BMP and TIFF, through either a transparent color or an alpha channel. Most vector formats implicitly support transparency because they simply avoid putting any objects at a given point. This includes EPS and WMF. For vector graphics this may not strictly be seen as transparency, but it requires much of the same careful programming as transparency in raster formats. More complex vector formats may allow transparency combinations between the elements within the graphic, as well as that above. This includes SVG and PDF. A suitable raster graphics editor shows transparency by a special pattern, e.g. a checkerboard pattern.

Transparent pixels

One color entry in a single GIF or PNG image's palette can be defined as "transparent" rather than an actual color. This means that when the decoder encounters a pixel with this value, it is rendered in the background color of the part of the screen where the image is placed, even if this varies pixel by pixel, as in the case of a background image. Applications include:

This image has binary transparency (some pixels fully transparent, other pixels fully opaque). It can be transparent against any background because it is monochrome.

• an image that is not rectangular can be filled to the required rectangle using transparent surroundings; the image can even have holes (e.g. be ring-shaped)
• in a run of text, a special symbol for which an image is used because it is not available in the character set can be given a transparent background, resulting in a matching background.

The transparent color should be chosen carefully, to avoid items that just happen to be the same color vanishing. Even this limited form of transparency has patchy implementation, though most popular web browsers are capable of displaying transparent GIF images. This support often does not extend to printing, especially to printing devices (such as PostScript) which do not include support for transparency in the device or driver. Outside the world of web browsers, support is fairly hit-or-miss for transparent GIF files.


Edge limitations of transparent pixels

The edges of characters and other images with a transparent background should not have shades of gray: these are normally used for intermediate colors between the color of the letter/image and that of the background, typically shades of gray being intermediate between a black letter and a white background. However, with, for example, a red background the intermediate colors would be dark red, and gray edge pixels give an ugly and unclear result. For a variable background color there are no suitable fixed intermediate colors.

This image has binary transparency. However, it is grayscale, with anti-aliasing, so it looks good only against a white background. Set against a different background, a "ghosting" effect from the shades of gray would result.

Partial transparency by alpha channels

PNG and TIFF also allow partial transparency, which solves the edge limitation problem. However, support is even more patchy. Internet Explorer prior to version 7 does not support partial transparency in a PNG graphic. Very few applications correctly process TIFF files with alpha channels. A major use of partial transparency, but not the only one, is to produce "soft edges" in graphics so that they blend into their background. See also the monochrome and grayscale examples above, and anti-aliasing.

This image has partial transparency (254 possible levels of transparency between fully transparent and fully opaque). It can be transparent against any background despite being anti-aliased.

The process of combining a partially transparent color with its background ("compositing") is often ill-defined and the results may not be exactly the same in all cases. For example, where color correction is in use, should the colors be composited before or after color correction?

Transparency by clipping path

An alternative approach to full transparency is to use a clipping path. A clipping path is simply a shape or outline that is used in conjunction with the other graphics. Everything inside the path is visible, and everything outside the path is invisible. The path is inherently vector, but can potentially be used to mask both vector and bitmap data. The main usage of clipping paths is in PostScript files.

Compositing calculations

While some transparency specifications are vague, others may give mathematical details of how two colors are to be composited. This gives a fairly simple example of how compositing calculations can work, can produce the expected results, and can also produce surprises. In this example, two grayscale colors are to be composited. Grayscale values are considered to be numbers between 0.0 (white) and 1.0 (black). To emphasize: this is only one possible rule for transparency. If working with transparency, check the rules in use for your situation.

This image shows the results of overlaying each of the above transparent PNG images on a background color of #6080A0. Note the gray fringes on the letters of the middle image.


The color at a point, where colors G1 and G2 are to be combined, is (G1 + G2) / 2. Some consequences of this are:

• Where the colors are equal, the result is the same color, because (G1 + G1) / 2 = G1.
• Where one color (G1) is white (0.0), the result is (0.0 + G2) / 2 = G2 / 2. This will always be less than any nonzero value of G2, so the result is whiter than G2. (This is easily reversed for the case where G2 is white.)
• Where one color (G1) is black (1.0), the result is (1.0 + G2) / 2 = 0.5 + G2 / 2. This will always be more than G2, so the result is more black than G2.
• The formula is commutative, since (G1 + G2) / 2 = (G2 + G1) / 2. This means it does not matter which order two graphics are mixed, i.e. which of the two is on the top and which is on the bottom.
• The formula is not associative, since
((G1 + G2) / 2 + G3) / 2 = G1 / 4 + G2 / 4 + G3 / 2
(G1 + (G2 + G3) / 2) / 2 = G1 / 2 + G2 / 4 + G3 / 4
This is important, as it means that when combining three or more objects with this rule for transparency, the final color depends very much on the order of doing the calculations.

This shows how the above images would look when, for example, editing them. The grey and white check pattern would be converted into transparency.

Although the formula is simple, it may not be ideal. Human perception of brightness is not linear: we do not necessarily consider that a gray value of 0.5 is halfway between black and white. Such details may not matter when transparency is used only to soften edges, but in more complex designs this may be significant. Most people working seriously with transparency will need to see the results and may fiddle with the colors or (where possible) the algorithm to arrive at the results they need.

This formula can easily be generalized to RGB color or CMYK color by applying the formula to each channel separately. For example, final red = (red1 + red2) / 2. But it cannot be applied to all color models; for example, Lab color would produce results that were surprising.

An alternative model is that at every point in each element to be combined for transparency there is an associated color and opacity between 0 and 1. For each color channel, you might work with this model: if a channel with intensity C1 and opacity α1 overlays a channel with intensity C2 and opacity α2, the result will be a channel with intensity equal to α1·C1 + (1 − α1)·α2·C2, and opacity α1 + (1 − α1)·α2. Each channel must be multiplied by its corresponding alpha value before composition (so-called premultiplied alpha). The SVG file specification uses this type of blending, and this is one of the models that can be used in PDF. Alpha channels may be implemented in this way, where the alpha channel provides an opacity level to be applied equally to all other channels. To work with the above formula, the opacity needs to be scaled to the range 0 to 1, whatever its external representation (often 0 to 255 if using 8-bit samples such as "RGBA").
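A minimal sketch of the two rules just described, the simple averaging rule and the alpha ("over") rule with premultiplied output. This is our own illustration of the formulas above, not code taken from any file format specification:

```python
def average(g1, g2):
    """Simple averaging rule for two grayscale values in [0.0, 1.0]."""
    return (g1 + g2) / 2

def over(c_top, a_top, c_bot, a_bot):
    """Alpha compositing: top channel (intensity, opacity) laid over
    bottom channel. Returns (premultiplied intensity, opacity)."""
    out_c = a_top * c_top + (1 - a_top) * a_bot * c_bot
    out_a = a_top + (1 - a_top) * a_bot
    return out_c, out_a

print(average(0.0, 0.8))         # 0.4: mixing with white whitens the result
print(over(0.6, 0.5, 0.2, 1.0))  # (0.4, 1.0)
```

Note that `over`, unlike `average`, is not commutative: which element is on top matters, which is exactly what one expects of stacked translucent objects.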

Transparency in PDF

Starting with version 1.4 of the PDF standard (Adobe Acrobat version 5), transparency (including translucency) is supported. Transparency in PDF files makes it possible to achieve various effects, including adding shadows to objects, making objects semi-transparent, and having objects blend into each other or into text. PDF supports many different blend modes, not just the most common averaging method, and the rules for compositing many overlapping objects allow choices (such as whether a group of objects are blended before being blended with the background, or whether each object in turn is blended into the background).

PDF transparency is a very complex model, its original specification by Adobe being over 100 pages long. A key source of complication is that blending objects with different color spaces can be tricky and error-prone as well as cause compatibility issues. Transparency in PDF was designed not to cause errors in PDF viewers that did not understand it – they would simply display all elements as fully opaque. However, this was a two-edged sword, as users with older viewers, PDF printers, etc. could see or print something completely different from the original design. The fact that the PDF transparency model is so complicated means that it is not well supported. This means that RIPs and printers often have problems printing PDFs with transparency. The solution to this is either to rasterize the image or to apply vector transparency flattening to the PDF. However, vector transparency flattening is extremely complex and only supported by a few specialist packages.

Transparency in PostScript

The PostScript language has limited support for full (not partial) transparency, depending on the PostScript level. Partial transparency is available with the pdfmark extension,[1] available on many PostScript implementations.

Level 1

Level 1 PostScript offers transparency via two methods:

• A one-bit (monochrome) image can be treated as a mask. In this case the 1-bits can be painted any single color, while the 0-bits are not painted at all. This technique cannot be generalised to more than one color, or to vector shapes.
• Clipping paths can be defined. These restrict what part of all subsequent graphics can be seen. This can be used for any kind of graphic; however, in level 1, the maximum number of nodes in a path was often limited to 1500, so complex paths (e.g. cutting around the hair in a photograph of a person's head) often failed.

Level 2

Level 2 PostScript adds no specific transparency features. However, by the use of patterns, arbitrary graphics can be painted through masks defined by any vector or text operations. This is, however, complex to implement. In addition, this too often reached implementation limits, and few if any application programs ever offered this technique.

Level 3

Level 3 PostScript adds a further transparency option for any raster image. A transparent color, or range of colors, can be applied; or a separate 1-bit mask can be used to provide an alpha channel.

Encapsulated PostScript

EPS files contain PostScript, which may be level 1, 2 or 3 and make use of the features above. A more subtle issue arises with the previews for EPS files that are typically used to show the view of the EPS file on screen. There are viable techniques for setting transparency in the preview. For example, a TIFF preview might use a TIFF alpha channel. However, many applications do not use this transparency information and will therefore show the preview as a rectangle. A semi-proprietary technique pioneered in Photoshop and adopted by a number of pre-press applications is to store a clipping path in a standard location of the EPS, and use that for display. In addition, few of the programs that generate EPS previews will generate transparency information in the preview. Some programs have sought to get around this by treating all white in the preview as transparent, but this too is problematic in the cases where some whites are not transparent.


More recently, applications have been appearing that ignore the preview altogether; they therefore get information on which parts of the preview to paint by interpreting the PostScript.

External links

• Image Modification Online - free tool for creating semi-transparent, translucent PNG images (http://www.where2link.com/image-modification-online/partial_transparency_translucency.aspx)

Refraction

Refraction is the change in direction of a wave due to a change in its transmission medium. Refraction is essentially a surface phenomenon, governed by the laws of conservation of energy and momentum. Due to the change of medium, the phase velocity of the wave is changed but its frequency remains constant. Refraction is most commonly observed when a wave passes from one medium to another at any angle other than 90° or 0°. Refraction of light is the most commonly observed phenomenon, but any type of wave can refract when it interacts with a medium, for example when sound waves pass from one medium into another or when water waves move into water of a different depth. Refraction is described by Snell's law, which states that for a given pair of media and a wave with a single frequency, the ratio of the sines of the angle of incidence θ1 and angle of refraction θ2 is equivalent to the ratio of phase velocities (v1 / v2) in the two media, or equivalently, to the opposite ratio of the indices of refraction (n2 / n1):

$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{v_1}{v_2} = \frac{n_2}{n_1}$$
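As a small numerical illustration of Snell's law (a Python sketch of our own; the function name is hypothetical), computing the refraction angle for light passing from air into water:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Angle of refraction from Snell's law: n1 sin(theta1) = n2 sin(theta2).

    Returns None when (n1/n2) sin(theta1) > 1, i.e. when there is no
    refracted ray (total internal reflection)."""
    s = n1 / n2 * math.sin(math.radians(theta1_deg))
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Light entering water (n = 1.33) from air (n = 1.00) at 45 degrees:
print(refraction_angle(45, 1.00, 1.33))  # about 32.1 degrees, bent toward the normal
```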

Light on air–plexi surface in this experiment undergoes refraction (lower ray) and reflection (upper ray).

In general, the incident wave is partially refracted and partially reflected; the details of this behavior are described by the Fresnel equations.

An image of the Golden Gate Bridge is refracted and bent by many differing three-dimensional drops of water.


Explanation

In optics, refraction is a phenomenon that often occurs when waves travel from a medium with a given refractive index to a medium with another at an oblique angle. At the boundary between the media, the wave's phase velocity is altered, usually causing a change in direction. Its wavelength increases or decreases but its frequency remains constant. For example, a light ray will refract as it enters and leaves glass, assuming there is a change in refractive index. A ray traveling along the normal (perpendicular to the boundary) will change speed, but not direction. Refraction still occurs in this case. Understanding of this concept led to the invention of lenses and the refracting telescope.

Refraction of light at the interface between two media of different refractive indices, with n2 > n1. Since the phase velocity is lower in the second medium (v2 < v1), the angle of refraction θ2 is less than the angle of incidence θ1; that is, the ray in the higher-index medium is closer to the normal.

Refraction can be seen when looking into a bowl of water. Air has a refractive index of about 1.0003, and water has a refractive index of about 1.33. If a person looks at a straight object, such as a pencil or straw, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water. The opposite correction must be made by archer fish.

An object (in this case a pencil) part immersed in water looks bent due to refraction: the light waves from X change direction and so seem to originate at Y. (More accurately, for any angle of view, Y should be vertically above X, and the pencil should appear shorter, not longer as shown.)

For small angles of incidence (measured from the normal, when sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive index of air to that of water; a worked example follows below. But as the angle of incidence approaches 90°, the apparent depth approaches zero, although reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but even earlier, as the angle of total internal reflection is approached, although the image also fades from view as this limit is approached.

The diagram on the right shows an example of refraction in water waves. Ripples travel from the left and pass over a shallower region inclined at an angle to the wavefront. The waves travel slower in the shallower water, so the wavelength decreases and the wave bends at the boundary. The dotted line represents the normal to the boundary. The dashed line represents the original direction of the waves. This phenomenon explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline.[1]

Diagram of refraction of water waves.

Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass has a higher refractive index than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency, a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies.

While refraction allows for phenomena such as rainbows, it may also produce peculiar optical phenomena, such as mirages and Fata Morgana. These are caused by the change of the refractive index of air with temperature. Recently some metamaterials have been created which have a negative refractive index. With metamaterials, we can also obtain total refraction phenomena when the wave impedances of the two media are matched. There is then no reflected wave.[2]

Also, since refraction can make objects appear closer than they are, it is responsible for allowing water to magnify objects. First, as light is entering a drop of water, it slows down. If the water's surface is not flat, then the light will be bent into a new path. This round shape will bend the light outwards and, as it spreads out, the image you see gets larger.
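As the worked example promised above for the small-angle apparent-depth rule (our own numbers): for a pool of real depth 2 m viewed from directly above,

$$\text{apparent depth} \approx \text{real depth} \times \frac{n_{\text{air}}}{n_{\text{water}}} \approx \frac{2\,\text{m}}{1.33} \approx 1.5\,\text{m},$$

so the bottom appears roughly half a metre closer than it really is.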

Refraction of light at the interface between two media.

A useful analogy in explaining the refraction of light would be to imagine a marching band as they march at an oblique angle from pavement (a fast medium) into mud (a slower medium). The marchers on the side that runs into the mud first will slow down first. This causes the whole band to pivot slightly toward the normal (make a smaller angle from the normal).

Clinical significance

In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision.[3]

Acoustics

In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent upon the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water.[4] Similar acoustic effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries;[5] however, beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to


address the meteorological effects of bending of sound rays in the lower atmosphere.[6]

Gallery

Refraction in a Perspex (acrylic) block. 


The straw appears to be broken, due to refraction of light as it emerges into the air. 

Photograph of refraction of waves in a ripple tank. 

Refraction at a steep angle of incidence 


References

[5] Mary Somerville (1840), On the Connexion of the Physical Sciences, J. Murray Publishers (originally by Harvard University)

External links

• Java illustration of refraction (http://www.falstad.com/ripple/ex-refraction.html)
• Java simulation of refraction through a prism (http://www.phy.hk/wiki/englishhtm/RefractionByPrism.htm)
• Reflections and Refractions in Ray Tracing (http://www.flipcode.com/archives/reflection_transmission.pdf), a simple but thorough discussion of the mathematics behind refraction and reflection.
• Flash refraction simulation - includes source (http://www.interactagram.com/physics/optics/refraction/), explains refraction and Snell's Law.
• Animations demonstrating optical refraction (http://qed.wikina.org/refraction/) by QED

Total internal reflection

Total internal reflection is a phenomenon that happens when a propagating wave strikes a medium boundary at an angle larger than a particular critical angle with respect to the normal to the surface. If the refractive index is lower on the other side of the boundary and the incident angle is greater than the critical angle, the wave cannot pass through and is entirely reflected. The critical angle is the angle of incidence above which total internal reflection occurs. This is particularly common as an optical phenomenon, where light waves are involved, but it occurs with many types of waves, such as electromagnetic waves in general or sound waves.

When a wave crosses a boundary between materials with different refractive indices, the wave will be partially refracted at the boundary surface, and partially reflected. However, if the angle of incidence is greater (i.e. the direction of propagation or ray is closer to being parallel to the boundary) than the critical angle – the angle of incidence at which light is refracted such that it travels along the boundary – then the wave will not cross the boundary and will instead be totally reflected back internally. This can only occur when the wave travels from a medium with a higher refractive index (n1) to one with a lower refractive index (n2). For example, it will occur with light when passing from glass to air, but not when passing from air to glass.

The larger the angle to the normal, the smaller is the fraction of light transmitted rather than reflected, until the angle at which total internal reflection occurs. (The color of the rays is to help distinguish the rays, and is not meant to indicate any color dependence.)

Total internal reflection in a block of acrylic


Optical description

Total internal reflection of light can be demonstrated using a semi-circular block of glass or plastic. A "ray box" shines a narrow beam of light (a "ray") onto the glass. The semi-circular shape ensures that a ray pointing towards the centre of the flat face will hit the curved surface at a right angle; this will prevent refraction at the air/glass boundary of the curved surface. At the glass/air boundary of the flat surface, what happens will depend on the angle, where θc is the critical angle (measured from the normal to the surface):

Total internal reflection in a semi-circular acrylic block

• If θ < θc, the ray will split. Some of the ray will reflect off the boundary, and some will refract as it passes through. This is not total internal reflection.
• If θ > θc, the entire ray reflects from the boundary. None passes through. This is called total internal reflection.

This physical property makes optical fibers useful and prismatic binoculars possible. It is also what gives diamonds their distinctive sparkle, as diamond has an unusually high refractive index.

Critical angle

The critical angle is the angle of incidence above which total internal reflection occurs. The angle of incidence is measured with respect to the normal at the refractive boundary (see diagram illustrating Snell's law). Consider a light ray passing from glass into air. The light emanating from the interface is bent towards the glass. When the incident angle is increased sufficiently, the transmitted angle (in air) reaches 90 degrees. At this point no light is transmitted into air. The critical angle is given by Snell's law,

$$n_1 \sin\theta_i = n_2 \sin\theta_t$$

Illustration of Snell's law, $n_1 \sin\theta_1 = n_2 \sin\theta_2$.

Rearranging Snell's law, we get the sine of the angle of incidence:

$$\sin\theta_i = \frac{n_2}{n_1}\,\sin\theta_t$$

To find the critical angle, we find the value of $\theta_i$ when $\theta_t = 90°$ and thus $\sin\theta_t = 1$. The resulting value of $\theta_i$ is equal to the critical angle $\theta_c$. Now we can solve for $\theta_i$, and we get the equation for the critical angle:

$$\theta_c = \theta_i = \arcsin\!\left(\frac{n_2}{n_1}\right)$$

If the incident ray is precisely at the critical angle, the refracted ray is tangent to the boundary at the point of incidence. If, for example, visible light were traveling through acrylic glass (with an index of refraction of approximately 1.50) into air (with an index of refraction of 1.00), the calculation would give the critical angle for light from acrylic into air, which is $\theta_c = \arcsin(1.00/1.50) \approx 41.8°$. Light incident on the border with an angle less than 41.8° would be partially transmitted, while light incident on the border at larger angles with respect to the normal would be totally internally reflected.

If the fraction $n_2/n_1$ is greater than 1, then arcsine is not defined, meaning that total internal reflection does not occur even at very shallow or grazing incident angles. So the critical angle is only defined when $n_2/n_1$ is less than 1.
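A small sketch of this computation (our own illustration), returning the critical angle only when it is defined:

```python
import math

def critical_angle(n1, n2):
    """Critical angle (degrees) for light going from index n1 into n2.

    Only defined when n2/n1 < 1, i.e. when heading toward a lower index."""
    if n2 / n1 >= 1.0:
        return None  # arcsine undefined: no total internal reflection
    return math.degrees(math.asin(n2 / n1))

print(critical_angle(1.50, 1.00))  # acrylic -> air: about 41.8 degrees
print(critical_angle(1.00, 1.33))  # air -> water: None
```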

Refraction of light at the interface between two media, including total internal reflection.

A special name is given to the angle of incidence that produces an angle of refraction of 90˚. It is called the critical angle.

Derivation of evanescent wave

An important side effect of total internal reflection is the propagation of an evanescent wave across the boundary surface. Essentially, even though the entire incident wave is reflected back into the originating medium, there is some penetration into the second medium at the boundary. The evanescent wave appears to travel along the boundary between the two materials, leading to the Goos-Hänchen shift.

If a plane wave, confined to the xz plane, is incident on a dielectric with an angle $\theta_I$ and wavevector $\mathbf{k}_I$, then a transmitted ray will be created with a corresponding angle of transmittance $\theta_T$ as shown in Fig. 1. The transmitted wavevector is given by:

$$\mathbf{k}_T = k_T \sin\theta_T\,\hat{x} + k_T \cos\theta_T\,\hat{z}$$

If $n_1 > n_2$ and the angle of incidence exceeds the critical angle, then $\sin\theta_T > 1$, since in the relation $\sin\theta_T = (n_1/n_2)\sin\theta_I$ obtained from Snell's law, $(n_1/n_2)\sin\theta_I$ is greater than one. As a result of this, $\cos\theta_T$ becomes complex:

$$\cos\theta_T = \sqrt{1 - \sin^2\theta_T} = i\sqrt{\sin^2\theta_T - 1}$$

The electric field of the transmitted plane wave is given by $\mathbf{E}_T = \mathbf{E}_0\, e^{i(\mathbf{k}_T \cdot \mathbf{r} - \omega t)}$, and so evaluating this further one obtains:

$$\mathbf{E}_T = \mathbf{E}_0\, e^{i(k_T x \sin\theta_T + k_T z \cos\theta_T - \omega t)}$$

Using the fact that $k_T = \omega n_2 / c$ and Snell's law, one finally obtains

$$\mathbf{E}_T = \mathbf{E}_0\, e^{-\kappa z}\, e^{i(k x - \omega t)}$$

where $\kappa = \frac{\omega}{c}\sqrt{n_1^2 \sin^2\theta_I - n_2^2}$ and $k = \frac{\omega n_1}{c}\sin\theta_I$.

This wave in the optically less dense medium is known as the evanescent wave. It is characterized by its propagation in the x direction and its exponential attenuation in the z direction. Although there is a field in the second medium, it can be shown that no energy flows across the boundary. The component of the Poynting vector in the direction normal to the boundary is finite, but its time average vanishes, whereas the other components of the Poynting vector (here the x-component only) and their time-averaged values are in general found to be finite.

Frustrated total internal reflection

Under "ordinary conditions" it is true that the creation of an evanescent wave does not affect the conservation of energy, i.e. the evanescent wave transmits zero net energy. However, if a third medium with a higher refractive index than the low-index second medium is placed within less than several wavelengths distance from the interface between the first medium and the second medium, the evanescent wave will be different from the one under "ordinary conditions" and it will pass energy across the second into the third medium. (See evanescent wave coupling.) This process is called "frustrated" total internal reflection (FTIR) and is very similar to quantum tunneling. The quantum tunneling model is mathematically analogous if one thinks of the electromagnetic field as being the wave function of the photon. The low index medium can be thought of as a potential barrier through which photons can tunnel.

When a glass of water is held firmly, ridges making up the fingerprints are made visible by frustrated total internal reflection. Light tunnels from the glass into the ridges through the very short air gap.

The transmission coefficient for FTIR is highly sensitive to the spacing between the high index media (the function is approximately exponential until the gap is almost closed), so this effect has often been used to modulate optical transmission and reflection with a large dynamic range. An example application of this principle is the multi-touch sensing technology for displays [1] as developed at the New York University’s Media Research Lab.

Phase shift upon total internal reflection

A lesser-known aspect of total internal reflection is that the reflected light has an angle-dependent phase shift between the reflected and incident light. Mathematically this means that the Fresnel reflection coefficient becomes a complex rather than a real number. This phase shift is polarization dependent and grows as the incidence angle deviates further from the critical angle toward grazing incidence. The polarization-dependent phase shift has long been known and was used by Fresnel to design the Fresnel rhomb, which makes it possible to transform circular polarization to linear polarization and vice versa for a wide range of wavelengths (colors), in contrast to the quarter-wave plate. The polarization-dependent phase shift is also the reason why TE and TM guided modes have different dispersion relations.


Applications

• Total internal reflection is the operating principle of optical fibers, which are used in endoscopes and telecommunications.
• Total internal reflection is the operating principle of automotive rain sensors, which control automatic windscreen/windshield wipers.
• Another application of total internal reflection is the spatial filtering of light.[2]
• Prismatic binoculars use the principle of total internal reflection to get a very clear image.
• Some multi-touch screens use frustrated total internal reflection in combination with a camera and appropriate software to pick up multiple targets.

Mirror like effect

• Gonioscopy employs total internal reflection to view the anatomical angle formed between the eye's cornea and iris.
• A gait analysis instrument, CatWalk XT,[3] uses frustrated total internal reflection in combination with a high speed camera to capture and analyze footprints of laboratory rodents.
• Optical fingerprinting devices use frustrated total internal reflection in order to record an image of a person's fingerprint without the use of ink.
• A total internal reflection fluorescence microscope uses the evanescent wave produced by TIR to excite fluorophores close to a surface. This is useful for the study of surface properties of biological samples.[4]

Examples in everyday life

Total internal reflection can be observed while swimming, when one opens one's eyes just under the water's surface. If the water is calm, its surface appears mirror-like.

One can demonstrate total internal reflection by filling a sink or bath with water, taking a glass tumbler, and placing it upside-down over the plug hole (with the tumbler completely filled with water). While water remains both in the upturned tumbler and in the sink surrounding it, the plug hole and plug are visible since the angle of refraction between glass and water is not greater than the critical angle. If the drain is opened and the tumbler is kept in position over the hole, the water in the tumbler drains out, leaving the glass filled with air, and this then acts as the plug. Viewing this from above, the tumbler now appears mirrored because light reflects off the air/glass interface.

Total internal reflection can be seen at the air-water boundary.

Another common example of total internal reflection is a critically cut diamond. This is what gives it maximum brilliance.


References

This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" [5] (in support of MIL-STD-188).

[1] http://www.cs.nyu.edu/~jhan/ftirsense/
[5] http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm

External links

• FTIR Touch Sensing (http://cs.nyu.edu/~jhan/ftirsense/index.html)
• Multi-Touch Interaction Research (http://cs.nyu.edu/~jhan/ftirtouch/index.html)
• Georgia State University (http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/totint.html)
• Total Internal Reflection (http://demonstrations.wolfram.com/TotalInternalReflection/) by Michael Schreiber, Wolfram Demonstrations Project
• Total Internal Reflection (http://www.stmary.ws/highschool/physics/home/notes/waves/TotalInternalReflection.htm) – St. Mary's Physics Online Notes
• Bowley, Roger (2009). "Total Internal Reflection" (http://www.sixtysymbols.com/videos/reflection.htm). Sixty Symbols. Brady Haran for the University of Nottingham.

List of refractive indices

Many materials have a well-characterized refractive index, but these indices depend strongly upon the frequency of light. Standard refractive index measurements are taken at the yellow doublet sodium D line, with a wavelength of 589 nanometres. There are also weaker dependencies on temperature, pressure/stress, et cetera, as well as on precise material compositions (presence of dopants, et cetera); for many materials and typical conditions, however, these variations are at the percent level or less. Thus, it is especially important to cite the source for an index measurement if precision is required.

In general, an index of refraction is a complex number with both a real and an imaginary part, where the latter indicates the strength of absorption loss at a particular wavelength; thus, the imaginary part is sometimes called the extinction coefficient κ. Such losses become particularly significant, for example, in metals at short (e.g. visible) wavelengths, and must be included in any description of the refractive index.

Refraction, critical angle and total internal reflection of light at the interface between two media.


List

Some representative refractive indices

Material                                              λ (nm)    n                   Ref.
Vacuum                                                          1 (by definition)
Air at STP                                                      1.000277

Gases at 0 °C and 1 atm
Air                                                   589.29    1.000293
Carbon dioxide                                        589.29    1.00045
Helium                                                589.29    1.000036
Hydrogen                                              589.29    1.000132

Liquids at 20 °C
Arsenic trisulfide and sulfur in methylene iodide               1.9                 [1]
Benzene                                               589.29    1.501
Carbon disulfide                                      589.29    1.628
Carbon tetrachloride                                  589.29    1.461
Ethyl alcohol (ethanol)                               589.29    1.361
Silicone oil                                                    1.52045             [2]
Water                                                 589.29    1.3330

Solids at room temperature
Titanium dioxide (rutile phase)                       589.29    2.496               [3]
Diamond                                               589.29    2.419
Strontium titanate                                    589.29    2.41
Amber                                                 589.29    1.55
Fused silica (also called fused quartz)               589.29    1.458
Sodium chloride                                       589.29    1.544

Other materials
Liquid helium                                                   1.025
Water ice                                                       1.31
Cornea (human)                                                  1.373/1.380/1.401   [4]
Lens (human)                                                    1.386 - 1.406
Acetone                                                         1.36
Ethanol                                                         1.36
Glycerol                                                        1.4729
Bromine                                                         1.661
Teflon AF                                                       1.315
Teflon                                                          1.35 - 1.38
Cytop                                                           1.34
Sylgard 184                                                     1.4118
PLA                                                             1.46
Acrylic glass                                                   1.490 - 1.492
Polycarbonate                                                   1.584 - 1.586
PMMA                                                            1.4893 - 1.4899
PETg                                                            1.57
PET                                                             1.5750
Crown glass (pure)                                              1.50 - 1.54
Flint glass (pure)                                              1.60 - 1.62
Crown glass (impure)                                            1.485 - 1.755
Flint glass (impure)                                            1.523 - 1.925
Pyrex (a borosilicate glass)                                    1.470
Cryolite                                                        1.338
Rock salt                                                       1.516
Sapphire                                                        1.762 - 1.778
Sugar solution, 25%                                             1.3723
Sugar solution, 50%                                             1.4200
Sugar solution, 75%                                             1.4774
Cubic zirconia                                                  2.15 - 2.18
Potassium niobate (KNbO3)                                       2.28
Silicon carbide                                                 2.65 - 2.69
Cinnabar (mercury sulfide)                                      3.02
Gallium(III) phosphide                                          3.5
Gallium(III) arsenide                                           3.927
Zinc oxide                                            390       2.4
Germanium                                                       4.01
Silicon                                               590       3.96


References

[1] Meyrowitz, R., A compilation and classification of immersion media of high index of refraction, American Mineralogist 40: 398 (1955) (http://www.minsocam.org/ammin/AM40/AM40_398.pdf)
[2] Silicon and Oil Refractive Index Standards (http://www.dcglass.com/htm/p-ri-oil.htm)
[3] RefractiveIndex.INFO - Refractive index and related constants (http://refractiveindex.info/?group=CRYSTALS&material=TiO2)

External links

• International Association for the Properties of Water and Steam (http://www.iapws.org/relguide/rindex.pdf)
• Ioffe institute, Russian Federation (http://www.ioffe.ru/SVA/NSM/nk/index.html)
• Crystran, United Kingdom (http://www.crystran.co.uk/)
• Jena University, Germany (http://www.astro.uni-jena.de/Laboratory/Database/jpdoc/f-dbase.html)
• Hyperphysics list of refractive indices (http://hyperphysics.phy-astr.gsu.edu/hbase/tables/indrf.html#c1)
• Luxpop: Index of refraction values and photonics calculations (http://www.luxpop.com/)
• Kaye and Laby Online (http://www.kayelaby.npl.co.uk/general_physics/2_5/2_5_8.html), provided by the National Physical Laboratory, UK
• List of Refractive Indices of Solvents (http://macro.lsu.edu/HowTo/solvents/Refractive Index.htm)

Schlick's approximation

In 3D computer graphics, Schlick's approximation is a formula for approximating the contribution of the Fresnel term in the specular reflection of light from a non-conducting interface (surface) between two media. According to Schlick's model, the specular reflection coefficient R can be approximated by:

$$R(\theta) = R_0 + (1 - R_0)(1 - \cos\theta)^5$$

where

$$R_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2$$

Here $\theta$ is the angle between the viewing direction and the half-angle direction, which is halfway between the incident light direction and the viewing direction, hence $\cos\theta = (H \cdot V)$. And $n_1$, $n_2$ are the indices of refraction of the two media at the interface, and $R_0$ is the reflection coefficient for light incoming parallel to the normal (i.e. the value of the Fresnel term when $\theta = 0$, or minimal reflection). In computer graphics, one of the interfaces is usually air, meaning that $n_1$ very well can be approximated as 1.
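A minimal sketch of the formula as it might appear in a ray tracer (our own illustration; the function name and default indices are assumptions, with n2 = 1.5 standing in for common glass):

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation of the Fresnel reflection coefficient.

    cos_theta is the cosine of the angle between the viewing direction
    and the half-angle direction (or the surface normal, for mirrors)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2   # reflectance at normal incidence
    return r0 + (1 - r0) * (1 - cos_theta) ** 5

print(schlick_reflectance(1.0))   # head-on: R0 = 0.04 for air/glass
print(schlick_reflectance(0.1))   # near grazing: reflectance rises sharply (~0.61)
```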

References

• Schlick, C. (1994). "An Inexpensive BRDF Model for Physically-based Rendering". Computer Graphics Forum 13 (3): 233. doi:10.1111/1467-8659.1330233 [1].

[1] http://dx.doi.org/10.1111%2F1467-8659.1330233


Bidirectional scattering distribution function

The definition of the BSDF (bidirectional scattering distribution function) is not well standardized. The term was probably introduced in 1991 by Paul Heckbert.[1] Most often it is used to name the general mathematical function which describes the way in which the light is scattered by a surface. However, in practice this phenomenon is usually split into the reflected and transmitted components, which are then treated separately as BRDF (bidirectional reflectance distribution function) and BTDF (bidirectional transmittance distribution function).

• BSDF is a superset and the generalization of the BRDF and BTDF. The concept behind all BxDF functions could be described as a black box with the inputs being any two angles, one for the incoming (incident) ray and the second one for the outgoing (reflected or transmitted) ray at a given point of the surface. The output of this black box is the value defining the ratio between the incoming and the outgoing light energy for the given couple of angles. The content of the black box may be a mathematical formula which more or less accurately tries to model and approximate the actual surface behavior, or an algorithm which produces the output based on discrete samples of measured data. This implies that the function is 4 (+1) dimensional (4 values for two 3D angles + 1 optional for the wavelength of the light), which means that it cannot be simply represented by a 2D graph, and not even by a 3D graph. Each 2D or 3D graph sometimes seen in the literature shows only a slice of the function.
• Some tend to use the term BSDF simply as a category name covering the whole family of BxDF functions.

BSDF: BRDF + BTDF

• The term BSDF is sometimes used in a slightly different context: for the function describing the amount of the scatter (not scattered light), simply as a function of the incident light angle. An example to illustrate this context: for a perfectly Lambertian surface, BSDF(angle) = const. This approach is used, for instance, to verify the output quality by the manufacturers of glossy surfaces.
• Another recent usage of the term BSDF can be seen in some 3D packages, when vendors use it as a 'smart' category to encompass simple, well-known CG algorithms like Phong, Blinn–Phong, etc.
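To make the "black box" description above concrete, here is a minimal sketch of our own (not a standard API) of the simplest possible BxDF, a Lambertian BRDF, taking two directions in and returning a single scattering ratio:

```python
import math

def lambertian_brdf(albedo, incoming_dir, outgoing_dir):
    """The BxDF 'black box': two direction inputs, one scalar ratio out.

    For a perfectly diffuse (Lambertian) surface the ratio is constant,
    albedo / pi, so both direction arguments are ignored here; a more
    elaborate model (Phong, Oren-Nayar, measured data) would use them."""
    return albedo / math.pi

print(lambertian_brdf(0.8, (0, 0, 1), (0.5, 0, 0.866)))  # ~0.2546 per steradian
```

The same function signature could hide an analytic formula or a lookup into measured samples, which is exactly the black-box idea.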


Overview of the BxDF functions

• BSDF (bidirectional scattering distribution function) is the most general function.
• BSSRDF (bidirectional surface scattering reflectance distribution function, also written bidirectional scattering-surface reflectance distribution function)[2] describes the relation between outgoing radiance and the incident flux, including phenomena like subsurface scattering (SSS). The BSSRDF describes how light is transported between any two rays that hit a surface.
• BRDF (bidirectional reflectance distribution function) is a simplified BSSRDF, assuming that light enters and leaves at the same point (see the image on the right).

BRDF vs. BSSRDF

• BTDF (Bidirectional transmittance distribution function) is similar to BRDF but for the opposite side of the surface. (see the top image).

References

1. ^ Eric Veach (1997), "Robust Monte Carlo Methods for Light Transport Simulation" (http://graphics.stanford.edu/papers/veach_thesis/thesis-bw.pdf), page 86 (http://graphics.stanford.edu/papers/veach_thesis/chapter3.ps), citing Paul Heckbert (1991), "Simulating Global Illumination Using Adaptive Meshing", PhD thesis, University of California, Berkeley, page 26.
2. ^ Randall Rauwendaal, "Rendering General BSDFs and BSSDFs" (http://graphics.cs.ucdavis.edu/~bcbudge/ecs298_2004/General_BSDFs_BSSDFs.ppt)
3. ^ The original definition in Nicodemus et al. 1977 has scattering surface, but somewhere along the way, the word ordering was reversed.


Object Intersection

Line–sphere intersection

In analytic geometry, a line and a sphere can intersect in three ways: no intersection at all, at exactly one point, or in two points. Methods for distinguishing these cases, and determining equations for the points in the latter cases, are useful in a number of circumstances. For example, this is a common calculation to perform during ray tracing (Eberly 2006:698).

Calculation using vectors in 3D

The three possible line-sphere intersections: 1. No intersection. 2. Point intersection. 3. Two point intersection.

In vector notation, the equations are as follows:

Equation for a sphere:

$$\left\Vert \mathbf{x} - \mathbf{c} \right\Vert^2 = r^2$$

• $\mathbf{c}$ - center point
• $r$ - radius
• $\mathbf{x}$ - points on the sphere

Equation for a line starting at $\mathbf{o}$:

$$\mathbf{x} = \mathbf{o} + d\,\mathbf{l}$$

• $d$ - distance along line from starting point
• $\mathbf{l}$ - direction of line (a unit vector)
• $\mathbf{o}$ - origin of the line
• $\mathbf{x}$ - points on the line

Searching for points that are on the line and on the sphere means combining the equations and solving for $d$:

Equations combined: $\left\Vert \mathbf{o} + d\,\mathbf{l} - \mathbf{c} \right\Vert^2 = r^2$

Expanded: $d^2 (\mathbf{l}\cdot\mathbf{l}) + 2d\,\mathbf{l}\cdot(\mathbf{o}-\mathbf{c}) + (\mathbf{o}-\mathbf{c})\cdot(\mathbf{o}-\mathbf{c}) = r^2$

Rearranged: $d^2 (\mathbf{l}\cdot\mathbf{l}) + 2d\,\mathbf{l}\cdot(\mathbf{o}-\mathbf{c}) + (\mathbf{o}-\mathbf{c})\cdot(\mathbf{o}-\mathbf{c}) - r^2 = 0$

The form of a quadratic formula is now observable. (This quadratic equation is an example of Joachimsthal's Equation [1].)

$$d = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$

where

• $a = \mathbf{l}\cdot\mathbf{l} = \left\Vert \mathbf{l} \right\Vert^2$
• $b = 2\,\mathbf{l}\cdot(\mathbf{o}-\mathbf{c})$
• $c = (\mathbf{o}-\mathbf{c})\cdot(\mathbf{o}-\mathbf{c}) - r^2$

Simplified: note that $\mathbf{l}$ is a unit vector, and thus $\left\Vert \mathbf{l} \right\Vert^2 = 1$. Thus, we can simplify this further to

$$d = -\mathbf{l}\cdot(\mathbf{o}-\mathbf{c}) \pm \sqrt{\left(\mathbf{l}\cdot(\mathbf{o}-\mathbf{c})\right)^2 - \left\Vert \mathbf{o}-\mathbf{c} \right\Vert^2 + r^2}$$

• If the value under the square root, $\left(\mathbf{l}\cdot(\mathbf{o}-\mathbf{c})\right)^2 - \left\Vert \mathbf{o}-\mathbf{c} \right\Vert^2 + r^2$, is less than zero, then it is clear that no solutions exist, i.e. the line does not intersect the sphere (case 1).
• If it is zero, then exactly one solution exists, i.e. the line just touches the sphere in one point (case 2).
• If it is greater than zero, two solutions exist, and thus the line touches the sphere in two points (case 3).
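The derivation above translates directly into code. A Python sketch of our own (a production ray tracer would typically also discard negative values of d, which lie behind the ray origin):

```python
import math

def ray_sphere_intersect(o, l, c, r):
    """Distances d along the ray x = o + d*l (l a unit vector) to the
    sphere with center c and radius r.

    Returns [] (case 1), [d] (case 2, tangent) or [d1, d2] (case 3)."""
    oc = tuple(oi - ci for oi, ci in zip(o, c))
    b = sum(li * oci for li, oci in zip(l, oc))       # l . (o - c)
    disc = b * b - sum(x * x for x in oc) + r * r     # value under the root
    if disc < 0:
        return []                # case 1: no intersection
    if disc == 0:
        return [-b]              # case 2: ray is tangent to the sphere
    s = math.sqrt(disc)
    return [-b - s, -b + s]      # case 3: two intersection points

print(ray_sphere_intersect((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # [4.0, 6.0]
```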

References

• David H. Eberly (2006), 3D game engine design: a practical approach to real-time computer graphics, 2nd edition, Morgan Kaufmann. ISBN 0-12-229063-1

[1] http://mathworld.wolfram.com/JoachimsthalsEquation.html

Line-plane intersection

In analytic geometry, the intersection of a line and a plane can be the empty set, a point, or a line. Distinguishing these cases, and determining equations for the point and line in the latter cases, have use, for example, in computer graphics, motion planning, and collision detection.

The three possible plane-line intersections: 1. No intersection. 2. Point intersection. 3. Line intersection.


Parametric form

A line is described by all points that are a given direction from a point. Thus a general point on a line can be represented as

$$\mathbf{l}_a + (\mathbf{l}_b - \mathbf{l}_a)\,t, \quad t \in \mathbb{R}$$

where $\mathbf{l}_a$ and $\mathbf{l}_b$ are two distinct points along the line. Similarly, a general point on a plane can be represented as

$$\mathbf{p}_0 + (\mathbf{p}_1 - \mathbf{p}_0)\,u + (\mathbf{p}_2 - \mathbf{p}_0)\,v, \quad u, v \in \mathbb{R}$$

where $\mathbf{p}_0$, $\mathbf{p}_1$, $\mathbf{p}_2$ are three points in the plane which are not co-linear.

The intersection of line and plane.

The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation:

$$\mathbf{l}_a + (\mathbf{l}_b - \mathbf{l}_a)\,t = \mathbf{p}_0 + (\mathbf{p}_1 - \mathbf{p}_0)\,u + (\mathbf{p}_2 - \mathbf{p}_0)\,v$$

This can be simplified to

$$\mathbf{l}_a - \mathbf{p}_0 = (\mathbf{l}_a - \mathbf{l}_b)\,t + (\mathbf{p}_1 - \mathbf{p}_0)\,u + (\mathbf{p}_2 - \mathbf{p}_0)\,v$$

which can be expressed in matrix form as:

$$\mathbf{l}_a - \mathbf{p}_0 = \begin{bmatrix} \mathbf{l}_a - \mathbf{l}_b & \mathbf{p}_1 - \mathbf{p}_0 & \mathbf{p}_2 - \mathbf{p}_0 \end{bmatrix} \begin{bmatrix} t \\ u \\ v \end{bmatrix}$$

The point of intersection is then equal to

$$\mathbf{l}_a + (\mathbf{l}_b - \mathbf{l}_a)\,t$$

If the line is parallel to the plane then the vectors $\mathbf{l}_a - \mathbf{l}_b$, $\mathbf{p}_1 - \mathbf{p}_0$, and $\mathbf{p}_2 - \mathbf{p}_0$ will be linearly dependent and the matrix will be singular. This situation will also occur when the line lies in the plane.

If the solution satisfies the condition $0 \le t \le 1$, then the intersection point is on the line between $\mathbf{l}_a$ and $\mathbf{l}_b$.

If the solution satisfies $u, v \in [0, 1]$ and $u + v \le 1$, then the intersection point is in the plane inside the triangle spanned by the three points $\mathbf{p}_0$, $\mathbf{p}_1$ and $\mathbf{p}_2$.

This problem is typically solved by expressing it in matrix form, and inverting it:

$$\begin{bmatrix} t \\ u \\ v \end{bmatrix} = \begin{bmatrix} \mathbf{l}_a - \mathbf{l}_b & \mathbf{p}_1 - \mathbf{p}_0 & \mathbf{p}_2 - \mathbf{p}_0 \end{bmatrix}^{-1} (\mathbf{l}_a - \mathbf{p}_0)$$


Algebraic form

In vector notation, a plane can be expressed as the set of points $\mathbf{p}$ for which

$$(\mathbf{p} - \mathbf{p}_0) \cdot \mathbf{n} = 0$$

where $\mathbf{n}$ is a normal vector to the plane and $\mathbf{p}_0$ is a point on the plane. The vector equation for a line is

$$\mathbf{p} = d\,\mathbf{l} + \mathbf{l}_0, \quad d \in \mathbb{R}$$

where $\mathbf{l}$ is a vector in the direction of the line, $\mathbf{l}_0$ is a point on the line, and $d$ is a scalar in the real number domain.

Substitute the line into the plane equation to get

$$(d\,\mathbf{l} + \mathbf{l}_0 - \mathbf{p}_0) \cdot \mathbf{n} = 0$$

Distribute to get

$$d\,(\mathbf{l} \cdot \mathbf{n}) + (\mathbf{l}_0 - \mathbf{p}_0) \cdot \mathbf{n} = 0$$

And solve for $d$:

$$d = \frac{(\mathbf{p}_0 - \mathbf{l}_0) \cdot \mathbf{n}}{\mathbf{l} \cdot \mathbf{n}}$$

If the line starts outside the plane and is parallel to the plane, there is no intersection. In this case, the above denominator will be zero and the numerator will be non-zero. If the line starts inside the plane and is parallel to the plane, the line intersects the plane everywhere. In this case, both the numerator and denominator above will be zero. In all other cases, the line intersects the plane once, and $d$ represents the intersection as the distance along the line from $\mathbf{l}_0$, i.e. $\mathbf{p} = d\,\mathbf{l} + \mathbf{l}_0$.
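A sketch of the algebraic form in Python (our own illustration; an epsilon stands in for the exact zero test to cope with floating point):

```python
def ray_plane_intersect(l0, l, p0, n, eps=1e-9):
    """Solve d = ((p0 - l0) . n) / (l . n) for the line p = l0 + d*l.

    Returns the intersection point, or None when the line is parallel
    to the plane (either missing it entirely or lying inside it)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(l, n)
    if abs(denom) < eps:
        return None  # parallel: no single intersection point
    d = dot(tuple(p - q for p, q in zip(p0, l0)), n) / denom
    return tuple(q + d * x for q, x in zip(l0, l))

# Ray from the origin along +z against the plane z = 3 (normal (0, 0, 1)):
print(ray_plane_intersect((0, 0, 0), (0, 0, 1), (0, 0, 3), (0, 0, 1)))  # (0, 0, 3)
```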

Uses

In the ray tracing method of computer graphics a surface can be represented as a set of pieces of planes. The intersection of a ray of light with each plane is used to produce an image of the surface. In vision-based 3D reconstruction, a subfield of computer vision, depth values are commonly measured by the so-called triangulation method, which finds the intersection between the light plane and the ray reflected toward the camera.

The algorithm can be generalised to cover intersection with other planar figures, in particular, the intersection of a polyhedron with a line.


Point in polygon

In computational geometry, the point-in-polygon (PIP) problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon. It is a special case of point location problems and finds applications in areas that deal with processing geometrical data, such as computer graphics, computer vision, geographical information systems (GIS), motion planning, and CAD.

An early description of the problem in computer graphics shows two common approaches (ray casting and angle summation) in use as early as 1974.[1] An attempt of computer graphics veterans to trace the history of the problem and some tricks for its solution can be found in an issue of the Ray Tracing News.[2]

An example of a simple polygon

Ray casting algorithm

One simple way of finding whether the point is inside or outside a simple polygon is to test how many times a ray, starting from the point and going in ANY fixed direction, intersects the edges of the polygon. If the point in question is not on the boundary of the polygon, the number of intersections is an even number if the point is outside, and it is odd if inside. This algorithm is sometimes also known as the crossing number algorithm or the even-odd rule algorithm. The algorithm is based on a simple observation that if a point moves along a ray from infinity to the probe point and if it crosses the boundary of a polygon, possibly several times, then it alternately goes from the outside to inside, then from the inside to the outside, etc. As a result, after every two "border crossings" the moving point goes outside. This observation may be mathematically proved using the Jordan curve theorem.

The number of intersections for a ray passing from the exterior of the polygon to any point; if odd, it shows that the point lies inside the polygon. If it is even, the point lies outside the polygon; this test also works in three dimensions.

If implemented on a computer with finite precision arithmetics, the results may be incorrect if the point lies very close to that boundary, because of rounding errors. This is not normally a concern, as speed is much more important than complete accuracy in most applications of computer graphics. However, for a formally correct computer program, one would have to introduce a numerical tolerance ε and test in line whether P lies within ε of L, in which case the algorithm should stop and report "P lies very close to the boundary."

Most implementations of the ray casting algorithm consecutively check intersections of a ray with all sides of the polygon in turn. In this case the following problem must be addressed. If the ray passes exactly through a vertex of a polygon, then it will intersect 2 segments at their endpoints. While it is OK for the case of the topmost vertex in the example or the vertex between crossings 4 and 5, the case of the rightmost vertex (in the example) requires that we count one intersection for the algorithm to work correctly. A similar problem arises with horizontal segments that happen to fall on the ray. The issue is solved as follows: if the intersection point is a vertex of a tested polygon side, then the intersection counts only if the second vertex of the side lies below the ray. This is effectively equivalent to considering vertices on the ray as lying slightly above the ray.

Once again, the case of the ray passing through a vertex may pose numerical problems in finite precision arithmetics: for two sides adjacent to the same vertex the straightforward computation of the intersection with a ray may not give

Point in polygon the vertex in both cases. If the polygon is specified by its vertices, then this problem is eliminated by checking the y-coordinates of the ray and the ends of the tested polygon side before actual computation of the intersection. In other cases, when polygon sides are computed from other types of data, other tricks must be applied for the numerical robustness of the algorithm.
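The crossing test is short enough to state in full. The following Python sketch (the function name and structure are illustrative, not from any particular library) implements the vertex rule above by counting an edge only when its endpoints straddle the ray's height, which is equivalent to treating vertices on the ray as lying slightly above it:

    def point_in_polygon(px, py, vertices):
        """Even-odd (crossing number) test for a simple polygon.

        vertices: list of (x, y) tuples in order; the polygon is closed
        implicitly between the last and the first vertex.
        Returns True if (px, py) is strictly inside.
        """
        inside = False
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            # The horizontal ray from (px, py) going right crosses this
            # edge only if the endpoints straddle the ray's y-coordinate.
            # The strict/non-strict comparison implements the rule
            # "a vertex on the ray counts as slightly above the ray";
            # horizontal edges (y1 == y2) are skipped automatically.
            if (y1 > py) != (y2 > py):
                # x-coordinate where the edge crosses the ray's line
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    # Example: a unit square
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(point_in_polygon(0.5, 0.5, square))  # True
    print(point_in_polygon(1.5, 0.5, square))  # False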

Winding number algorithm

Another algorithm is to compute the given point's winding number with respect to the polygon. If the winding number is non-zero, the point lies inside the polygon. One way to compute the winding number is to sum up the angles subtended by each side of the polygon. However, this involves costly inverse trigonometric functions, which generally makes this algorithm slower than the ray casting algorithm. Luckily, these inverse trigonometric functions do not need to be computed. Since the result, the sum of all angles, can only add up to 0 or 2π (or multiples of 2π), it is sufficient to track through which quadrants the polygon winds as it turns around the test point, which makes the winding number algorithm comparable in speed to counting the boundary crossings.
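In code, this trig-free idea usually takes the form of signed edge-crossing counts rather than explicit quadrant bookkeeping; the sketch below follows the well-known crossing formulation of the winding number (the helper names are illustrative):

    def is_left(x0, y0, x1, y1, px, py):
        # 2-D cross product: > 0 if P is left of the directed line from
        # (x0, y0) to (x1, y1), < 0 if right, == 0 if on the line.
        return (x1 - x0) * (py - y0) - (px - x0) * (y1 - y0)

    def winding_number(px, py, vertices):
        """Winding number of the polygon around (px, py); non-zero => inside."""
        wn = 0
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if y1 <= py:
                # Upward crossing with P strictly to the left of the edge.
                if y2 > py and is_left(x1, y1, x2, y2, px, py) > 0:
                    wn += 1
            else:
                # Downward crossing with P strictly to the right of the edge.
                if y2 <= py and is_left(x1, y1, x2, y2, px, py) < 0:
                    wn -= 1
        return wn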

Comparison

For simple polygons, both algorithms will always give the same results for all points. However, for complex polygons, the algorithms may give different results for points in the regions where the polygon intersects itself, where the polygon does not have a clearly defined inside and outside. In this case, the former algorithm is called the even-odd rule. One solution is to transform (complex) polygons into simpler, even-odd-equivalent ones before the intersection check.[3]

Point in polygon queries

The point in polygon problem may be considered in the general repeated geometric query setting: given a single polygon and a sequence of query points, quickly find the answer for each query point. Clearly, any of the general approaches for planar point location may be used. Simpler solutions are available for some special polygons.

Special cases

Simpler algorithms are possible for monotone polygons, star-shaped polygons, and convex polygons.

References

[1] Ivan Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, vol. 6 no. 1, 1974.
[2] "Point in Polygon, One More Time..." (http://jedi.ks.uiuc.edu/~johns/raytracer/rtn/rtnv3n4.html#art22), Ray Tracing News, vol. 3 no. 4, October 1, 1990.


Efficiency Schemes

Spatial index

A spatial database is a database that is optimized to store and query data that represents objects defined in a geometric space. Most spatial databases allow representing simple geometric objects such as points, lines and polygons. Some spatial databases handle more complex structures such as 3D objects, topological coverages, linear networks, and TINs. While typical databases are designed to manage various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types efficiently; these types are typically called geometry or feature. The Open Geospatial Consortium created the Simple Features specification, which sets standards for adding spatial functionality to database systems.[1]

Features of spatial databases

Database systems use indexes to quickly look up values, and the way that most databases index data is not optimal for spatial queries. Instead, spatial databases use a spatial index to speed up database operations.

In addition to typical SQL queries such as SELECT statements, spatial databases can perform a wide variety of spatial operations. The following operations and many more are specified by the Open Geospatial Consortium standard:

• Spatial Measurements: Compute line length, polygon area, the distance between geometries, etc.
• Spatial Functions: Modify existing features to create new ones, for example by providing a buffer around them, intersecting features, etc.
• Spatial Predicates: Allow true/false queries about spatial relationships between geometries. Examples include "do two polygons overlap?" or "is there a residence located within a mile of the area where we are planning to build the landfill?" (see DE-9IM)
• Geometry Constructors: Create new geometries, usually by specifying the vertices (points or nodes) which define the shape.
• Observer Functions: Queries which return specific information about a feature, such as the location of the center of a circle.

Some databases support only simplified or modified sets of these operations, especially in the case of NoSQL systems like MongoDB and CouchDB.

Spatial index

Spatial indices are used by spatial databases (databases which store information related to objects in space) to optimize spatial queries. Conventional index types do not efficiently handle spatial queries such as how far two points differ, or whether points fall within a spatial area of interest. Common spatial index methods include:

• Grid (spatial index)
• Z-order (curve)
• Quadtree
• Octree
• UB-tree
• R-tree: Typically the preferred method for indexing spatial data.[citation needed] Objects (shapes, lines and points) are grouped using the minimum bounding rectangle (MBR). Objects are added to the MBR within the index that will lead to the smallest increase in its size.
• R+ tree
• R* tree
• Hilbert R-tree
• X-tree
• kd-tree
• m-tree: an m-tree index can be used for the efficient resolution of similarity queries on complex objects compared using an arbitrary metric.

Spatial database systems

• All OpenGIS Specifications compliant products[2]
• Open source spatial databases and APIs, some of which are OpenGIS compliant[3]
• Boeing's Spatial Query Server spatially enables Sybase ASE.
• Smallworld VMDS, the native GE Smallworld GIS database
• SpatiaLite extends SQLite with spatial datatypes, functions, and utilities.
• IBM DB2 Spatial Extender can be used to enable any edition of DB2, including the free DB2 Express-C, with support for spatial types.
• Oracle Spatial
• Microsoft SQL Server has had support for spatial types since version 2008.
• PostgreSQL DBMS (database management system) uses the spatial extension PostGIS to implement the standardized datatype geometry and corresponding functions.
• MySQL DBMS implements the datatype geometry plus some spatial functions that have been implemented according to the OpenGIS specifications.[4] However, in MySQL version 5.5 and earlier, functions that test spatial relationships are limited to working with minimum bounding rectangles rather than the actual geometries. MySQL versions earlier than 5.0.16 only supported spatial data in MyISAM tables. As of MySQL 5.0.16, InnoDB, NDB, BDB, and ARCHIVE also support spatial features.
• Neo4j: a graph database that can build 1D and 2D indexes as B-tree, Quadtree and Hilbert curve directly in the graph
• AllegroGraph: a graph database that provides a novel mechanism for efficient storage and retrieval of two-dimensional geospatial coordinates for Resource Description Framework data. It includes an extension syntax for SPARQL queries.
• MongoDB supports geospatial indexes in 2D.
• Esri has a number of both single-user and multiuser geodatabases.
• SpaceBase[5] is a real-time spatial database.[6]
• CouchDB, a document based database system that can be spatially enabled by a plugin called GeoCouch
• CartoDB[7] is a cloud based geospatial database on top of PostgreSQL with PostGIS.
• StormDB[8] is an upcoming cloud based database on top of PostgreSQL with geospatial capabilities.
• SpatialDB[9] by MineRP is the world's first open standards (OGC) spatial database with spatial type extensions for the mining industry.[10]


References

[1] OGC Homepage (http://www.opengeospatial.org)
[2] All Registered Products at opengeospatial.org (http://www.opengeospatial.org/resources/?page=products)
[3] Open Source GIS website (http://opensourcegis.org/)
[4] http://dev.mysql.com/doc/refman/5.5/en/gis-introduction.html
[5] http://paralleluniverse.co
[6] SpaceBase product page on the Parallel Universe website (http://paralleluniverse.co/product)
[7] http://cartodb.com/
[8] http://www.stormdb.com
[9] http://www.minerpsolutions.com/en/software/enterprise/spatialDB
[10] SpatialDB product page on the MineRP website (http://www.minerpsolutions.com/en/software/enterprise/spatialDB)

Further reading

• Spatial Databases: A Tour (http://www.spatial.cs.umn.edu/Book/), Shashi Shekhar and Sanjay Chawla, Prentice Hall, 2003 (ISBN 0-13-017480-7)
• ESRI Press (http://gis.esri.com/esripress/). ESRI Press titles include Modeling Our World: The ESRI Guide to Geodatabase Design, and Designing Geodatabases: Case Studies in GIS Data Modeling, 2005 Ben Franklin Award (http://www.pma-online.org/benfrank2005_winnerfinalist.cfm) winner, PMA, The Independent Book Publishers Association.
• Spatial Databases - With Application to GIS (http://mkp.com/books/data-management), Philippe Rigaux, Michel Scholl and Agnes Voisard, Morgan Kaufmann Publishers, 2002 (ISBN 1-55860-588-6)

External links

• An introduction to PostgreSQL PostGIS (http://www.mapbender.org/presentations/Spatial_Data_Management_Arnulf_Christl/html/)
• PostgreSQL PostGIS as components in a Service Oriented Architecture (SOA) (http://www.gisdevelopment.net/magazine/years/2006/jan/18_1.htm)
• A Trigger Based Security Alarming Scheme for Moving Objects on Road Networks (http://www.springerlink.com/content/vn7446g28924jv5v/), Sajimon Abraham, P. Sojan Lal, published by Springer Berlin / Heidelberg, 2008.


Grid

In the context of a spatial index, a grid (a.k.a. "mesh", also "global grid" if it covers the entire surface of the globe) is a regular tessellation of a manifold or 2-D surface that divides it into a series of contiguous cells, which can then be assigned unique identifiers and used for spatial indexing purposes. A wide variety of such grids have been proposed or are currently in use, including grids based on "square" or "rectangular" cells, triangular grids or meshes, hexagonal grids, grids based on diamond-shaped cells, and possibly more. The range is broad and the possibilities are expanding.

Types of grids

"Square" or "rectangular" grids are frequently the simplest in use, i.e. for translating spatial information expressed in geographic coordinates (latitude and longitude) into and out of the grid system. Such grids may or may not be aligned with the gridlines of latitude and longitude; for example, Marsden squares, World Meteorological Organization squares, c-squares and others are aligned, while UTM and various national (i.e. local) grid based systems such as the British national grid reference system are not. In general, these grids fall into two classes: "equal angle" grids, whose cell sizes are constant in degrees of latitude and longitude but unequal in area (particularly with varying latitude), and "equal area" grids, whose cell sizes are constant in distance on the ground (e.g. 100 km, 10 km) but not in degrees of longitude in particular.

The most influential triangular grid is the "Quaternary Triangular Mesh" or QTM, developed by Geoffrey Dutton in the early 1980s. It eventually resulted in a thesis entitled "A Hierarchical Coordinate System for Geoprocessing and Cartography" that was published in 1999 (see the publications list on Dutton's Spatial Effects [1] website). This grid was also employed as the basis of the rotatable globe that forms part of the Microsoft Encarta product.

For a discussion of Discrete Global Grid Systems featuring hexagonal and other grids (including diamond-shaped), the paper of Sahr et al. (2003)[2] is recommended reading. In general, triangular and hexagonal grids are constructed so as to better approach the goals of equal area (or nearly so) plus more seamless coverage across the poles, which tend to be a problem area for square or rectangular grids since the cell width diminishes to nothing at the pole and the cells adjacent to the pole then become 3- rather than 4-sided. Criteria for optimal discrete global gridding have been proposed by both Goodchild and Kimerling,[3] in which equal-area cells are deemed of prime importance.

Quadtrees are a specialised form of grid in which the resolution of the grid is varied according to the nature and/or complexity of the data to be fitted, across the 2-D space, and are considered separately under that heading.

Polar grids utilize the polar coordinate system: circles at intervals of a prescribed radius are divided into sectors of a certain angle. Coordinates are given as the radius and angle from the center of the grid (pole).

Grid-based spatial indexing

In practice, construction of grid-based spatial indices entails allocation of relevant objects to their position or positions in the grid, then creating an index of object identifiers vs. grid cell identifiers for rapid access. This is an example of a "space-driven" or data-independent method, as opposed to a "data-driven" or data-dependent method, as discussed further in Rigaux et al. (2002).[4] A grid-based spatial index has the advantage that the structure of the index can be created first, and data added on an ongoing basis without requiring any change to the index structure; indeed, if a common grid is used by disparate data collecting and indexing activities, such indices can easily be merged from a variety of sources. On the other hand, data-driven structures such as R-trees can be more efficient for data storage and speed at search execution time, though they are generally tied to the internal structure of a given data storage system. The use of such spatial indices is not limited to digital data; the "index" section of any global or street atlas commonly contains a list of named features (towns, streets, etc.) with associated grid square identifiers, and may be considered a perfectly acceptable example of a spatial index (in this case, typically organised by feature name, though the reverse is conceptually also possible).
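As a sketch of such a space-driven index, the following hypothetical Python class hashes 2-D bounding boxes into fixed-size cells and answers queries with candidate object identifiers; exact geometry tests would follow as a second filtering step (all names are illustrative):

    from collections import defaultdict
    from math import floor

    class GridIndex:
        """Minimal grid-based spatial index over 2-D bounding boxes."""

        def __init__(self, cell_size):
            self.cell_size = cell_size
            self.cells = defaultdict(list)   # (i, j) -> [object ids]

        def _cell_range(self, xmin, ymin, xmax, ymax):
            # All cell coordinates the box overlaps.
            i0, j0 = floor(xmin / self.cell_size), floor(ymin / self.cell_size)
            i1, j1 = floor(xmax / self.cell_size), floor(ymax / self.cell_size)
            return ((i, j) for i in range(i0, i1 + 1)
                           for j in range(j0, j1 + 1))

        def insert(self, obj_id, bbox):
            # The grid structure itself never changes; data can be added
            # on an ongoing basis, as noted in the text.
            for cell in self._cell_range(*bbox):
                self.cells[cell].append(obj_id)

        def query(self, bbox):
            """Candidate object ids whose cells overlap the query box."""
            hits = set()
            for cell in self._cell_range(*bbox):
                hits.update(self.cells[cell])
            return hits

    index = GridIndex(cell_size=10.0)
    index.insert("road-17", (3.0, 4.0, 25.0, 6.0))
    print(index.query((0.0, 0.0, 10.0, 10.0)))  # {'road-17'}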

Other uses

The individual cells of a grid system can also be useful as units of aggregation, for example as a precursor to data analysis, presentation, mapping, etc. For some applications (e.g., statistical analysis), equal-area cells may be preferred, although for others this may not be a prime consideration.

In computer science, one often needs to find all the cells a ray passes through in a grid (for ray tracing or collision detection); this is called grid traversal.
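Grid traversal is usually implemented as an incremental stepping scheme in the style of Amanatides and Woo's voxel traversal; the 2-D sketch below is an illustrative reconstruction of that idea, not code from a specific renderer:

    from math import floor, inf

    def grid_traverse(ox, oy, dx, dy, cell_size, max_steps=64):
        """Yield the (i, j) cells a ray visits, in order (2-D DDA)."""
        i, j = floor(ox / cell_size), floor(oy / cell_size)
        step_i = 1 if dx > 0 else -1
        step_j = 1 if dy > 0 else -1
        # Ray parameter t at the next vertical / horizontal cell border.
        next_x = (i + (dx > 0)) * cell_size
        next_y = (j + (dy > 0)) * cell_size
        t_max_x = (next_x - ox) / dx if dx != 0 else inf
        t_max_y = (next_y - oy) / dy if dy != 0 else inf
        # Increase in t between successive borders along each axis.
        t_delta_x = cell_size / abs(dx) if dx != 0 else inf
        t_delta_y = cell_size / abs(dy) if dy != 0 else inf
        for _ in range(max_steps):
            yield i, j
            # Step into whichever neighboring cell the ray enters first.
            if t_max_x < t_max_y:
                i += step_i
                t_max_x += t_delta_x
            else:
                j += step_j
                t_max_y += t_delta_y

    # Cells crossed by a diagonal ray starting inside cell (0, 0):
    print(list(grid_traverse(0.5, 0.5, 1.0, 0.7, cell_size=1.0, max_steps=5)))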

References

[1] http://www.spatial-effects.com/SE-papers1.html
[2] Kevin Sahr, Denis White, and A. Jon Kimerling. 2003. Geodesic Discrete Global Grid Systems. Cartography and Geographic Information Science, 30(2), 121-134. (http://www.sou.edu/cs/sahr/dgg/pubs/gdggs03.pdf)
[3] Criteria and Measures for the Comparison of Global Geocoding Systems, Keith C. Clarke, University of California (http://www.ncgia.ucsb.edu/globalgrids-book/comparison)
[4] Rigaux, P., Scholl, M., and Voisard, A. 2002. Spatial Databases - with application to GIS. Morgan Kaufmann, San Francisco, 410pp.

• Indexing the Sky - Clive Page (http://www.star.le.ac.uk/~cgp/ag/skyindex.html) - Grid indices for astronomy

External links

• Grid Traversal implementation details and applet demonstration (http://www.gamerendering.com/2009/07/20/grid-traversal/)
• PYXIS Discrete Global Grid System using the ISEA3H Grid (http://www.pyxisinnovation.com/pyxwiki/index.php?title=How_PYXIS_Works)


Octree

An octree is a tree data structure in which each internal node has exactly eight children. Octrees are most often used to partition a three-dimensional space by recursively subdividing it into eight octants. Octrees are the three-dimensional analog of quadtrees. The name is formed from oct + tree, but note that it is normally written "octree" with only one "t". Octrees are often used in 3D graphics and 3D game engines.

Left: Recursive subdivision of a cube into octants. Right: The corresponding octree.

Octrees for spatial representation

Each node in an octree subdivides the space it represents into eight octants. In a point region (PR) octree, the node stores an explicit 3-dimensional point, which is the "center" of the subdivision for that node; the point defines one of the corners for each of the eight children. In an MX octree, the subdivision point is implicitly the center of the space the node represents. The root node of a PR octree can represent infinite space; the root node of an MX octree must represent a finite bounded space so that the implicit centers are well-defined. Note that octrees are not the same as k-d trees: k-d trees split along a dimension while octrees split around a point, and k-d trees are always binary, which is not the case for octrees. The nodes can be traversed with a depth-first search so that only the required surfaces are visited.
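A minimal sketch of an octree over 3-D points, assuming a bounded region, implicit split points at cube centers, and capacity-one leaves (all names are illustrative; a PR octree would instead use the stored point as the split point):

    class Octree:
        """Sketch of a region octree storing 3-D points (capacity-1 leaves)."""

        def __init__(self, center, half_size):
            self.center = center        # (cx, cy, cz): implicit split point
            self.half_size = half_size  # half the side length of this cube
            self.point = None
            self.children = None        # None until the node is subdivided

        def _octant(self, p):
            # Index 0..7 from the three "which side of the center" bits.
            cx, cy, cz = self.center
            return (p[0] >= cx) << 2 | (p[1] >= cy) << 1 | (p[2] >= cz)

        def insert(self, p):
            if self.children is None:
                if self.point is None:          # empty leaf: store here
                    self.point = p
                    return
                # Occupied leaf: subdivide into eight octants, push the
                # old point down, then fall through to insert p below.
                h = self.half_size / 2
                cx, cy, cz = self.center
                self.children = [
                    Octree((cx + (h if k & 4 else -h),
                            cy + (h if k & 2 else -h),
                            cz + (h if k & 1 else -h)), h)
                    for k in range(8)
                ]
                old, self.point = self.point, None
                self.children[self._octant(old)].insert(old)
            self.children[self._octant(p)].insert(p)

    tree = Octree(center=(0.0, 0.0, 0.0), half_size=10.0)
    for pt in [(1, 2, 3), (-4, 5, -6), (1.5, 2.5, 3.5)]:
        tree.insert(pt)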

History

The use of octrees for 3D computer graphics was pioneered by Donald Meagher at Rensselaer Polytechnic Institute, described in a 1980 report "Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer",[1] for which he holds a 1995 patent (with a 1984 priority date), "High-speed image generation of complex solid objects using octree encoding".[2]

Common uses of octrees

• 3D computer graphics
• Spatial indexing
• Nearest neighbor search
• Efficient collision detection in three dimensions
• View frustum culling
• Fast Multipole Method
• Unstructured grid
• Finite element analysis
• Sparse voxel octree
• State estimation[3]


Application to color quantization

The octree color quantization algorithm, invented by Gervautz and Purgathofer in 1988, encodes image color data as an octree up to nine levels deep. Octrees are used because 2³ = 8 and there are three color components in the RGB system. The node index to branch out from at the top level is determined by a formula that uses the most significant bits of the red, green, and blue color components, e.g. 4r + 2g + b. The next lower level uses the next bit significance, and so on. Less significant bits are sometimes ignored to reduce the tree size.

The algorithm is highly memory efficient because the tree's size can be limited. The bottom level of the octree consists of leaf nodes that accrue color data not represented in the tree; these nodes initially contain single bits. If much more than the desired number of palette colors are entered into the octree, its size can be continually reduced by seeking out a bottom-level node and averaging its bit data up into a leaf node, pruning part of the tree. Once sampling is complete, exploring all routes in the tree down to the leaf nodes, taking note of the bits along the way, will yield approximately the required number of colors.
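The branching formula 4r + 2g + b can be written directly; a small illustrative sketch of the child-index computation for an 8-bit-per-channel color (the function name is hypothetical):

    def octree_child_index(r, g, b, level):
        """Child index (0..7) at a given octree level for an 8-bit RGB color.

        Level 0 uses the most significant bit of each component; deeper
        levels use successively less significant bits, forming 4r + 2g + b
        from single bits.
        """
        shift = 7 - level               # bit position for this level
        return (((r >> shift) & 1) << 2 |
                ((g >> shift) & 1) << 1 |
                ((b >> shift) & 1))

    # The color (200, 100, 50) descends through these children:
    print([octree_child_index(200, 100, 50, lvl) for lvl in range(8)])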

References

[3] Henning Eberhardt, Vesa Klumpp, Uwe D. Hanebeck, Density Trees for Efficient Nonlinear State Estimation, Proceedings of the 13th International Conference on Information Fusion, Edinburgh, United Kingdom, July 2010. (http://isas.uka.de/Publikationen/Fusion10_EberhardtKlumpp.pdf)

External links

• Octree Quantization in Microsoft Systems Journal (http://www.microsoft.com/msj/archive/S3F1.aspx)
• Color Quantization using Octrees in Dr. Dobb's (http://www.ddj.com/184409805)
• Color Quantization using Octrees in Dr. Dobb's Source Code (ftp://ftp.drdobbs.com/sourcecode/ddj/1996/9601.zip)
• Octree Color Quantization Overview (http://web.cs.wpi.edu/~matt/courses/cs563/talks/color_quant/CQoctree.html)
• Parallel implementation of octtree generation algorithm, P. Sojan Lal, A. Unnikrishnan, K. Poulose Jacob, ICIP 1997, IEEE Digital Library (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=727419)
• Generation of Octrees from Raster Scan with Reduced Information Loss, P. Sojan Lal, A. Unnikrishnan, K. Poulose Jacob, IASTED International Conference VIIP 2001 (http://dblp.uni-trier.de/db/conf/viip/viip2001.html#LalUJ01) (http://www.actapress.com/catalogue2009/proc_series13.html#viip2001)
• C++ implementation (GPL license) (http://nomis80.org/code/octree.html)
• Parallel Octrees for Finite Element Applications (http://sc07.supercomputing.org/schedule/pdf/pap117.pdf)
• Cube 2: Sauerbraten - a game written in the octree-heavy Cube 2 engine (http://www.sauerbraten.org/)
• Ogre - a 3D object-oriented graphics rendering engine with an octree scene manager implementation (MIT license) (http://www.ogre3d.org)
• Dendro: parallel multigrid for octree meshes (MPI/C++ implementation) (http://www.cc.gatech.edu/csela/dendro)
• Video: Use of an octree in state estimation (http://www.youtube.com/watch?v=Jw4VAgcWruY)


Global Illumination

Global illumination

Rendering without global illumination. Areas that lie outside of the ceiling lamp's direct light lack definition. For example, the lamp's housing appears completely uniform. Without the ambient light added into the render, it would appear uniformly black.

Rendering with global illumination. Light is reflected by surfaces, and colored light transfers from one surface to another. Notice how color from the red wall and green wall (not visible) reflects onto other surfaces in the scene. Also notable is the caustic projected onto the red wall from light passing through the glass sphere.

Global illumination is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not (indirect illumination). Theoretically reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another object (as opposed to an object being affected only by a direct light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination.

Global illumination Images rendered using global illumination algorithms often appear more photorealistic than images rendered using only direct illumination algorithms. However, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry, i.e., radiosity. That stored data can then be used to generate images from different viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations repeatedly. Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient occlusion, photon mapping, and image based lighting are examples of algorithms used in global illumination, some of which may be used together to yield results that are not fast, but accurate. These algorithms model diffuse inter-reflection which is a very important part of global illumination; however most of these (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene. The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design. In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power.

Procedure

Increasingly, specialized algorithms are used in 3D programs that can effectively simulate global illumination. These algorithms are numerical approximations to the rendering equation. Well-known algorithms for computing global illumination include path tracing, photon mapping and radiosity. The following approaches can be distinguished here:

• Inversion: not applied in practice
• Expansion: bi-directional approaches such as photon mapping + distributed ray tracing, bi-directional path tracing, and Metropolis light transport
• Iteration: radiosity

In light path notation, global illumination corresponds to paths of the type L(D|S)*E.


Image-based lighting

Another way to simulate real global illumination is the use of high-dynamic-range images (HDRIs), also known as environment maps, which encircle the scene and illuminate it. This process is known as image-based lighting.

List of methods

• Ray tracing: Several enhanced variants exist for solving problems related to sampling, aliasing, and soft shadows: distributed ray tracing, cone tracing, beam tracing.
• Path tracing: Unbiased; variant: bi-directional path tracing.
• Photon mapping: Enhanced variants: Progressive Photon Mapping, Stochastic Progressive Photon Mapping (consistent).[1]
• Lightcuts: Enhanced variants: Multidimensional Lightcuts, Bidirectional Lightcuts.
• Point Based Global Illumination: Extensively used in movie animations.[2][3]
• Radiosity: Finite element method, very good for precomputations.
• Metropolis light transport: Builds upon bi-directional path tracing; unbiased.

References

[1] http://www.luxrender.net/wiki/SPPM
[2] http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/paper.pdf
[3] http://www.karstendaemen.com/thesis/files/intro_pbgi.pdf

External links

• SSRT (http://www.nirenstein.com/e107/page.php?11) – C++ source code for a Monte Carlo path tracer (supporting GI), written with ease of understanding in mind.
• Video demonstrating global illumination and the ambient color effect (http://www.archive.org/details/MarcC_AoI-Global_Illumination)
• Real-time GI demos (http://realtimeradiosity.com/demos) – survey of practical real-time GI techniques as a list of executable demos
• kuleuven (http://www.cs.kuleuven.be/~phil/GI/) – this page contains the Global Illumination Compendium, an effort to bring together most of the useful formulas and equations for global illumination algorithms in computer graphics.
• GI Tutorial (http://www.youtube.com/watch?v=K5a-FqHz3o0) – video tutorial on faking global illumination within 3D Studio Max by Jason Donati


Rendering equation

In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance under a geometric optics approximation. It was simultaneously introduced into computer graphics by David Immel et al.[1] and James Kajiya[2] in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation.

The rendering equation describes the total amount of light emitted from a point x along a particular viewing direction, given a function for incoming light and a BRDF.

The physical basis for the rendering equation is the law of conservation of energy. Assuming that L denotes radiance, we have that at each particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light itself is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and cosine of the incident angle.

Equation form

The rendering equation may be written in the form

$$L_o(\mathbf{x}, \omega_o, \lambda, t) = L_e(\mathbf{x}, \omega_o, \lambda, t) + \int_\Omega f_r(\mathbf{x}, \omega_i, \omega_o, \lambda, t)\, L_i(\mathbf{x}, \omega_i, \lambda, t)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i$$

where
• $\lambda$ is a particular wavelength of light
• $t$ is time
• $\mathbf{x}$ is the location in space
• $\omega_o$ is the direction of the outgoing light
• $\omega_i$ is the negative direction of the incoming light
• $L_o(\mathbf{x}, \omega_o, \lambda, t)$ is the total spectral radiance of wavelength $\lambda$ directed outward along direction $\omega_o$ at time $t$, from a particular position $\mathbf{x}$
• $L_e(\mathbf{x}, \omega_o, \lambda, t)$ is emitted spectral radiance
• $\Omega$ is the unit hemisphere centered around the surface normal $\mathbf{n}$, containing all possible values for $\omega_i$
• $\int_\Omega \dots \mathrm{d}\omega_i$ is an integral over $\Omega$
• $f_r(\mathbf{x}, \omega_i, \omega_o, \lambda, t)$ is the bidirectional reflectance distribution function, the proportion of light reflected from $\omega_i$ to $\omega_o$ at position $\mathbf{x}$, time $t$, and at wavelength $\lambda$
• $L_i(\mathbf{x}, \omega_i, \lambda, t)$ is the spectral radiance of wavelength $\lambda$ coming inward toward $\mathbf{x}$ from direction $\omega_i$ at time $t$
• $(\omega_i \cdot \mathbf{n})$ is the weakening factor of inward irradiance due to incident angle, as the light flux is smeared across a surface whose area is larger than the projected area perpendicular to the ray

Two noteworthy features are: its linearity—it is composed only of multiplications and additions, and its spatial homogeneity—it is the same in all positions and orientations. These mean a wide range of factorings and rearrangements of the equation are possible.

Note this equation's spectral and time dependence — $L_o$ may be sampled at or integrated over sections of the visible spectrum to obtain, for example, a trichromatic color sample. A pixel value for a single frame in an animation may be obtained by fixing $t$; motion blur can be produced by averaging $L_o$ over some given time interval (by integrating over the time interval and dividing by the length of the interval).[3]

Applications

Solving the rendering equation for any given scene is the primary challenge in realistic rendering. One approach to solving the equation is based on finite element methods, leading to the radiosity algorithm. Another approach using Monte Carlo methods has led to many different algorithms including path tracing, photon mapping, and Metropolis light transport, among others.
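As a sketch of the Monte Carlo route, the reflected term of the equation can be estimated by averaging the integrand over randomly chosen incoming directions. The Python below assumes hypothetical scene callbacks (brdf, incoming_radiance) and uses uniform hemisphere sampling, whose probability density is 1/(2π):

    import math, random

    def sample_hemisphere(n):
        """Uniform random direction on the hemisphere around unit normal n."""
        while True:
            d = [random.gauss(0, 1) for _ in range(3)]
            length = math.sqrt(sum(c * c for c in d))
            d = [c / length for c in d]
            if sum(a * b for a, b in zip(d, n)) > 0:  # keep upper hemisphere
                return d

    def outgoing_radiance(x, w_o, n, emitted, brdf, incoming_radiance,
                          samples=256):
        """Monte Carlo estimate of L_o = L_e + hemisphere integral.

        With uniform sampling the pdf is 1/(2*pi), so the sample average
        is weighted by 2*pi (estimator: mean of integrand / pdf).
        """
        total = 0.0
        for _ in range(samples):
            w_i = sample_hemisphere(n)
            cos_theta = sum(a * b for a, b in zip(w_i, n))
            total += brdf(x, w_i, w_o) * incoming_radiance(x, w_i) * cos_theta
        return emitted + (2 * math.pi) * total / samples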

Limitations

Although the equation is very general, it does not capture every aspect of light reflection. Some missing aspects include the following:

• Transmission, which occurs when light is transmitted through the surface, for example when it hits a glass object or a water surface,
• Subsurface scattering, where the spatial locations for incoming and departing light are different. Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque — however, it is not necessary to account for this if transmission is included in the equation, since that will effectively include also light scattered under the surface,
• Polarization, where different light polarizations will sometimes have different reflection distributions, for example when light bounces at a water surface,
• Phosphorescence, which occurs when light or other electromagnetic radiation is absorbed at one moment in time and emitted at a later moment in time, usually with a longer wavelength (unless the absorbed electromagnetic radiation is very intense),
• Interference, where the wave properties of light are exhibited,
• Fluorescence, where the absorbed and emitted light have different wavelengths,
• Non-linear effects, where very intense light can raise an electron's energy level by more than the energy of a single photon (this can occur if the electron is hit by two photons at the same time), so that emission of light with a higher frequency than that of the incident light becomes possible, and
• Relativistic Doppler effect, where light that bounces off an object moving at very high speed has its wavelength changed: if the object moves toward the light, the photons are compressed, the wavelength becomes shorter, the light is blueshifted and the photon flux increases; if the object moves away, the light is redshifted and the photon flux decreases.

For scenes that are either not composed of simple surfaces in a vacuum or for which the travel time for light is an important factor, researchers have generalized the rendering equation to produce a volume rendering equation[4] suitable for volume rendering and a transient rendering equation[5] for use with data from a time-of-flight camera.

References

External links
• Lecture notes (http://graphics.stanford.edu/courses/cs348b-00/lectures/lecture12/) from Stanford University course CS 348B, Computer Graphics: Image Synthesis Techniques

Distributed ray tracing

Distributed ray tracing, also called distribution ray tracing and stochastic ray tracing, is a refinement of ray tracing that allows for the rendering of "soft" phenomena.

Conventional ray tracing uses single rays to sample many different domains. For example, when the color of an object is calculated, ray tracing might send a single ray to each light source in the scene. This leads to sharp shadows, since there is no way for a light source to be partially occluded (another way of saying this is that all lights are point sources and have zero area). Conventional ray tracing also typically spawns one reflection ray and one transmission ray per intersection. As a result, reflected and transmitted images are perfectly (and unrealistically) sharp.

Distributed ray tracing removes these restrictions by averaging multiple rays distributed over an interval. For example, soft shadows can be rendered by distributing shadow rays over the light source area. Blurry reflections and transmissions can be rendered by distributing reflection and transmission rays over a solid angle about the "true" reflection or transmission direction. Adding "soft" phenomena to ray-traced images in this way can improve realism immensely, since the sharp phenomena rendered by conventional ray tracing are almost never seen in reality.[citation needed]

More advanced effects are also possible using the same framework. For instance, depth of field can be achieved by distributing ray origins over the lens area. In an animated scene, motion blur can be simulated by distributing rays in time. Distributing rays in the spectrum allows for the rendering of dispersion effects, such as rainbows and prisms. Mathematically, in order to evaluate the rendering equation, one must evaluate several integrals. Conventional ray tracing estimates these integrals by sampling the value of the integrand at a single point in the domain, which is clearly a very bad approximation. Distributed ray tracing samples the integrand at many randomly chosen points and averages the results to obtain a better approximation. It is essentially an application of the Monte Carlo method to 3D computer graphics, and for this reason is also called stochastic ray tracing. The term distributed ray tracing also sometimes refers to the application of distributed computing techniques to ray tracing, but because of ambiguity this is more properly called parallel ray tracing (in reference to parallel computing).
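For the soft-shadow case specifically, a minimal sketch: shadow rays are distributed over a parallelogram-shaped area light and the binary visibility results are averaged. The occluded callback stands in for the scene's intersection test; all names are illustrative:

    import random

    def soft_shadow(point, light_corner, edge_u, edge_v, occluded, n_rays=64):
        """Fraction of an area light visible from `point` (0 = full shadow).

        The light is the parallelogram light_corner + s*edge_u + t*edge_v
        for s, t in [0, 1). `occluded(a, b)` should return True if the
        segment from a to b is blocked by scene geometry.
        """
        visible = 0
        for _ in range(n_rays):
            s, t = random.random(), random.random()
            # Random sample on the light's surface.
            sample = tuple(c + s * u + t * v
                           for c, u, v in zip(light_corner, edge_u, edge_v))
            if not occluded(point, sample):
                visible += 1
        return visible / n_rays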

External links

• Stochastic rasterization [1]

References

[1] http://research.nvidia.com/publication/real-time-stochastic-rasterization-conventional-gpu-architectures


Monte Carlo method


Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; that is, they run simulations many times over in order to estimate probabilities heuristically, much like actually playing and recording the results in a real casino situation: hence the name. They are often used in physical and mathematical problems and are most suited to be applied when it is impossible to obtain a closed-form expression or infeasible to apply a deterministic algorithm. Monte Carlo methods are mainly used in three distinct classes of problem: optimization, numerical integration, and generation of samples from a probability distribution. Monte Carlo methods are especially useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). They are used to model phenomena with significant uncertainty in inputs, such as the calculation of risk in business. They are widely used in mathematics, for example to evaluate multidimensional definite integrals with complicated boundary conditions. When Monte Carlo simulations have been applied in space exploration and oil exploration, their predictions of failures, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.[1]

The modern version of the Monte Carlo method was invented in the late 1940s by Stanislaw Ulam, while he was working on nuclear weapon projects at the Los Alamos National Laboratory. It was named, by Nicholas Metropolis, after the Monte Carlo Casino, where Ulam's uncle often gambled.[] Immediately after Ulam's breakthrough, John von Neumann understood its importance and programmed the ENIAC computer to carry out Monte Carlo calculations.


Introduction

Monte Carlo methods vary, but tend to follow a particular pattern:

1. Define a domain of possible inputs.
2. Generate inputs randomly from a probability distribution over the domain.
3. Perform a deterministic computation on the inputs.
4. Aggregate the results.

For example, consider a circle inscribed in a unit square. Given that the circle and the square have a ratio of areas that is π/4, the value of π can be approximated using a Monte Carlo method:[]

1. Draw a square on the ground, then inscribe a circle within it.
2. Uniformly scatter some objects of uniform size (grains of rice or sand) over the square.
3. Count the number of objects inside the circle and the total number of objects.
4. The ratio of the two counts is an estimate of the ratio of the two areas, which is π/4. Multiply the result by 4 to estimate π.

Monte Carlo method applied to approximating the value of π. After placing 30000 random points, the estimate for π is within 0.07% of the actual value. This happens with an approximate probability of 20%.

In this procedure the domain of inputs is the square that circumscribes our circle. We generate random inputs by scattering grains over the square, then perform a computation on each input (test whether it falls within the circle). Finally, we aggregate the results to obtain our final result, the approximation of π. Two points matter here: first, if grains are purposely dropped into only the center of the circle, they are not uniformly distributed, so the approximation is poor; second, there should be a large number of inputs, since the approximation is generally poor if only a few grains are randomly dropped into the whole square. On average, the approximation improves as more grains are dropped.
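The same procedure, with pseudo-random samples in place of grains, takes only a few lines; a minimal sketch (using the equivalent quarter-circle-in-unit-square formulation):

    import random

    def estimate_pi(n_points=30000):
        """Estimate pi by uniformly sampling the unit square [0, 1)^2."""
        inside = 0
        for _ in range(n_points):
            x, y = random.random(), random.random()
            # The point falls inside the quarter circle of radius 1 at the
            # origin; its area is pi/4 of the unit square's area.
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / n_points

    print(estimate_pi())  # typically within about 1% of pi at 30000 samples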

History

Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using a probabilistic analog (see Simulated annealing). An early variant of the Monte Carlo method can be seen in the Buffon's needle experiment, in which π can be estimated by dropping needles on a floor made of parallel strips of wood. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but did not publish anything on it.[] In 1946, physicists at Los Alamos Scientific Laboratory were investigating radiation shielding and the distance that neutrons would likely travel through various materials. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus, and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Stanislaw Ulam had the idea of using random experiments. He recounts his inspiration as follows: The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than "abstract thinking" might not be to lay it out say one hundred times and simply observe and count the number of successful plays. This was already possible


to envisage with the beginning of the new era of fast computers, and I immediately thought of problems of neutron diffusion and other questions of mathematical physics, and more generally how to change processes described by certain differential equations into an equivalent form interpretable as a succession of random operations. Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations. –Stanislaw Ulam[2] Being secret, the work of von Neumann and Ulam required a code name. Von Neumann chose the name Monte Carlo. The name refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money to gamble.[][3][4] Using lists of "truly random" random numbers was extremely slow, but von Neumann developed a way to calculate pseudorandom numbers, using the middle-square method. Though this method has been criticized as crude, von Neumann was aware of this: he justified it as being faster than any other method at his disposal, and also noted that when it went awry it did so obviously, unlike methods that could be subtly incorrect. Monte Carlo methods were central to the simulations required for the Manhattan Project, though severely limited by the computational tools at the time. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers that had been previously used for statistical sampling.

Definitions

There is no consensus on how Monte Carlo should be defined. For example, Ripley[] defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky[] distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to determine the properties of some phenomenon (or behavior). Examples:

• Simulation: Drawing one pseudo-random uniform variable from the interval (0,1] can be used to simulate the tossing of a coin: if the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
• Monte Carlo method: The area of an irregular figure inscribed in a unit square can be determined by throwing darts at the square and computing the ratio of hits within the irregular figure to the total number of darts thrown. This is a Monte Carlo method of determining area, but not a simulation.
• Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval (0,1], and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.

Kalos and Whitlock[] point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."


Monte Carlo and random numbers

Monte Carlo simulation methods do not always require truly random numbers to be useful — while for some applications, such as primality testing, unpredictability is vital.[5] Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense. What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common tests.

Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:[]
• the (pseudo-random) number generator has certain characteristics (e.g., a long "period" before the sequence repeats)
• the (pseudo-random) number generator produces values that pass tests for randomness
• there are enough samples to ensure accurate results
• the proper sampling technique is used
• the algorithm used is valid for what is being modeled
• it simulates the phenomenon in question.

Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution. Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.

Monte Carlo simulation versus "what if" scenarios

There are ways of using probabilities that are definitely not Monte Carlo simulations — for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.[6]

By contrast, Monte Carlo simulations sample probability distributions for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring.[7] For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then run again with Monte Carlo simulation and triangular probability distributions, shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".

Applications

Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with a large number of coupled degrees of freedom. Areas of application include:

Physical sciences

Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms. In statistical physics Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems.[8] Quantum Monte Carlo methods solve the many-body problem for quantum systems. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both the evolution of galaxies[9] and the transmission of microwave radiation through a rough planetary surface.[10] Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.

Engineering

Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example:

• In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
• In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
• In wind energy yield analysis, the predicted energy output of a wind farm during its lifetime is calculated giving different levels of uncertainty (P90, P50, etc.).
• Impacts of pollution are simulated[] and diesel compared with petrol.[]
• In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that form the heart of the SLAM (simultaneous localization and mapping) algorithm.
• In aerospace engineering, Monte Carlo methods are used to ensure that multiple parts of an assembly will fit into an engine component.

Computational biology

Monte Carlo methods are used in computational biology, such as for Bayesian inference in phylogeny. Biological systems such as proteins,[11] membranes,[12] and images of cancer[13] are being studied by means of computer simulations. The systems can be studied in coarse-grained or ab initio frameworks depending on the desired accuracy. Computer simulations allow us to monitor the local environment of a particular molecule, for example to see whether some chemical reaction is happening. We can also conduct thought experiments when the physical experiments are not feasible, for instance breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields.

Computer graphics

Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.

Applied statistics

In applied statistics, Monte Carlo methods are generally used for two purposes:

1. To compare competing statistics for small samples under realistic data conditions. Although Type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.[14]

2. To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.

Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice – or more frequently – for the efficiency of not having to track which permutations have already been selected).

Artificial intelligence for games

Monte Carlo methods have been developed into a technique called Monte Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and a large number of random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.[15]

The Monte Carlo tree search (MCTS) method has four steps:[16]

1. Starting at the root node of the tree, select optimal child nodes until a leaf node is reached.
2. Expand the leaf node and choose one of its children.
3. Play a simulated game starting with that node.
4. Use the results of that simulated game to update the node and its ancestors.

The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move. Monte Carlo Tree Search has been used successfully to play games such as Go,[17] Tantrix,[18] Battleship,[19] Havannah,[20] and Arimaa.[21]
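The four steps translate into a compact loop; the sketch below assumes an abstract, single-perspective game interface (moves, play, terminal, and score are stand-ins supplied by the caller, and a real two-player implementation would alternate the payoff's point of view during backpropagation):

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children = []
            self.visits, self.wins = 0, 0.0

    def mcts(root_state, moves, play, terminal, score, iterations=1000, c=1.4):
        """Sketch of the four MCTS steps; root_state must not be terminal.

        moves(s) -> legal moves, play(s, m) -> next state,
        terminal(s) -> bool, score(s) -> payoff in [0, 1] for the player.
        """
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            # 1. Selection: walk down via the UCT rule until a leaf.
            while node.children:
                node = max(node.children, key=lambda ch:
                           ch.wins / ch.visits
                           + c * math.sqrt(math.log(node.visits) / ch.visits)
                           if ch.visits else float("inf"))
            # 2. Expansion: create children for the leaf's legal moves.
            if not terminal(node.state):
                node.children = [Node(play(node.state, m), node)
                                 for m in moves(node.state)]
                node = random.choice(node.children)
            # 3. Simulation: play random moves to the end of the game.
            state = node.state
            while not terminal(state):
                state = play(state, random.choice(moves(state)))
            # 4. Backpropagation: update the node and its ancestors.
            payoff = score(state)
            while node is not None:
                node.visits += 1
                node.wins += payoff
                node = node.parent
        # Recommend the most-visited move from the root.
        return max(root.children, key=lambda ch: ch.visits).state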

Design and visuals

Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.[22]

Finance and business

Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate investments in projects at a business unit or corporate level, or to evaluate financial derivatives. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project.


Telecommunications

When planning a wireless network, the design must be proved to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.

Use in mathematics

In general, Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.

Integration

Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then $10^{100}$ points are needed for 100 dimensions — far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to a series of nested one-dimensional integrals.[] 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.

Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays $1/\sqrt{N}$ convergence — i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.[]

Monte Carlo integration works by comparing random points with the value of the function. Errors reduce by a factor of $1/\sqrt{N}$.

A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling[23][24] or the VEGAS algorithm.

A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly. Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis-Hastings algorithm, Gibbs sampling and the Wang and Landau

Monte Carlo method algorithm.
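To make the convergence behaviour above concrete, here is a minimal Monte Carlo integration sketch; the integrand, dimension and sample count are illustrative choices:

#include <cstdio>
#include <random>

int main() {
    const int d = 100;                 // dimension; hopeless for a grid method
    const long N = 100000;             // number of random sample points
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Example integrand: f(x) = sum of coordinates over [0,1]^d; exact integral is d/2.
    double sum = 0.0;
    for (long i = 0; i < N; ++i) {
        double fx = 0.0;
        for (int k = 0; k < d; ++k) fx += u(rng);
        sum += fx;
    }
    double estimate = sum / N;         // volume of [0,1]^d is 1, so the mean is the integral
    std::printf("estimate %.4f, exact %.1f\n", estimate, d / 2.0);
    return 0;
}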

Simulation and optimization

Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has a large number of dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize the distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization.

The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. However, suppose that instead of wanting to minimize the total distance traveled to visit each desired destination, we wanted to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine our optimal path we would want to use simulation-optimization: first understand the range of potential times it could take to go from one point to another (represented in this case by a probability distribution rather than a specific distance) and then optimize our travel decisions to identify the best path to follow, taking that uncertainty into account. A sketch of this idea follows below.

Simulating industry problems using spreadsheet software is a powerful application of Monte Carlo simulation. With a basic spreadsheet tool and some formulas built into the sheet, industry can often attain good solutions without having to procure expensive simulation software.
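A minimal sketch of the travel-time comparison described above; the two candidate routes and their per-leg duration distributions are invented for illustration:

#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Simulate one trip along a route whose legs have uncertain durations,
// here modeled (illustratively) as normal distributions (mean, stddev).
double SimulateRouteTime(const std::vector<std::pair<double, double>>& legs,
                         std::mt19937& rng) {
    double total = 0.0;
    for (auto [mean, stddev] : legs) {
        std::normal_distribution<double> t(mean, stddev);
        total += std::max(0.0, t(rng));
    }
    return total;
}

int main() {
    std::mt19937 rng(1);
    // Shorter route with volatile legs vs. longer route with reliable legs.
    std::vector<std::pair<double, double>> shortRoute = {{30, 15}, {20, 10}};
    std::vector<std::pair<double, double>> longRoute  = {{25, 2}, {25, 2}, {10, 1}};
    const int N = 100000;
    double a = 0, b = 0;
    for (int i = 0; i < N; ++i) {
        a += SimulateRouteTime(shortRoute, rng);
        b += SimulateRouteTime(longRoute, rng);
    }
    std::printf("short route: %.1f min avg, long route: %.1f min avg\n", a / N, b / N);
    return 0;
}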

Inverse problems

Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).

When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the observer. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.[25][26]
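A minimal sketch of the Metropolis sampler mentioned above, drawing models from a posterior density known only up to a constant; the 1-D model space and the log-density callback are illustrative simplifications of a real inverse problem:

#include <cmath>
#include <random>
#include <vector>

// Draw n samples from the density proportional to exp(logP(m)) using the
// Metropolis rule with a symmetric Gaussian proposal of width `step`.
std::vector<double> MetropolisSample(double (*logP)(double), double m0,
                                     double step, int n, std::mt19937& rng) {
    std::normal_distribution<double> jump(0.0, step);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<double> samples;
    double m = m0, lp = logP(m0);
    for (int i = 0; i < n; ++i) {
        double mNew = m + jump(rng);             // propose a nearby model
        double lpNew = logP(mNew);
        if (std::log(u(rng)) < lpNew - lp) {     // accept with prob min(1, p'/p)
            m = mNew;
            lp = lpNew;
        }
        samples.push_back(m);                    // a rejected move repeats the old model
    }
    return samples;                              // histogram these to see the posterior
}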

Computational mathematics

Monte Carlo methods are useful in many areas of computational mathematics, where a "lucky choice" can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n that is not prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime but x says that it might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one kind of answer with a guarantee (n is composite, and x proves it so) and another kind without a guarantee, but with a bound on how often the unguaranteed answer is wrong—in this case, at most 25% of the time. See also Las Vegas algorithm for a related, but different, idea.
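A sketch of the test described above, in its standard Miller-Rabin formulation of Rabin's algorithm; the helper names are our own, and n is assumed below 2^32 so products fit in 64 bits:

#include <cstdint>
#include <random>

// Modular exponentiation; assumes n < 2^32 so intermediate products fit in 64 bits.
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t n) {
    uint64_t result = 1;
    base %= n;
    while (exp > 0) {
        if (exp & 1) result = result * base % n;
        base = base * base % n;
        exp >>= 1;
    }
    return result;
}

// One round with witness candidate x. Returns false if x proves n composite.
// For a composite n, a random x does so with probability at least 3/4.
bool passesRound(uint64_t n, uint64_t x) {
    uint64_t d = n - 1;
    int r = 0;
    while ((d & 1) == 0) { d >>= 1; ++r; }      // write n - 1 = d * 2^r with d odd
    uint64_t y = powmod(x, d, n);
    if (y == 1 || y == n - 1) return true;
    for (int i = 1; i < r; ++i) {
        y = y * y % n;
        if (y == n - 1) return true;
    }
    return false;                               // x is a witness: n is composite
}

// k rounds: a composite n slips through all of them with probability <= (1/4)^k.
bool isProbablyPrime(uint64_t n, int k) {
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return false;
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(2, n - 2);
    for (int i = 0; i < k; ++i)
        if (!passesRound(n, pick(rng))) return false;   // definitely composite
    return true;                                        // "n is probably prime"
}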

Notes
[11]
[15] http://sander.landofsand.com/publications/Monte-Carlo_Tree_Search_-_A_New_Framework_for_Game_AI.pdf
[16] http://mcts.ai/about/index.html
[17] http://link.springer.com/chapter/10.1007/978-3-540-87608-3_6
[18] http://www.tantrix.com:4321/Tantrix/TRobot/MCTS%20Final%20Report.pdf
[19] http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Publications_files/pomcp.pdf
[20] http://link.springer.com/chapter/10.1007/978-3-642-17928-0_10
[21] http://www.arimaa.com/arimaa/papers/ThomasJakl/bc-thesis.pdf

References
• Anderson, H.L. (1986). "Metropolis, Monte Carlo and the MANIAC" (http://library.lanl.gov/cgi-bin/getfile?00326886.pdf). Los Alamos Science 14: 96–108.
• Baeurle, Stephan A. (2009). "Multiscale modeling of polymer materials using field-theoretic methodologies: A survey about recent developments". Journal of Mathematical Chemistry 46 (2): 363–426. doi: 10.1007/s10910-008-9467-3 (http://dx.doi.org/10.1007/s10910-008-9467-3).
• Berg, Bernd A. (2004). Markov Chain Monte Carlo Simulations and Their Statistical Analysis (With Web-Based Fortran Code). Hackensack, NJ: World Scientific. ISBN 981-238-935-0.
• Binder, Kurt (1995). The Monte Carlo Method in Condensed Matter Physics. New York: Springer. ISBN 0-387-54369-4.
• Caflisch, R. E. (1998). Monte Carlo and quasi-Monte Carlo methods. Acta Numerica 7. Cambridge University Press. pp. 1–49.
• Davenport, J. H. "Primality testing revisited". Proceedings ISSAC '92: Papers from the international symposium on Symbolic and algebraic computation: 123–129. doi: 10.1145/143242.143290 (http://dx.doi.org/10.1145/143242.143290). ISBN 0-89791-489-9.
• Doucet, Arnaud; Freitas, Nando de; Gordon, Neil (2001). Sequential Monte Carlo methods in practice. New York: Springer. ISBN 0-387-95146-6.
• Eckhardt, Roger (1987). "Stan Ulam, John von Neumann, and the Monte Carlo method" (http://www.lanl.gov/history/admin/files/Stan_Ulam_John_von_Neumann_and_the_Monte_Carlo_Method.pdf). Los Alamos Science, Special Issue (15): 131–137.
• Fishman, G. S. (1995). Monte Carlo: Concepts, Algorithms, and Applications. New York: Springer. ISBN 0-387-94527-X.
• C. Forastero and L. Zamora and D. Guirado and A. Lallena (2010). "A Monte Carlo tool to simulate breast cancer screening programmes". Phys. Med. Biol. 55 (17): 5213. Bibcode: 2010PMB....55.5213F (http://adsabs.harvard.edu/abs/2010PMB....55.5213F). doi: 10.1088/0031-9155/55/17/021 (http://dx.doi.org/10.1088/0031-9155/55/17/021).
• Golden, Leslie M. (1979). "The Effect of Surface Roughness on the Transmission of Microwave Radiation Through a Planetary Surface". Icarus 38 (3): 451. Bibcode: 1979Icar...38..451G (http://adsabs.harvard.edu/abs/1979Icar...38..451G). doi: 10.1016/0019-1035(79)90199-4 (http://dx.doi.org/10.1016/0019-1035(79)90199-4).
• Gould, Harvey; Tobochnik, Jan (1988). An Introduction to Computer Simulation Methods, Part 2, Applications to Physical Systems. Reading: Addison-Wesley. ISBN 0-201-16504-X.
• Grinstead, Charles; Snell, J. Laurie (1997). Introduction to Probability. American Mathematical Society. pp. 10–11.
• Hammersley, J. M.; Handscomb, D. C. (1975). Monte Carlo Methods. London: Methuen. ISBN 0-416-52340-4.


• Hartmann, A.K. (2009). Practical Guide to Computer Simulations (http://www.worldscibooks.com/physics/6988.html). World Scientific. ISBN 978-981-283-415-7.
• Hubbard, Douglas (2007). How to Measure Anything: Finding the Value of Intangibles in Business. John Wiley & Sons. p. 46.
• Hubbard, Douglas (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. John Wiley & Sons.
• Kahneman, D.; Tversky, A. (1982). Judgement under Uncertainty: Heuristics and Biases. Cambridge University Press.
• Kalos, Malvin H.; Whitlock, Paula A. (2008). Monte Carlo Methods. Wiley-VCH. ISBN 978-3-527-40760-6.
• Kroese, D. P.; Taimre, T.; Botev, Z.I. (2011). Handbook of Monte Carlo Methods (http://www.montecarlohandbook.org). New York: John Wiley & Sons. p. 772. ISBN 0-470-17793-4.
• MacGillivray, H. T.; Dodd, R. J. (1982). "Monte-Carlo simulations of galaxy systems" (http://www.springerlink.com/content/rp3g1q05j176r108/fulltext.pdf). Astrophysics and Space Science (Springer Netherlands) 86 (2).
• MacKeown, P. Kevin (1997). Stochastic Simulation in Physics. New York: Springer. ISBN 981-3083-26-3.
• Metropolis, N. (1987). "The beginning of the Monte Carlo method" (http://library.lanl.gov/la-pubs/00326866.pdf). Los Alamos Science (1987 Special Issue dedicated to Stanislaw Ulam): 125–130.
• Metropolis, Nicholas; Rosenbluth, Arianna W.; Rosenbluth, Marshall N.; Teller, Augusta H.; Teller, Edward (1953). "Equation of State Calculations by Fast Computing Machines". Journal of Chemical Physics 21 (6): 1087. Bibcode: 1953JChPh..21.1087M (http://adsabs.harvard.edu/abs/1953JChPh..21.1087M). doi: 10.1063/1.1699114 (http://dx.doi.org/10.1063/1.1699114).
• Metropolis, N.; Ulam, S. (1949). "The Monte Carlo Method". Journal of the American Statistical Association (American Statistical Association) 44 (247): 335–341. doi: 10.2307/2280232 (http://dx.doi.org/10.2307/2280232). JSTOR 2280232 (http://www.jstor.org/stable/2280232). PMID 18139350 (http://www.ncbi.nlm.nih.gov/pubmed/18139350).
• M. Milik and J. Skolnick (Jan 1993). "Insertion of peptide chains into lipid membranes: an off-lattice Monte Carlo dynamics model". Proteins 15 (1): 10–25. doi: 10.1002/prot.340150104 (http://dx.doi.org/10.1002/prot.340150104). PMID 8451235 (http://www.ncbi.nlm.nih.gov/pubmed/8451235).
• Mosegaard, Klaus; Tarantola, Albert (1995). "Monte Carlo sampling of solutions to inverse problems". J. Geophys. Res. 100 (B7): 12431–12447. Bibcode: 1995JGR...10012431M (http://adsabs.harvard.edu/abs/1995JGR...10012431M). doi: 10.1029/94JB03097 (http://dx.doi.org/10.1029/94JB03097).
• P. Ojeda and M. Garcia and A. Londono and N.Y. Chen (Feb 2009). "Monte Carlo Simulations of Proteins in Cages: Influence of Confinement on the Stability of Intermediate States". Biophys. Jour. (Biophysical Society) 96 (3): 1076–1082. Bibcode: 2009BpJ....96.1076O (http://adsabs.harvard.edu/abs/2009BpJ....96.1076O). doi: 10.1529/biophysj.107.125369 (http://dx.doi.org/10.1529/biophysj.107.125369).
• Int Panis L; De Nocker L; De Vlieger I; Torfs R (2001). "Trends and uncertainty in air pollution impacts and external costs of Belgian passenger car traffic". International Journal of Vehicle Design 27 (1–4): 183–194. doi: 10.1504/IJVD.2001.001963 (http://dx.doi.org/10.1504/IJVD.2001.001963).
• Int Panis L; Rabl A; De Nocker L; Torfs R (2002). "Diesel or Petrol? An environmental comparison hampered by uncertainty". In P. Sturm. Mitteilungen Institut für Verbrennungskraftmaschinen und Thermodynamik (Technische Universität Graz, Austria) Heft 81 Vol 1: 48–54.
• Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (1996) [1986]. Numerical Recipes in Fortran 77: The Art of Scientific Computing. Fortran Numerical Recipes 1 (Second ed.). Cambridge University Press. ISBN 0-521-43064-X.
• Ripley, B. D. (1987). Stochastic Simulation. Wiley & Sons.
• Robert, C. P.; Casella, G. (2004). Monte Carlo Statistical Methods (2nd ed.). New York: Springer. ISBN 0-387-21239-6.


• Rubinstein, R. Y.; Kroese, D. P. (2007). Simulation and the Monte Carlo Method (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-470-17793-8.
• Savvides, Savvakis C. (1994). "Risk Analysis in Investment Appraisal". Project Appraisal Journal 9 (1). doi: 10.2139/ssrn.265905 (http://dx.doi.org/10.2139/ssrn.265905).
• Sawilowsky, Shlomo S.; Fahoome, Gail C. (2003). Statistics via Monte Carlo Simulation with Fortran. Rochester Hills, MI: JMASM. ISBN 0-9740236-0-4.
• Sawilowsky, Shlomo S. (2003). "You think you've got trivials?" (http://education.wayne.edu/jmasm/sawilowsky_effect_size_debate.pdf). Journal of Modern Applied Statistical Methods 2 (1): 218–225.
• Silver, David; Veness, Joel (2010). "Monte-Carlo Planning in Large POMDPs" (http://books.nips.cc/papers/files/nips23/NIPS2010_0740.pdf). In Lafferty, J.; Williams, C. K. I.; Shawe-Taylor, J.; Zemel, R. S.; Culotta, A. Advances in Neural Information Processing Systems 23. Neural Information Processing Systems Foundation.
• Szirmay-Kalos, László (2008). Monte Carlo Methods in Global Illumination - Photo-realistic Rendering with Randomization. VDM Verlag Dr. Mueller e.K. ISBN 978-3-8364-7919-6.
• Tarantola, Albert (2005). Inverse Problem Theory (http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html). Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0-89871-572-5.
• Vose, David (2008). Risk Analysis, A Quantitative Guide (Third ed.). John Wiley & Sons.

External links
• Overview and reference list (http://mathworld.wolfram.com/MonteCarloMethod.html), MathWorld
• Café math: Monte Carlo Integration (http://www.cafemath.fr/mathblog/article.php?page=MonteCarlo.php), a blog article describing Monte Carlo integration (principle, hypothesis, confidence interval)
• Feynman-Kac models and particle Monte Carlo algorithms (http://www.math.u-bordeaux1.fr/~delmoral/simulinks.html), a website on the applications of particle Monte Carlo methods in signal processing, rare event simulation, molecular dynamics, financial mathematics, optimal control, computational physics, and biology
• Introduction to Monte Carlo Methods (http://www.phy.ornl.gov/csep/CSEP/MC/MC.html), Computational Science Education Project
• The Basics of Monte Carlo Simulations (http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html), University of Nebraska-Lincoln
• Introduction to Monte Carlo simulation (http://office.microsoft.com/en-us/excel-help/introduction-to-monte-carlo-simulation-HA010282777.aspx) (for Microsoft Excel), Wayne L. Winston
• Monte Carlo Simulation for MATLAB and Simulink (http://www.mathworks.com/discovery/monte-carlo-simulation.html)
• Monte Carlo Methods – Overview and Concept (http://www.brighton-webs.co.uk/montecarlo/concept.htm), brighton-webs.co.uk
• Monte Carlo techniques applied in physics (http://personal-pages.ps.ic.ac.uk/~achremos/Applet1-page.htm)
• Monte Carlo Method Example (http://waqqasfarooq.com/waqqasfarooq/index.php?option=com_content&view=article&id=47:monte-carlo&catid=34:statistics&Itemid=53), a step-by-step guide to creating a Monte Carlo Excel spreadsheet
• Approximate And Double Check Probability Problems Using Monte Carlo method (http://orcik.net/programming/approximate-and-double-check-probability-problems-using-monte-carlo-method/) at Orcik Dot Net


Unbiased rendering

In computer graphics, unbiased rendering refers to rendering techniques that do not introduce any systematic error, or bias, into the radiance approximation. Because of this, they are often used to generate the reference image to which other rendering techniques are compared. Mathematically speaking, the expected value of the unbiased estimator is the correct value for any number of samples. Error in an unbiased rendering is due to variance, which manifests itself as high-frequency noise in the resulting image. Variance is reduced by 1/N and standard deviation by 1/√N for N samples, meaning that four times as many samples are needed to halve the error. This makes unbiased rendering techniques less attractive for real-time or interactive-rate applications. Conversely, an image produced by an unbiased renderer that appears smooth and noiseless is probabilistically correct.

[Image: An example of an unbiased render using Indigo Renderer.]

A biased rendering method is not necessarily wrong, and it can still converge to the correct answer if the estimator is consistent. It does, however, introduce a certain bias error, usually in the form of a blur, in an effort to reduce the variance (high-frequency noise). It is important to note that an unbiased technique may not consider all possible paths. Path tracing cannot handle caustics generated from a point light source, as it is impossible to randomly generate the path that reflects directly into the point. Progressive photon mapping (PPM), a biased rendering technique, can handle caustics quite well. PPM is also provably consistent, meaning that as the number of samples goes to infinity, the bias error goes to zero and the probability that the estimate is correct reaches one.

Unbiased rendering methods include:
• Path tracing
• Light tracing
• Bidirectional path tracing
• Metropolis light transport (and derived Energy Redistribution Path Tracing[])
• Stochastic progressive photon mapping[1]


Unbiased renderers
• Arnold[]
• Blender Cycles
• Luxrender
• Fryrender
• Indigo Renderer
• Maxwell Render
• Octane Render
• NOX renderer
• Thea render
• Kerkythea (Hybrid)
• mental ray (optional)
• VRay (optional)

References
[1] http://www.luxrender.net/wiki/SPPM



Path tracing

Path tracing is a computer graphics method of rendering images of three-dimensional scenes such that the global illumination is faithful to reality. Fundamentally, the algorithm integrates over all the illuminance arriving at a single point on the surface of an object. This illuminance is then reduced by a surface reflectance function to determine how much of it will go towards the viewpoint camera. This integration procedure is repeated for every pixel in the output image. When combined with physically accurate models of surfaces, accurate models of real light sources (light bulbs), and optically correct cameras, path tracing can produce still images that are indistinguishable from photographs.

Path tracing naturally simulates many effects that have to be specifically added to other methods (conventional ray tracing or scanline rendering), such as soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting. Implementation of a renderer including these effects is correspondingly simpler.

Due to its accuracy and unbiased nature, path tracing is used to generate reference images when testing the quality of other rendering algorithms. In order to get high-quality images from path tracing, a large number of rays must be traced to avoid visible noisy artifacts.

History

The rendering equation and its use in computer graphics was presented by James Kajiya in 1986.[1] Path tracing was introduced then as an algorithm to find a numerical solution to the integral of the rendering equation. A decade later, Lafortune suggested many refinements, including bidirectional path tracing.[2] Metropolis light transport, a method of perturbing previously found paths in order to increase performance for difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.

More recently, CPUs and GPUs have become powerful enough to render images more quickly, causing more widespread interest in path tracing algorithms. Tim Purcell first presented a global illumination algorithm running on a GPU in 2002.[3] In February 2009 Austin Robison of Nvidia demonstrated the first commercial implementation of a path tracer running on a GPU,[4] and other implementations have followed, such as that of Vladimir Koylazov in August 2009.[5] This was aided by the maturing of GPGPU programming toolkits such as CUDA and OpenCL and GPU ray tracing SDKs such as OptiX.

Description

The rendering equation of Kajiya adheres to three particular principles of optics: the principle of global illumination, the principle of equivalence (reflected light is equivalent to emitted light), and the principle of direction (reflected light and scattered light have a direction).

In the real world, objects and surfaces are visible because they reflect light. This reflected light then illuminates other objects in turn. From that simple observation, two principles follow.

I. For a given indoor scene, every object in the room must contribute illumination to every other object.

II. There is no distinction to be made between illumination emitted from a light source and illumination reflected from a surface.

Invented in 1984, a rather different method called radiosity was faithful to both principles. However, radiosity equates the illuminance falling on a surface with the luminance that leaves the surface. This forced all surfaces to be Lambertian, or "perfectly diffuse". While radiosity received a lot of attention at its introduction, perfectly diffuse surfaces do not exist in the real world. The realization that illumination scattering throughout a scene must also scatter with a direction was the focus of research throughout the 1990s, since accounting for direction always exacted a price of steep increases in calculation times on desktop computers. Principle III follows.


III. The illumination coming from surfaces must scatter in a particular direction that is some function of the incoming direction of the arriving illumination and the outgoing direction being sampled.

Kajiya's equation is a complete summary of these three principles, and path tracing, which approximates a solution to the equation, remains faithful to them in its implementation. There are other principles of optics which are not the focus of Kajiya's equation, and which are therefore often difficult or incorrect to simulate with the algorithm. Path tracing is confounded by optical phenomena not contained in the three principles. For example:

• Bright, sharp caustics; radiance scales by the density of illuminance in space.
• Subsurface scattering; a violation of principle III above.
• Chromatic aberration, fluorescence and iridescence; light is a spectrum of frequencies.

Bidirectional path tracing

Sampling the integral for a point can be done by solely gathering from the surface, or by solely shooting rays from light sources.

(1) Shooting rays from the light sources and creating paths in the scene. The path is cut off at a random number of bouncing steps and the resulting light is sent through the projected pixel on the output image. During rendering, billions of paths are created, and the output image is the mean of every pixel that received some contribution.

(2) Gathering rays from a point on a surface. A ray is projected from the surface to the scene in a bouncing path that terminates when a light source is intersected. The light is then sent backwards through the path to the output pixel. The creation of a single path is called a "sample". For a single point on a surface, approximately 800 samples (up to as many as 3,000 samples) are taken. The final output for the pixel is the arithmetic mean of all those samples, not the sum.

Bidirectional path tracing combines both shooting and gathering in the same algorithm to obtain faster convergence of the integral. A shooting path and a gathering path are traced independently, and then the head of the shooting path is connected to the tail of the gathering path. The light is then attenuated at every bounce on its way back out to the pixel. This technique at first seems paradoxically slower, since for every gathering sample we additionally trace a whole shooting path. In practice, however, the extra speed of convergence far outweighs any performance loss from the extra ray casts on the shooting side.

The following pseudocode is a procedure for performing naive path tracing. This function calculates a single sample of a pixel, where only the gathering path is considered.

Color TracePath(Ray r, int depth) {
    if (depth == MaxDepth) {
        return Black;  // Bounced enough times.
    }

    r.FindNearestObject();
    if (r.hitSomething == false) {
        return Black;  // Nothing was hit.
    }

    Material m = r.thingHit->material;
    Color emittance = m.emittance;

    // Pick a random direction from here and keep going.
    Ray newRay;
    newRay.origin = r.pointWhereObjWasHit;
    // This is NOT a cosine-weighted distribution!
    newRay.direction = RandomUnitVectorInHemisphereOf(r.normalWhereObjWasHit);

    // Compute the BRDF for this ray (assuming Lambertian reflection).
    float cos_theta = DotProduct(newRay.direction, r.normalWhereObjWasHit);
    Color BRDF = m.reflectance * cos_theta;
    Color reflected = TracePath(newRay, depth + 1);

    // Apply the rendering equation here.
    return emittance + (BRDF * reflected);
}

All these samples must then be averaged to obtain the output color. Note that this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray (which is the only ray through which any radiance will be reflected) is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1). There are other considerations to take into account to ensure conservation of energy. In particular, in the naive case, the reflectance of a diffuse BRDF must not exceed 1/π or the object will reflect more light than it receives (this, however, depends on the sampling scheme used, and can be difficult to get right).
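As an illustration of importance-sampling a BRDF, the sketch below draws directions with probability density cos θ / π, the usual choice for a Lambertian surface, so the cos θ / pdf factor cancels and no explicit division is needed. Vec3, rand01 and the basis construction are illustrative helpers, not part of the pseudocode above.

#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

double rand01() { return (double)std::rand() / RAND_MAX; }

// Sample a direction about the normal n with pdf(theta) = cos(theta) / pi:
// pick a point on the unit disk and lift it onto the hemisphere (Malley's method).
Vec3 SampleCosineHemisphere(const Vec3& n) {
    const double PI = 3.14159265358979323846;
    double u1 = rand01(), u2 = rand01();
    double r = std::sqrt(u1), phi = 2.0 * PI * u2;
    double x = r * std::cos(phi), y = r * std::sin(phi), z = std::sqrt(1.0 - u1);
    // Build an orthonormal basis (t, b, n) around the surface normal.
    Vec3 a = (std::fabs(n.x) > 0.9) ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 t = normalize(cross(a, n));
    Vec3 b = cross(n, t);
    return { t.x * x + b.x * y + n.x * z,
             t.y * x + b.y * y + n.y * z,
             t.z * x + b.z * y + n.z * z };
}

With this sampler, the Lambertian estimator reduces to reflectance times incoming radiance, which also makes the energy bound mentioned above easier to respect.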

Performance

A path tracer continuously samples pixels of an image. The image starts to become recognisable after only a few samples per pixel, perhaps 100. However, for the image to "converge" and reduce noise to acceptable levels usually takes around 5,000 samples for most images, and many more for pathological cases. Noise is particularly a problem for animations, giving them a normally unwanted "film grain" quality of random speckling.

The central performance bottleneck in path tracing is the complex geometrical calculation of casting a ray. Importance sampling is a technique that casts fewer rays through the scene while still converging correctly to the outgoing luminance at the surface point. This is done by casting more rays in directions in which the luminance would have been greater anyway. If the density of rays cast in certain directions matches the strength of contributions in those directions, the result is identical, but far fewer rays were actually cast. Importance sampling is used to match ray density to Lambert's cosine law, and also to match BRDFs.

Metropolis light transport can result in a lower-noise image with fewer samples. This algorithm was created in order to get faster convergence in scenes in which the light must pass through odd corridors or small holes in order to reach the part of the scene that the camera is viewing. It has also shown promise in correctly rendering pathological situations with caustics. Instead of generating random paths, new sampling paths are created as slight mutations of existing ones. In this sense, the algorithm "remembers" the successful paths from light sources to the camera.


Scattering distribution functions

The reflective properties (amount, direction and colour) of surfaces are modelled using BRDFs. The equivalent for transmitted light (light that goes through the object) are BSDFs. A path tracer can take full advantage of complex, carefully modelled or measured distribution functions, which control the appearance ("material", "texture" or "shading" in computer graphics terms) of an object.

In real time

An example of an advanced path tracing engine capable of real-time graphics is Brigade [6] by Jacco Bikker. The first version of this highly optimized, game-oriented engine was released on January 26, 2012. It is the successor to the Arauna real-time ray tracing engine by the same author, and it requires Nvidia's CUDA architecture to run.


Notes
[1] http://en.wikipedia.org/wiki/Path_tracing#endnote_kajiya1986rendering
[2] http://en.wikipedia.org/wiki/Path_tracing#endnote_lafortune1996mathematical
[3] http://en.wikipedia.org/wiki/Path_tracing#endnote_purcell2002ray
[4] http://en.wikipedia.org/wiki/Path_tracing#endnote_robisonNVIRT
[5] http://en.wikipedia.org/wiki/Path_tracing#endnote_pathGPUimplementations
[6] http://igad.nhtv.nl/~bikker/

1. ^ Kajiya, J. T. (1986). "The rendering equation". Proceedings of the 13th annual conference on Computer graphics and interactive techniques. ACM. CiteSeerX: 10.1.1.63.1402 (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.63.1402).
2. ^ Lafortune, E, Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering (http://www.graphics.cornell.edu/~eric/thesis/index.html) (PhD thesis), 1996.
3. ^ Purcell, T J; Buck, I; Mark, W; and Hanrahan, P, "Ray Tracing on Programmable Graphics Hardware", Proc. SIGGRAPH 2002, 703–712. See also Purcell, T, Ray tracing on a stream processor (http://graphics.stanford.edu/papers/tpurcell_thesis/) (PhD thesis), 2004.
4. ^ Robison, Austin, "Interactive Ray Tracing on the GPU and NVIRT Overview" (http://realtimerendering.com/downloads/NVIRT-Overview.pdf), slide 37, I3D 2009.
5. ^ Vray demo (http://www.youtube.com/watch?v=eRoSFNRQETg); other examples include Octane Render, Arion, and Luxrender.
6. ^ Veach, E., and Guibas, L. J. Metropolis light transport (http://graphics.stanford.edu/papers/metro/metro.pdf). In SIGGRAPH '97 (August 1997), pp. 65–76.
7. This "Introduction to Global Illumination" (http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm) has some good example images, demonstrating the image noise, caustics and indirect lighting properties of images rendered with path tracing methods. It also discusses possible performance improvements in some detail.
8. SmallPt (http://www.kevinbeason.com/smallpt/) is an educational path tracer by Kevin Beason. It uses 99 lines of C++ (including scene description). This page has a good set of examples of noise resulting from this technique.

Radiosity

Radiosity is a global illumination algorithm used in 3D computer graphics rendering. Radiosity is an application of the finite element method to solving the rendering equation for scenes with surfaces that reflect light diffusely. Unlike rendering methods that use Monte Carlo algorithms (such as path tracing), which handle all types of light paths, typical radiosity methods only account for paths which leave a light source and are reflected diffusely some number of times (possibly zero) before hitting the eye; such paths are represented by the code "LD*E". Radiosity is a global illumination algorithm in the sense that the illumination arriving at the eye comes not just from the light sources, but from all the scene surfaces interacting with each other as well. Radiosity calculations are viewpoint independent, which increases the computations involved but makes them useful for all viewpoints.

[Image: Screenshot of a scene rendered with RRV (a simple implementation of a radiosity renderer based on OpenGL), 79th iteration.[1]]

Radiosity methods were first developed in about 1950 in the engineering field of heat transfer. They were later refined specifically for application to the problem of rendering computer graphics in 1984 by researchers at Cornell University.[2] Notable commercial radiosity engines are Enlighten by Geomerics (used for games including Battlefield 3 and Need for Speed: The Run), 3D Studio Max, form•Z, LightWave 3D and the Electric Image Animation System.

Visual characteristics

The inclusion of radiosity calculations in the rendering process often lends an added element of realism to the finished scene, because of the way it mimics real-world phenomena. Consider a simple room scene. The image on the left was rendered with a typical direct illumination renderer. There are three types of lighting in this scene which have been specifically chosen and placed by the artist in an attempt to create realistic lighting: spot lighting with shadows (placed outside the window to create the light shining on the floor), ambient lighting (without which any part of the room not lit directly by a light source would be totally dark), and omnidirectional lighting without shadows (to reduce the flatness of the ambient lighting).

[Image: Difference between standard direct illumination without shadow umbra, and radiosity with shadow umbra.]


The image on the right was rendered using a radiosity algorithm. There is only one source of light: an image of the sky placed outside the window. The difference is marked. The room glows with light. Soft shadows are visible on the floor, and subtle lighting effects are noticeable around the room. Furthermore, the red color from the carpet has bled onto the grey walls, giving them a slightly warm appearance. None of these effects were specifically chosen or designed by the artist.

Overview of the radiosity algorithm

The surfaces of the scene to be rendered are each divided up into one or more smaller surfaces (patches). A view factor is computed for each pair of patches. View factors (also known as form factors) are coefficients describing how well the patches can see each other. Patches that are far away from each other, or oriented at oblique angles relative to one another, will have smaller view factors. If other patches are in the way, the view factor will be reduced or zero, depending on whether the occlusion is partial or total.

The view factors are used as coefficients in a linearized form of the rendering equation, which yields a linear system of equations. Solving this system yields the radiosity, or brightness, of each patch, taking into account diffuse interreflections and soft shadows.

Progressive radiosity solves the system iteratively in such a way that after each iteration we have intermediate radiosity values for the patch. These intermediate values correspond to bounce levels. That is, after one iteration, we know how the scene looks after one light bounce; after two passes, two bounces; and so forth. Progressive radiosity is useful for getting an interactive preview of the scene. Also, the user can stop the iterations once the image looks good enough, rather than wait for the computation to numerically converge. Another common method for solving the radiosity equation is "shooting radiosity," which iteratively solves the radiosity equation by "shooting" light from the patch with the most error at each step.

After the first pass, only those patches which are in direct line of sight of a light-emitting patch will be illuminated. After the second pass, more patches will become illuminated as the light begins to bounce around the scene. The scene continues to grow brighter and eventually reaches a steady state.

[Image: As the algorithm iterates, light can be seen to flow into the scene, as multiple bounces are computed. Individual patches are visible as squares on the walls and floor.]

Mathematical formulation

The basic radiosity method has its basis in the theory of thermal radiation, since radiosity relies on computing the amount of light energy transferred among surfaces. In order to simplify computations, the method assumes that all scattering is perfectly diffuse. Surfaces are typically discretized into quadrilateral or triangular elements over which a piecewise polynomial function is defined.

After this breakdown, the amount of light energy transfer can be computed by using the known reflectivity of the reflecting patch, combined with the view factor of the two patches. This dimensionless quantity is computed from the geometric orientation of two patches, and can be thought of as the fraction of the total possible emitting area of the first patch which is covered by the second patch.

More correctly, radiosity B is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy:

    B(x) dA = E(x) dA + ρ(x) dA ∫_S B(x′) (cos θ_x cos θ_x′ / (π r²)) Vis(x, x′) dA′

where:


• B(x) dA is the total energy leaving a small area dA around a point x.
• E(x) dA is the emitted energy.
• ρ(x) is the reflectivity of the point, giving reflected energy per unit area by multiplying by the incident energy per unit area (the total energy which arrives from other patches).
• S denotes that the integration variable x′ runs over all the surfaces in the scene.
• r is the distance between x and x′.
• θ_x and θ_x′ are the angles between the line joining x and x′ and the vectors normal to the surface at x and x′ respectively.
• Vis(x, x′) is a visibility function, defined to be 1 if the two points x and x′ are visible from each other, and 0 if they are not.

If the surfaces are approximated by a finite number of planar patches, each of which is taken to have a constant radiosity B_i and reflectivity ρ_i, the above equation gives the discrete radiosity equation

    B_i = E_i + ρ_i Σ_j F_ij B_j

where F_ij is the geometrical view factor for the radiation leaving j and hitting patch i. This equation can then be applied to each patch. The equation is monochromatic, so color radiosity rendering requires calculation for each of the required colors.

Solution methods

The equation can formally be solved as a matrix equation, to give the vector solution:

    B = (I − ρF)⁻¹ E

[Figure: The geometrical form factor (or "projected solid angle") F_ij. F_ij can be obtained by projecting the element A_j onto the surface of a unit hemisphere, and then projecting that in turn onto a unit circle around the point of interest in the plane of A_i. The form factor is then equal to the proportion of the unit circle covered by this projection. Form factors obey the reciprocity relation A_i F_ij = A_j F_ji.]

This gives the full "infinite bounce" solution for B directly. However, the number of calculations to compute the matrix solution scales according to n³, where n is the number of patches. This becomes prohibitive for realistically large values of n. Instead, the equation can more readily be solved iteratively, by repeatedly applying the single-bounce update formula above. Formally, this is a solution of the matrix equation by Jacobi iteration. Because the reflectivities ρ_i are less than 1, this scheme converges quickly, typically requiring only a handful of iterations to produce a reasonable solution. Other standard iterative methods for matrix equation solutions can also be used, for example the Gauss–Seidel method, where updated values for each patch are used in the calculation as soon as they are computed, rather than all being updated synchronously at the end of each sweep.

The solution can also be tweaked to iterate over each of the sending elements in turn in its main outermost loop for each update, rather than each of the receiving patches. This is known as the shooting variant of the algorithm, as opposed to the gathering variant. Using the view factor reciprocity, A_i F_ij = A_j F_ji, the update equation can also be re-written in terms of the view factor F_ji seen by each sending patch A_j:

    A_i B_i = A_i E_i + ρ_i Σ_j A_j B_j F_ji

This is sometimes known as the "power" formulation, since it is now the total transmitted power of each element that is being updated, rather than its radiosity.

The view factor F_ij itself can be calculated in a number of ways. Early methods used a hemicube (an imaginary cube centered upon the first surface to which the second surface was projected, devised by Cohen and Greenberg in 1985).

Radiosity The surface of the hemicube was divided into pixel-like squares, for each of which a view factor can be readily calculated analytically. The full form factor could then be approximated by adding up the contribution from each of the pixel-like squares. The projection onto the hemicube, which could be adapted from standard methods for determining the visibility of polygons, also solved the problem of intervening patches partially obscuring those behind. However all this was quite computationally expensive, because ideally form factors must be derived for every possible pair of patches, leading to a quadratic increase in computation as the number of patches increased. This can be reduced somewhat by using a binary space partitioning tree to reduce the amount of time spent determining which patches are completely hidden from others in complex scenes; but even so, the time spent to determine the form factor still typically scales as n log n. New methods include adaptive integration[3]

Sampling approaches

The form factors F_ij themselves are not in fact explicitly needed in either of the update equations; neither to estimate the total intensity ∑_j F_ij B_j gathered from the whole view, nor to estimate how the power A_j B_j being radiated is distributed. Instead, these updates can be estimated by sampling methods, without ever having to calculate form factors explicitly. Since the mid-1990s such sampling approaches have been the methods most predominantly used for practical radiosity calculations.

The gathered intensity can be estimated by generating a set of samples in the unit circle, lifting these onto the hemisphere, and then seeing what was the radiosity of the element that a ray incoming in that direction would have originated on. The estimate for the total gathered intensity is then just the average of the radiosities discovered by each ray. Similarly, in the power formulation, power can be distributed by generating a set of rays from the radiating element in the same way, and spreading the power to be distributed equally between each element a ray hits.

This is essentially the same distribution that a path-tracing program would sample in tracing back one diffuse reflection step, or that a bidirectional ray tracing program would sample to achieve one forward diffuse reflection step when light source mapping forwards. The sampling approach therefore to some extent represents a convergence between the two techniques, the key difference remaining that the radiosity technique aims to build up a sufficiently accurate map of the radiance of all the surfaces in the scene, rather than just a representation of the current view.

Reducing computation time

Although in its basic form radiosity is assumed to have a quadratic increase in computation time with added geometry (surfaces and patches), this need not be the case. The radiosity problem can be rephrased as a problem of rendering a texture-mapped scene. In this case, the computation time increases only linearly with the number of patches (ignoring complex issues like cache use).

Following the commercial enthusiasm for radiosity-enhanced imagery, but prior to the standardization of rapid radiosity calculation, many architects and graphic artists used a technique referred to loosely as false radiosity. By darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping, a radiosity-like effect of patch interaction could be created with a standard scanline renderer (cf. ambient occlusion).

Radiosity solutions may be displayed in realtime via lightmaps on current desktop computers with standard graphics acceleration hardware.


Advantages

One of the advantages of the radiosity algorithm is that it is relatively simple to explain and implement. This makes it a useful algorithm for teaching students about global illumination algorithms. A typical direct illumination renderer already contains nearly all of the algorithms (perspective transformations, texture mapping, hidden surface removal) required to implement radiosity. A strong grasp of mathematics is not required to understand or implement this algorithm.[citation needed]

Limitations

[Image: A modern render of the iconic Utah teapot. Radiosity was used for all diffuse illumination in this scene.]

Typical radiosity methods only account for light paths of the form LD*E, i.e., paths which start at a light source and make multiple diffuse bounces before reaching the eye. Although there are several approaches to integrating other illumination effects such as specular[4] and glossy[5] reflections, radiosity-based methods are generally not used to solve the complete rendering equation. Basic radiosity also has trouble resolving sudden changes in visibility (e.g., hard-edged shadows) because coarse, regular discretization into piecewise constant elements corresponds to a low-pass box filter of the spatial domain. Discontinuity meshing[6] uses knowledge of visibility events to generate a more intelligent discretization.

Confusion about terminology

Radiosity was perhaps the first rendering algorithm in widespread use which accounted for diffuse indirect lighting. Earlier rendering algorithms, such as Whitted-style ray tracing, were capable of computing effects such as reflections, refractions, and shadows, but despite these being highly global phenomena, such effects were not commonly referred to as "global illumination." As a consequence, the term "global illumination" became confused with "diffuse interreflection," and "radiosity" became confused with "global illumination" in popular parlance. However, the three are distinct concepts.

The radiosity method in the current computer graphics context derives from (and is fundamentally the same as) the radiosity method in heat transfer. In this context, radiosity is the total radiative flux (both reflected and re-radiated) leaving a surface, also sometimes known as radiant exitance. Calculation of radiosity, rather than surface temperatures, is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.

References
[2] Cindy Goral, Kenneth E. Torrance, Donald P. Greenberg and B. Battaile, "Modeling the interaction of light between diffuse surfaces" (http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S07/lectures/goral.pdf), Computer Graphics, Vol. 18, No. 3.
[3] G Walton, Calculation of Obstructed View Factors by Adaptive Integration, NIST Report NISTIR-6925 (http://www.bfrl.nist.gov/IAQanalysis/docs/NISTIR-6925.pdf); see also http://view3d.sourceforge.net/
[4] http://portal.acm.org/citation.cfm?id=37438&coll=portal&dl=ACM
[5] http://www.cs.huji.ac.il/labs/cglab/papers/clustering/
[6] http://www.cs.cmu.edu/~ph/discon.ps.gz


Further reading
• Radiosity Overview, from HyperGraph of SIGGRAPH (http://www.siggraph.org/education/materials/HyperGraph/radiosity/overview_1.htm) (provides the full matrix radiosity algorithm and the progressive radiosity algorithm)
• Radiosity, by Hugo Elias (http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm) (also provides a general overview of lighting algorithms, along with programming examples)
• Radiosity, by Allen Martin (http://web.cs.wpi.edu/~matt/courses/cs563/talks/radiosity.html) (a slightly more mathematical explanation of radiosity)
• ROVER, by Tralvex Yeap (http://www.tralvex.com/pub/rover/abs-mnu.htm) (Radiosity Abstracts & Bibliography Library)

External links
• RADical, by Parag Chaudhuri (http://www.cse.iitd.ernet.in/~parag/projects/CG2/asign2/report/RADical.shtml) (an implementation of the shooting & sorting variant of the progressive radiosity algorithm with OpenGL acceleration, extending from GLUTRAD by Colbeck)
• Radiosity Renderer and Visualizer (http://dudka.cz/rrv) (simple implementation of a radiosity renderer based on OpenGL)
• Enlighten (http://www.geomerics.com) (licensed software code that provides realtime radiosity for computer game applications; developed by the UK company Geomerics)

Photon mapping

In computer graphics, photon mapping is a two-pass global illumination algorithm developed by Henrik Wann Jensen that approximately solves the rendering equation. Rays from the light source and rays from the camera are traced independently until some termination criterion is met, then they are connected in a second step to produce a radiance value. It is used to realistically simulate the interaction of light with different objects. Specifically, it is capable of simulating the refraction of light through a transparent substance such as glass or water, diffuse interreflection between illuminated objects, the subsurface scattering of light in translucent materials, and some of the effects caused by particulate matter such as smoke or water vapor. It can also be extended to more accurate simulations of light, such as spectral rendering.

Unlike path tracing, bidirectional path tracing and Metropolis light transport, photon mapping is a "biased" rendering algorithm, which means that averaging many renders using this method does not converge to a correct solution to the rendering equation. However, since it is a consistent method, a correct solution can be achieved by increasing the number of photons.


Effects

Caustics
Light refracted or reflected causes patterns called caustics, usually visible as concentrated patches of light on nearby surfaces. For example, as light rays pass through a wine glass sitting on a table, they are refracted and patterns of light are visible on the table. Photon mapping can trace the paths of individual photons to model where these concentrated patches of light will appear.

[Image: A model of a wine glass ray traced with photon mapping to show caustics.]

Diffuse interreflection
Diffuse interreflection is apparent when light from one diffuse object is reflected onto another. Photon mapping is particularly adept at handling this effect because the algorithm reflects photons from one surface to another based on that surface's bidirectional reflectance distribution function (BRDF), and thus light from one object striking another is a natural result of the method. Diffuse interreflection was first modeled using radiosity solutions. Photon mapping differs though in that it separates the light transport from the nature of the geometry in the scene. Color bleed is an example of diffuse interreflection.

Subsurface scattering
Subsurface scattering is the effect evident when light enters a material and is scattered before being absorbed or reflected in a different direction. Subsurface scattering can accurately be modeled using photon mapping. This was the original way Jensen implemented it; however, the method becomes slow for highly scattering materials, and bidirectional surface scattering reflectance distribution functions (BSSRDFs) are more efficient in these situations.

Usage

Construction of the photon map (1st pass)
With photon mapping, light packets called photons are sent out into the scene from the light sources. Whenever a photon intersects with a surface, the intersection point and incoming direction are stored in a cache called the photon map. Typically, two photon maps are created for a scene: one especially for caustics and a global one for other light. After intersecting the surface, a probability for either reflecting, absorbing, or transmitting/refracting is given by the material. A Monte Carlo method called Russian roulette is used to choose one of these actions (see the sketch below). If the photon is absorbed, no new direction is given, and tracing for that photon ends. If the photon reflects, the surface's bidirectional reflectance distribution function is used to determine the ratio of reflected radiance. Finally, if the photon is transmitting, a function for its direction is given depending upon the nature of the transmission.

Once the photon map is constructed (or during construction), it is typically arranged in a manner that is optimal for the k-nearest-neighbor algorithm, as photon look-up time depends on the spatial distribution of the photons. Jensen advocates the usage of kd-trees. The photon map is then stored on disk or in memory for later usage.
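A minimal sketch of that Russian-roulette choice; the probability fields and names are illustrative, not a specific renderer's API:

#include <random>

enum class PhotonAction { Reflect, Transmit, Absorb };

// Pick one action with probabilities supplied by the material
// (pReflect + pTransmit <= 1; the remainder is absorption).
PhotonAction RussianRoulette(double pReflect, double pTransmit, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double xi = u(rng);
    if (xi < pReflect) return PhotonAction::Reflect;
    if (xi < pReflect + pTransmit) return PhotonAction::Transmit;
    return PhotonAction::Absorb;      // tracing of this photon ends
}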


Rendering (2nd pass)
In this step of the algorithm, the photon map created in the first pass is used to estimate the radiance of every pixel of the output image. For each pixel, the scene is ray traced until the closest surface of intersection is found.

At this point, the rendering equation is used to calculate the surface radiance leaving the point of intersection in the direction of the ray that struck it. To facilitate efficiency, the equation is decomposed into four separate factors: direct illumination, specular reflection, caustics, and soft indirect illumination.

For an accurate estimate of direct illumination, a ray is traced from the point of intersection to each light source. As long as a ray does not intersect another object, the light source is used to calculate the direct illumination. For an approximate estimate of indirect illumination, the photon map is used to calculate the radiance contribution.

Specular reflection can be, in most cases, calculated using ray tracing procedures (as it handles reflections well).

The contribution to the surface radiance from caustics is calculated using the caustics photon map directly. The number of photons in this map must be sufficiently large, as the map is the only source for caustics information in the scene.

For soft indirect illumination, radiance is calculated using the photon map directly. This contribution, however, does not need to be as accurate as the caustics contribution and thus uses the global photon map.

Calculating radiance using the photon map
In order to calculate surface radiance at an intersection point, one of the cached photon maps is used. The steps are:
1. Gather the N nearest photons using the nearest neighbor search function on the photon map.
2. Let S be the sphere that contains these N photons.
3. For each photon, divide the amount of flux (real photons) that the photon represents by the area of S and multiply by the BRDF applied to that photon.
4. The sum of those results for each photon represents the total surface radiance returned by the surface intersection in the direction of the ray that struck it.
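A sketch of that estimate, following the steps above. The flat disc area πr² is the conventional choice in Jensen's estimate for the area of S, and Photon, albedo and the Lambertian BRDF below are illustrative stand-ins for a full material system.

#include <vector>

struct Vec3 { double x, y, z; };
struct Color { double r, g, b; };
struct Photon { Vec3 pos; Vec3 dir; Color flux; };

// Step 3's BRDF: for a Lambertian surface this is albedo / pi, independent of
// direction; a real renderer would evaluate the surface's full BRDF here.
Color LambertBRDF(const Color& albedo) {
    const double PI = 3.14159265358979323846;
    return { albedo.r / PI, albedo.g / PI, albedo.b / PI };
}

// Radiance estimate from the N nearest photons (steps 1-4 above). `nearest`
// is assumed to be the result of a k-nearest-neighbor query on the photon
// map, and `maxDist2` the squared distance to the farthest of them.
Color EstimateRadiance(const std::vector<Photon>& nearest, double maxDist2,
                       const Color& albedo) {
    const double PI = 3.14159265358979323846;
    double area = PI * maxDist2;            // step 2: disc area of the bounding sphere
    Color L = {0, 0, 0};
    Color f = LambertBRDF(albedo);
    for (const Photon& p : nearest) {       // step 3: flux / area, weighted by BRDF
        L.r += f.r * p.flux.r / area;
        L.g += f.g * p.flux.g / area;
        L.b += f.b * p.flux.b / area;
    }
    return L;                               // step 4: sum over the gathered photons
}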

Optimizations
• To avoid emitting unneeded photons, the initial direction of the outgoing photons is often constrained. Instead of simply sending out photons in random directions, they are sent in the direction of a known object that is a desired photon manipulator to either focus or diffuse the light. There are many other refinements that can be made to the algorithm: for example, choosing the number of photons to send, and where and in what pattern to send them. It would seem that emitting more photons in a specific direction would cause a higher density of photons to be stored in the photon map around the position where the photons hit, and thus measuring this density would give an inaccurate value for irradiance. This is true; however, the algorithm used to compute radiance does not depend on irradiance estimates.
• For soft indirect illumination, if the surface is Lambertian, then a technique known as irradiance caching may be used to interpolate values from previous calculations.
• To avoid unnecessary collision testing in direct illumination, shadow photons can be used. During the photon mapping process, when a photon strikes a surface, in addition to the usual operations performed, a shadow photon is emitted in the same direction the original photon came from that goes all the way through the object. The next object it collides with causes a shadow photon to be stored in the photon map. Then during the direct illumination calculation, instead of sending out a ray from the surface to the light that tests collisions with objects, the photon map is queried for shadow photons. If none are present, then the object has a clear line of sight to the light source and additional calculations can be avoided.
• To optimize image quality, particularly of caustics, Jensen recommends use of a cone filter. Essentially, the filter gives weight to photons' contributions to radiance depending on how far they are from ray-surface intersections. This can produce sharper images.
• Image space photon mapping[1] achieves real-time performance by computing the first and last scattering using a GPU rasterizer.

Variations

• Although photon mapping was designed to work primarily with ray tracers, it can also be extended for use with scanline renderers.

External links

• Global Illumination using Photon Maps [2]
• Realistic Image Synthesis Using Photon Mapping [3] ISBN 1-56881-147-0
• Photon mapping introduction [4] from Worcester Polytechnic Institute
• Bias in Rendering [5]
• Siggraph Paper [6]

References

[1] http://research.nvidia.com/publication/hardware-accelerated-global-illumination-image-space-photon-mapping
[2] http://graphics.ucsd.edu/~henrik/papers/photon_map/global_illumination_using_photon_maps_egwr96.pdf
[3] http://graphics.ucsd.edu/~henrik/papers/book/
[4] http://www.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html
[5] http://www.cgafaq.info/wiki/Bias_in_rendering
[6] http://www.cs.princeton.edu/courses/archive/fall02/cs526/papers/course43sig02.pdf

Metropolis light transport

Metropolis light transport (MLT) is a global illumination method described in a SIGGRAPH 1997 paper by Eric Veach and Leonidas J. Guibas. It applies a variant of the Monte Carlo method called the Metropolis-Hastings algorithm to the rendering equation in order to generate images from detailed physical descriptions of three-dimensional scenes.

The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. A careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons.

In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path.

Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing.[citation needed]
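The heart of the method is the Metropolis acceptance step: a proposed mutation of the current path is accepted with a probability given by the ratio of the two paths' image contributions, so that over time paths are visited in proportion to how much light they carry. A minimal sketch, assuming hypothetical mutate() and contribution() functions and a symmetric mutation strategy:

    import random

    def metropolis_step(path, mutate, contribution):
        # Propose a small, symmetric mutation of the current light path.
        candidate = mutate(path)
        f_old = contribution(path)       # scalar image contribution
        f_new = contribution(candidate)
        # Accept with probability min(1, f_new / f_old); this keeps the
        # chain of paths distributed in proportion to their contribution.
        if f_old == 0.0 or random.random() < min(1.0, f_new / f_old):
            return candidate
        return path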


External links

• Metropolis project at Stanford (http://graphics.stanford.edu/papers/metro/)
• LuxRender - an open source render engine that supports MLT (http://www.luxrender.net/)
• Kerkythea 2008 - a freeware rendering system that uses MLT (http://www.kerkythea.net/)
• A Practical Introduction to Metropolis Light Transport (http://rivit.cs.byu.edu/a3dg/publications/metropolisTutorial.pdf)
• Unbiased physically based rendering on the GPU (http://repository.tudelft.nl/view/ir/uuid:4a5be464-dc52-4bd0-9ede-faefdaff8be6/)


Other Topics

Anti-aliasing

In digital signal processing, spatial anti-aliasing is the technique of minimizing the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications.

Anti-aliasing means removing signal components that have a higher frequency than the recording (or sampling) device can properly resolve. This removal is done before (re)sampling at a lower resolution. When sampling is performed without removing this part of the signal, it causes undesirable artifacts such as the black-and-white noise near the top of figure 1-a below.

In signal acquisition and audio, anti-aliasing is often done using an analog anti-aliasing filter to remove the out-of-band component of the input signal prior to sampling with an analog-to-digital converter. In digital photography, optical anti-aliasing filters are made of birefringent materials and smooth the signal in the spatial optical domain. The anti-aliasing filter essentially blurs the image slightly in order to reduce the resolution to or below that achievable by the digital sensor (the larger the pixel pitch, the lower the achievable resolution at the sensor level).

Examples

Figure 1: three versions of the example scene: (a) aliased, (b) anti-aliased, (c) anti-aliased with a sinc-based filter (see discussion below).

In computer graphics, anti-aliasing improves the appearance of polygon edges, so they are not "jagged" but are smoothed out on the screen. However, it incurs a performance cost for the graphics card and uses more video memory. The level of anti-aliasing determines how smooth polygon edges are (and how much video memory it consumes).

Figure 1-a illustrates the visual distortion that occurs when anti-aliasing is not used. Near the top of the image, where the checkerboard is very small, the image is both difficult to recognize and not aesthetically appealing. In contrast, Figure 1-b shows an anti-aliased version of the scene. The checkerboard near the top blends into gray, which is usually the desired effect when the resolution is insufficient to show the detail. Even near the bottom of the image, the edges appear much smoother in the anti-aliased image. Figure 1-c shows another anti-aliasing algorithm, based on the sinc filter, which is considered better than the algorithm used in 1-b.[]

Figure 2 shows magnified portions (interpolated using the nearest neighbor algorithm) of Figure 1-a (left) and 1-c (right) for comparison. In Figure 1-c, anti-aliasing has interpolated the brightness of the pixels at the boundaries to produce gray pixels since the space is occupied by both black and white tiles. These help make Figure 1-c appear much smoother than Figure 1-a at the original magnification.

In Figure 3, anti-aliasing was used to blend the boundary pixels of a sample graphic; this reduced the aesthetically jarring effect of the sharp, step-like boundaries that appear in the aliased graphic at the left.

Anti-aliasing is often applied in rendering text on a computer screen, to suggest smooth contours that better emulate the appearance of text produced by conventional ink-and-paper printing. Particularly with fonts displayed on typical LCD screens, it is common to use subpixel rendering techniques like ClearType. Subpixel rendering requires special color-balanced anti-aliasing filters to turn what would be severe color distortion into barely noticeable color fringes. Equivalent results can be had by making individual subpixels addressable as if they were full pixels, and supplying a hardware-based anti-aliasing filter as is done in the OLPC XO-1 laptop's display controller. Pixel geometry affects all of this, whether the anti-aliasing and subpixel addressing are done in software or hardware.

Figure 3: above left, an aliased version of a simple shape; above right, an anti-aliased version of the same shape; right, the anti-aliased graphic at 5x magnification.

Signal processing approach to anti-aliasing

In this approach, the ideal image is regarded as a signal. The image displayed on the screen is taken as samples, at each (x, y) pixel position, of a filtered version of the signal. Ideally, one would understand how the human brain would process the original signal, and provide an on-screen image that will yield the most similar response by the brain.

The most widely accepted analytic tool for such problems is the Fourier transform; this decomposes a signal into basis functions of different frequencies, known as frequency components, and gives us the amplitude of each frequency component in the signal. The waves are of the form:

    cos(2π j x) · cos(2π k y)

where j and k are arbitrary non-negative integers. There are also frequency components involving the sine functions in one or both dimensions, but for the purpose of this discussion, the cosine will suffice. The numbers j and k together are the frequency of the component: j is the frequency in the x direction, and k is the frequency in the y direction.

The goal of an anti-aliasing filter is to greatly reduce frequencies above a certain limit, known as the Nyquist frequency, so that the signal will be accurately represented by its samples, or nearly so, in accordance with the sampling theorem; there are many different choices of detailed algorithm, with different filter transfer functions. Current knowledge of human visual perception is not sufficient, in general, to say what approach will look best.


Two dimensional considerations

The previous discussion assumes that the rectangular mesh sampling is the dominant part of the problem. The filter usually considered optimal is not rotationally symmetrical, as shown in the first figure (a sinc function with separate X and Y components); this is because the data is sampled on a square lattice, not using a continuous image. This sampling pattern is the justification for doing signal processing along each axis, as it is traditionally done on one-dimensional data. Lanczos resampling is based on convolution of the data with a discrete representation of the sinc function.

If the resolution is not limited by the rectangular sampling rate of either the source or target image, then one should ideally use rotationally symmetrical filter or interpolation functions, as though the data were a two-dimensional function of continuous x and y. The sinc function of the radius, in the second figure, has too long a tail to make a good filter (it is not even square-integrable). A more appropriate analog to the one-dimensional sinc is the two-dimensional Airy disc amplitude, the 2D Fourier transform of a circular region in 2D frequency space, as opposed to a square region.

One might consider a Gaussian plus enough of its second derivative to flatten the top (in the frequency domain) or sharpen it up (in the spatial domain), as shown (a Gaussian plus differential function). Functions based on the Gaussian function are natural choices, because convolution with a Gaussian gives another Gaussian whether applied to x and y or to the radius. Similarly to wavelets, another of its properties is that it is halfway between being localized in the configuration (x and y) and in the spectral (j and k) representation. As an interpolation function, a Gaussian alone seems too spread out to preserve the maximum possible detail, and thus the second derivative is added.

As an example, when printing a photographic negative with plentiful processing capability and on a printer with a hexagonal pattern, there is no reason to use sinc function interpolation. Such interpolation would treat diagonal lines differently from horizontal and vertical lines, which is like a weak form of aliasing.

Practical real-time anti-aliasing approximations

There are only a handful of primitives used at the lowest level in a real-time rendering engine (either software or hardware accelerated). These include "points", "lines" and "triangles". If one is to draw such a primitive in white against a black background, it is possible to design such a primitive to have fuzzy edges, achieving some sort of anti-aliasing. However, this approach has difficulty dealing with adjacent primitives (such as triangles that share an edge).

To approximate the uniform averaging algorithm, one may use an extra buffer for sub-pixel data. The initial (and least memory-hungry) approach used 16 extra bits per pixel, in a 4×4 grid. If one renders the primitives in a careful order, such as front-to-back, it is possible to create a reasonable image. Since this requires that the primitives be in some order, and hence interacts poorly with an application programming interface such as OpenGL, the latest methods simply have two or more full sub-pixels per pixel, including full color information for each sub-pixel. Some information may be shared between the sub-pixels (such as the Z-buffer).


Mipmapping

There is also an approach specialized for texture mapping called mipmapping, which works by creating lower resolution, prefiltered versions of the texture map. When rendering the image, the appropriate-resolution mipmap is chosen and hence the texture pixels (texels) are already filtered when they arrive on the screen. Mipmapping is generally combined with various forms of texture filtering in order to improve the final result.
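A minimal sketch of how such a prefiltered chain can be built, assuming a square, power-of-two grayscale image stored as a list of rows; each level is produced from the previous one by 2x2 box filtering:

    def build_mip_chain(image):
        # image: square, power-of-two grayscale image as a list of rows.
        levels = [image]
        while len(levels[-1]) > 1:
            src = levels[-1]
            half = len(src) // 2
            # Each new texel is the average of a 2x2 block one level up.
            dst = [[(src[2 * y][2 * x] + src[2 * y][2 * x + 1] +
                     src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1]) / 4.0
                    for x in range(half)]
                   for y in range(half)]
            levels.append(dst)
        return levels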

An example of an image with extreme pseudo-random aliasing

Because fractals have unlimited detail and no noise other than arithmetic roundoff error, they illustrate aliasing more clearly than do photographs or other measured data. The escape times, which are converted to colors at the exact centers of the pixels, go to infinity at the border of the set, so colors from centers near borders are unpredictable, due to aliasing. This example has edges in about half of its pixels, so it shows much aliasing. The first image is uploaded at its original sampling rate. (Since most modern software anti-aliases, one may have to download the full-size version to see all of the aliasing.) The second image is calculated at five times the sampling rate and down-sampled with anti-aliasing. Assuming that one would really like something like the average color over each pixel, this one is getting closer. It is clearly more orderly than the first. In order to properly compare these images, viewing them at full-scale is necessary.

1. As calculated with the program "MandelZot"

2. Anti-aliased by blurring and down-sampling by a factor of five

3. Edge points interpolated, then anti-aliased and down-sampled

4. An enhancement of the points removed from the previous image

5. Down-sampled again, without anti-aliasing

It happens that, in this case, there is additional information that can be used. By re-calculating with the distance estimator, points were identified that are very close to the edge of the set, so that unusually fine detail is aliased in from the rapidly changing escape times near the edge of the set. The colors derived from these calculated points have been identified as unusually unrepresentative of their pixels. Those points were replaced, in the third image, by interpolating the points around them. This reduces the noisiness of the image but has the side effect of brightening the colors. So this image is not exactly the same that would be obtained with an even larger set of calculated points. To show what was discarded, the rejected points, blended into a grey background, are shown in the fourth image.

Finally, "Budding Turbines" is so regular that systematic (Moiré) aliasing can clearly be seen near the main "turbine axis" when it is downsized by taking the nearest pixel. The aliasing in the first image appears random because it comes from all levels of detail, below the pixel size. When the lower level aliasing is suppressed, to make the third image, and then that is down-sampled once more, without anti-aliasing, to make the fifth image, the order on the scale of the third image appears as systematic aliasing in the fifth image.

The best anti-aliasing and down-sampling method here depends on one's point of view. When fitting the most data into a limited array of pixels, as in the fifth image, sinc function anti-aliasing would seem appropriate. In obtaining the second and third images, the main objective is to filter out aliasing "noise", so a rotationally symmetrical function may be more appropriate. Pure down-sampling of an image has the following effect (viewing at full-scale is recommended):

1. A picture of a particular spiral feature of the Mandelbrot set.

2. 4 samples per pixel.

3. 25 samples per pixel.

4. 400 samples per pixel.

Super sampling / full-scene anti-aliasing

Super sampling anti-aliasing (SSAA),[1] also called full-scene anti-aliasing (FSAA),[2] is used to avoid aliasing (or "jaggies") on full-screen images.[3] SSAA was the first type of anti-aliasing available with early video cards. But due to its tremendous computational cost and the advent of multisample anti-aliasing (MSAA) support on GPUs, it is no longer widely used in real-time applications. MSAA provides somewhat lower graphic quality, but also tremendous savings in computational power.

The resulting image of SSAA may seem softer, and should also appear more realistic. However, while useful for photo-like images, a simple anti-aliasing approach (such as supersampling and then averaging) may actually worsen the appearance of some types of line art or diagrams (making the image appear fuzzy), especially where most lines are horizontal or vertical. In these cases, a prior grid-fitting step may be useful (see hinting).

In general, supersampling is a technique of collecting data points at a greater resolution (usually by a power of two) than the final data resolution. These data points are then combined (down-sampled) to the desired resolution, often just by a simple average. The combined data points have less visible aliasing artifacts (or moiré patterns).

Full-scene anti-aliasing by supersampling usually means that each full frame is rendered at double (2x) or quadruple (4x) the display resolution, and then down-sampled to match the display resolution. Thus, a 2x FSAA would render 4 supersampled pixels for each single pixel of each frame. Rendering at larger resolutions will produce better results; however, more processor power is needed, which can degrade performance and frame rate. Sometimes FSAA is implemented in hardware in such a way that a graphical application is unaware the images are being supersampled and then down-sampled before being displayed.
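A minimal sketch of supersampling for a single pixel, assuming a hypothetical shade(x, y) function that evaluates the scene color at a continuous image position; several jittered sub-pixel samples are averaged into the final value:

    import random

    def supersample_pixel(shade, px, py, samples=4):
        # Average several jittered samples taken inside the pixel square
        # [px, px+1) x [py, py+1); more samples give a better estimate
        # of the average color over the pixel.
        total = 0.0
        for _ in range(samples):
            total += shade(px + random.random(), py + random.random())
        return total / samples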

Object-based anti-aliasing

A graphics rendering system creates an image based on objects constructed of polygonal primitives; the aliasing effects in the image can be reduced by applying an anti-aliasing scheme only to the areas of the image representing silhouette edges of the objects. The silhouette edges are anti-aliased by creating anti-aliasing primitives which vary in opacity. These anti-aliasing primitives are joined to the silhouetted edges, and create a region in the image where the objects appear to blend into the background. The method has some important advantages over classical methods based on the accumulation buffer, since it generates full-scene anti-aliasing in only two passes and does not require the use of additional memory required by the accumulation buffer. Object-based anti-aliasing was first developed at Silicon Graphics for their Indy workstation.

Anti-aliasing and gamma compression

Digital images are usually stored in a gamma-compressed format, but most optical anti-aliasing filters are linear. So to downsample an image in a way that would match optical blurring, one should first convert it to a linear format, then apply the anti-aliasing filter, and finally convert it back to a gamma-compressed format. Using linear arithmetic on a gamma-compressed image results in values which are slightly different from the ideal filter. This error is larger when dealing with high-contrast areas, causing high-contrast areas to become dimmer: bright details (such as a cat's whiskers) become visually thinner, and dark details (such as tree branches) become thicker, relative to the optically anti-aliased image.[4]

Because the conversion to and from a linear format greatly slows down the process, and because the differences are usually subtle, almost all image editing software, including Final Cut Pro, Adobe Photoshop and GIMP, process images in the gamma-compressed domain.
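A minimal sketch of the linear-light workflow just described, assuming a simple power-law gamma of 2.2 (rather than the exact sRGB transfer curve) and a plain box filter standing in for the anti-aliasing filter:

    GAMMA = 2.2

    def downsample_linear(pixels):
        # pixels: gamma-encoded values in [0, 1] to be merged into one.
        linear = [p ** GAMMA for p in pixels]      # decode to linear light
        filtered = sum(linear) / len(linear)       # box filter in linear space
        return filtered ** (1.0 / GAMMA)           # re-encode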

History

Important early works in the history of anti-aliasing include:

• Freeman, H. (March 1974). "Computer processing of line drawing images". ACM Computing Surveys 6 (1): 57–97. doi:10.1145/356625.356627 [5].
• Crow, Franklin C. (November 1977). "The aliasing problem in computer-generated shaded images". Communications of the ACM 20 (11): 799–805. doi:10.1145/359863.359869 [6].
• Catmull, Edwin (August 23–25, 1978). "A hidden-surface algorithm with anti-aliasing". Proceedings of the 5th annual conference on Computer graphics and interactive techniques. pp. 6–11.

References

[4] http://www.4p8.com/eric.brasseur/gamma.html
[5] http://dx.doi.org/10.1145%2F356625.356627
[6] http://dx.doi.org/10.1145%2F359863.359869

External links

• Antialiasing and Transparency Tutorial (http://lunaloca.com/tutorials/antialiasing/): Explains interaction between antialiasing and transparency, especially when dealing with web graphics
• Interpolation and Gamma Correction (http://web.archive.org/web/20050408053948/http://home.no.net/dmaurer/~dersch/gamma/gamma.html): In most real-world systems, gamma correction is required to linearize the response curve of the sensor and display systems. If this is not taken into account, the resultant non-linear distortion will defeat the purpose of anti-aliasing calculations based on the assumption of a linear system response.
• The Future of Anti-Aliasing (http://www.eurogamer.net/articles/digital-foundry-future-of-anti-aliasing): A comparison of the different algorithms MSAA, MLAA, DLAA and FXAA
• (French) Le rôle du filtre anti-aliasing dans les APN (the function of the anti-aliasing filter in dSLRs) (http://www.astrosurf.com/luxorion/apn-anti-aliasing.htm)


Ambient occlusion

In computer graphics, ambient occlusion attempts to approximate the way light radiates in real life, especially off what are normally considered non-reflective surfaces. Unlike local methods like Phong shading, ambient occlusion is a global method, meaning the illumination at each point is a function of other geometry in the scene. However, it is a very crude approximation to full global illumination. The soft appearance achieved by ambient occlusion alone is similar to the way an object appears on an overcast day.

Implementation

Ambient occlusion is related to accessibility shading, which determines appearance based on how easy it is for a surface to be touched by various elements (e.g., dirt, light, etc.). It has been popularized in production animation due to its relative simplicity and efficiency. In the industry, ambient occlusion is often referred to as "sky light".[citation needed]

The ambient occlusion shading model has the nice property of offering a better perception of the 3D shape of the displayed objects. This was shown in a paper where the authors report the results of perceptual experiments showing that depth discrimination under diffuse uniform sky lighting is superior to that predicted by a direct lighting model.[1]

The occlusion A(p) at a point p on a surface with normal n can be computed by integrating the visibility function over the hemisphere Ω with respect to projected solid angle:

    A(p) = (1/π) ∫_Ω V(p, ω) (n · ω) dω

where V(p, ω) is the visibility function at p, defined to be zero if p is occluded in the direction ω and one otherwise, and dω is the infinitesimal solid angle step of the integration variable ω. A variety of techniques are used to approximate this integral in practice: perhaps the most straightforward way is to use the Monte Carlo method by casting rays from the point p and testing for intersection with other scene geometry (i.e., ray casting). Another approach (more suited to hardware acceleration) is to render the view from p by rasterizing black geometry against a white background and taking the (cosine-weighted) average of rasterized fragments. This approach is an example of a "gathering" or "inside-out" approach, whereas other algorithms (such as depth-map ambient occlusion) employ "scattering" or "outside-in" techniques.

In addition to the ambient occlusion value, a "bent normal" vector is often generated, which points in the average direction of unoccluded samples. The bent normal can be used to look up incident radiance from an environment map to approximate image-based lighting. However, there are some situations in which the direction of the bent normal is a misrepresentation of the dominant direction of illumination, e.g.:


In this example the bent normal Nb has an unfortunate direction, since it is pointing at an occluded surface.

In this example, light may reach the point p only from the left or right sides, but the bent normal points to the average of those two sources, which is, unfortunately, directly toward the obstruction.
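Returning to the Monte Carlo approximation of the occlusion integral above: with cosine-weighted sampling of directions about the normal, the (n · ω)/π factor is absorbed into the sample distribution, so the estimate reduces to the fraction of unoccluded sample directions. A minimal sketch, assuming hypothetical helpers for cosine-weighted hemisphere sampling and for an occlusion ray query against the scene:

    def ambient_occlusion(p, n, sample_hemisphere, is_occluded, samples=64):
        # sample_hemisphere(n): cosine-weighted random direction about n
        # is_occluded(p, w): does a ray from p in direction w hit geometry?
        visible = 0
        for _ in range(samples):
            w = sample_hemisphere(n)
            if not is_occluded(p, w):
                visible += 1
        return visible / samples  # approximates A(p)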

Recognition

In 2010, Hayden Landis, Ken McGaugh and Hilmar Koch were awarded a Scientific and Technical Academy Award for their work on ambient occlusion rendering.[2]

References

[2] Oscar 2010: Scientific and Technical Awards (http://www.altfg.com/blog/awards/oscar-2010-scientific-and-technical-awards-489/), Alt Film Guide, Jan 7, 2010

External links

• Depth Map based Ambient Occlusion (http://www.andrew-whitehurst.net/amb_occlude.html)
• NVIDIA's accurate, real-time Ambient Occlusion Volumes (http://research.nvidia.com/publication/ambient-occlusion-volumes)
• Assorted notes about ambient occlusion (http://www.cs.unc.edu/~coombe/research/ao/)
• Ambient Occlusion Fields (http://www.tml.hut.fi/~janne/aofields/) - real-time ambient occlusion using cube maps
• PantaRay ambient occlusion used in the movie Avatar (http://research.nvidia.com/publication/pantaray-fast-ray-traced-occlusion-caching-massive-scenes)
• Fast Precomputed Ambient Occlusion for Proximity Shadows (http://hal.inria.fr/inria-00379385) - real-time ambient occlusion using volume textures
• Dynamic Ambient Occlusion and Indirect Lighting (http://download.nvidia.com/developer/GPU_Gems_2/GPU_Gems2_ch14.pdf) - a real-time self ambient occlusion method from Nvidia's GPU Gems 2 book
• GPU Gems 3: Chapter 12. High-Quality Ambient Occlusion (http://http.developer.nvidia.com/GPUGems3/gpugems3_ch12.html)
• ShadeVis (http://vcg.sourceforge.net/index.php/ShadeVis) - an open source tool for computing ambient occlusion
• xNormal (http://www.xnormal.net) - a free normal mapper/ambient occlusion baking application


• 3dsMax Ambient Occlusion Map Baking (http://www.mrbluesummers.com/893/video-tutorials/baking-ambient-occlusion-in-3dsmax-monday-movie) - demo video about preparing ambient occlusion in 3dsMax

Caustics

In optics, a caustic or caustic network[1] is the envelope of light rays reflected or refracted by a curved surface or object, or the projection of that envelope of rays on another surface.[] The caustic is a curve or surface to which each of the light rays is tangent, defining a boundary of an envelope of rays as a curve of concentrated light.[] Therefore, in the image to the right (caustics produced by a glass of water), the caustics can be the patches of light or their bright edges. These shapes often have cusp singularities.

Explanation

Concentration of light, especially sunlight, can burn. The word caustic, in fact, comes from the Greek καυστός, burnt, via the Latin causticus, burning.

A common situation where caustics are visible is when light shines on a drinking glass. The glass casts a shadow, but also produces a curved region of bright light. In ideal circumstances (including perfectly parallel rays, as if from a point source at infinity), a nephroid-shaped patch of light can be produced.[2] Rippling caustics are commonly formed when light shines through waves on a body of water.

Nephroid caustic at bottom of tea cup

Another familiar caustic is the rainbow.[3][4] Scattering of light by raindrops causes different wavelengths of light to be refracted into arcs of differing radius, producing the bow.

Caustics made by the surface of water

Computer graphics

In computer graphics, most modern rendering systems support caustics. Some of them even support volumetric caustics. This is accomplished by ray tracing the possible paths of the light beam through the glass, accounting for the refraction and reflection. Photon mapping is one implementation of this. The focus of most computer graphics systems is aesthetics rather than physical accuracy.

Some computer graphics systems work by "forward ray tracing", wherein photons are modeled as coming from a light source and bouncing around the environment according to rules. Caustics are formed in the regions where sufficient photons strike a surface, causing it to be brighter than the average area in the scene (as in a computer-generated image of a wine glass ray traced with photon mapping to simulate caustics). "Backward ray tracing" works in the reverse manner, beginning at the surface and determining if there is a direct path to the light source.[5] Some examples of 3D ray-traced caustics can be found here [6].

References

[1] Lynch DK and Livingston W (2001). Color and Light in Nature. Cambridge University Press. ISBN 978-0-521-77504-5. Chapter 3.16, The caustic network; Google books preview (http://books.google.com/books?id=4Abp5FdhskAC&pg=PA93&lpg=PA93&dq=Caustic+Network&source=bl&ots=bN-ULVuyq3&sig=VHt0Y8UFxFOaoDBL8E_gmVxgoIg&hl=en&ei=7qRgSpufMtW-lAedlpnRCQ&sa=X&oi=book_result&ct=result&resnum=4)
[2] Circle Catacaustic (http://mathworld.wolfram.com/CircleCatacaustic.html). Wolfram MathWorld. Retrieved 2009-07-17.
[3] Rainbow caustics (http://atoptics.co.uk/fz552.htm)
[4] Caustic fringes (http://atoptics.co.uk/fz564.htm)
[5] http://http.developer.nvidia.com/GPUGems/gpugems_ch02.html
[6] http://www.theeshadow.com/h/caustic/

• Born, Max; and Wolf, Emil (1999). Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (7th ed.). Cambridge University Press. ISBN 0-521-64222-1.
• Nye, John (1999). Natural Focusing and Fine Structure of Light: Caustics and Wave Dislocations. CRC Press. ISBN 978-0-7503-0610-2.

Further reading

• Ferraro, Pietro (1996). "What a caustic!". The Physics Teacher 34 (9): 572. Bibcode: 1996PhTea..34..572F (http://adsabs.harvard.edu/abs/1996PhTea..34..572F). doi: 10.1119/1.2344572 (http://dx.doi.org/10.1119/1.2344572).


Subsurface scattering

Subsurface scattering (or SSS) is a mechanism of light transport in which light penetrates the surface of a translucent object, is scattered by interacting with the material, and exits the surface at a different point. The light will generally penetrate the surface and be reflected a number of times at irregular angles inside the material, before passing back out of the material at an angle other than the angle it would have if it had been reflected directly off the surface. Subsurface scattering is important in 3D computer graphics, being necessary for the realistic rendering of materials such as marble, skin, and milk.

Direct surface scattering (left), plus subsurface scattering (middle), create the final image on the right.

Rendering Techniques

Example of Subsurface scattering made in Blender software.

Most materials used in real-time computer graphics today only account for the interaction of light at the surface of an object. In reality, many materials are slightly translucent: light enters the surface; is absorbed, scattered and re-emitted, potentially at a different point. Skin is a good case in point; only about 6% of reflectance is direct, 94% is from subsurface scattering.[] An inherent property of semitransparent materials is absorption. The further through the material light travels, the greater the proportion absorbed. To simulate this effect, a measure of the distance the light has traveled through the material must be obtained.

Depth Map based SSS

One method of estimating this distance is to use depth maps[], in a manner similar to shadow mapping. The scene is rendered from the light's point of view into a depth map, so that the distance to the nearest surface is stored. The depth map is then projected onto the scene using standard projective texture mapping and the scene re-rendered. In this pass, when shading a given point, the distance from the light at the point the ray entered the surface can be obtained by a simple texture lookup (illustrated by the figure "Depth estimation using depth maps"). By subtracting this value from the point the ray exited the object, we can gather an estimate of the distance the light has traveled through the object.

The measure of distance obtained by this method can be used in several ways. One such way is to use it to index directly into an artist-created 1D texture that falls off exponentially with distance. This approach, combined with other more traditional lighting models, allows the creation of different materials such as marble, jade and wax.

Potentially, problems can arise if models are not convex, but depth peeling[] can be used to avoid the issue. Similarly, depth peeling can be used to account for varying densities beneath the surface, such as bone or muscle, to give a more accurate scattering model.

As can be seen in the image of the wax head to the right, light isn't diffused when passing through an object using this technique; back features are clearly shown. One solution to this is to take multiple samples at different points on the surface of the depth map. Alternatively, a different approach to approximation can be used, known as texture-space diffusion.
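Before turning to that alternative, a minimal sketch of the depth-map distance estimate and the exponential falloff it typically drives; the absorption coefficient and function names here are illustrative assumptions:

    import math

    def distance_inside(entry_depth, exit_distance):
        # entry_depth: depth-map value where the light ray entered the surface
        # exit_distance: distance from the light to the shaded (exit) point
        return max(0.0, exit_distance - entry_depth)

    def transmittance(d, absorption=2.0):
        # Exponential falloff with distance traveled inside the material,
        # standing in for the artist-created 1D falloff texture.
        return math.exp(-absorption * d)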

Texture Space Diffusion

As noted at the start of the section, one of the more obvious effects of subsurface scattering is a general blurring of the diffuse lighting. Rather than arbitrarily modifying the diffuse function, diffusion can be more accurately modeled by simulating it in texture space. This technique was pioneered in rendering faces in The Matrix Reloaded,[] but has recently fallen into the realm of real-time techniques.

The method unwraps the mesh of an object using a vertex shader, first calculating the lighting based on the original vertex coordinates. The vertices are then remapped using the UV texture coordinates as the screen position of the vertex, suitably transformed from the [0, 1] range of texture coordinates to the [-1, 1] range of normalized device coordinates. By lighting the unwrapped mesh in this manner, we obtain a 2D image representing the lighting on the object, which can then be processed and reapplied to the model as a light map. To simulate diffusion, the light map texture can simply be blurred. Rendering the lighting to a lower-resolution texture in itself provides a certain amount of blurring. The amount of blurring required to accurately model subsurface scattering in skin is still under active research, but performing only a single blur poorly models the true effects.[] To emulate the wavelength-dependent nature of diffusion, the samples used during the (Gaussian) blur can be weighted by channel. This is somewhat of an artistic process. For human skin, the broadest scattering is in red, then green, and blue has very little scattering.

A major benefit of this method is its independence of screen resolution; shading is performed only once per texel in the texture map, rather than for every pixel on the object. An obvious requirement is thus that the object have a good UV mapping, in that each point on the texture must map to only one point of the object. Additionally, the use of texture space diffusion provides one of the several factors that contribute to soft shadows, alleviating one cause of the realism deficiency of shadow mapping.
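A minimal sketch of the per-channel blur described above, reduced to one dimension for brevity; the sigma values per channel are illustrative (red widest, blue narrowest), since the weighting is an artistic choice:

    import math

    def gaussian_kernel(sigma, radius=4):
        k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
             for i in range(-radius, radius + 1)]
        total = sum(k)
        return [v / total for v in k]

    def blur_row(row, sigma, radius=4):
        # 1D Gaussian blur of one color channel of a light-map row;
        # sample positions are clamped at the edges of the texture.
        kernel = gaussian_kernel(sigma, radius)
        out = []
        for i in range(len(row)):
            acc = 0.0
            for j, w in enumerate(kernel):
                idx = min(max(i + j - radius, 0), len(row) - 1)
                acc += w * row[idx]
            out.append(acc)
        return out

    # Illustrative per-channel widths for skin: red scatters farthest.
    # red_blurred   = blur_row(red_row,   sigma=3.0)
    # green_blurred = blur_row(green_row, sigma=1.5)
    # blue_blurred  = blur_row(blue_row,  sigma=0.5)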

External links

• Henrik Wann Jensen's subsurface scattering website (http://graphics.ucsd.edu/~henrik/images/subsurf.html)
• An academic paper by Jensen on modeling subsurface scattering (http://graphics.ucsd.edu/~henrik/papers/bssrdf/)
• Maya Tutorial - Subsurface Scattering: Using the Misss_Fast_Simple_Maya shader (http://www.highend3d.com/maya/tutorials/rendering_lighting/shaders/135.html)
• 3d Studio Max Tutorial - The definitive guide to using subsurface scattering in 3dsMax (http://www.mrbluesummers.com/3510/3d-tutorials/3dsmax-mental-ray-sub-surface-scattering-guide/)


Motion blur

Motion blur is the apparent streaking of rapidly moving objects in a still image or a sequence of images such as a movie or animation. It results when the image being recorded changes during the recording of a single frame, either due to rapid movement or long exposure.

Applications of motion blur

Photography

When a camera creates an image, that image does not represent a single instant of time. Because of technological constraints or artistic requirements, the image may represent the scene over a period of time. Most often this exposure time is brief enough that the image captured by the camera appears to capture an instantaneous moment, but this is not always so, and a fast-moving object or a longer exposure time may result in blurring artifacts which make this apparent.

As objects in a scene move, an image of that scene must represent an integration of all positions of those objects, as well as the camera's viewpoint, over the period of exposure determined by the shutter speed. In such an image, any object moving with respect to the camera will look blurred or smeared along the direction of relative motion. This smearing may occur on an object that is moving or on a static background if the camera is moving. In a film or television image, this looks natural because the human eye behaves in much the same way.

An example of motion blur showing a London bus passing a telephone box in London

Because the effect is caused by the relative motion between the camera and the objects in the scene, motion blur may be avoided by panning the camera to track those moving objects. In this case, even with long exposure times, the objects will appear sharper, and the background more blurred.

1920s example of motion blur


Animation

In computer animation (2D or 3D), motion blur is simulated on each frame so that the rendering looks as though it had been captured by a real video camera during fast motion of the camera or of the "cinematized" objects, or simply to make the motion look more natural or smoother.

Two animations rotating around a figure, with motion blur (left) and without

Without this simulated effect each frame shows a perfect instant in time (analogous to a camera with an infinitely fast shutter), with zero motion blur. This is why a video game with a frame rate of 25-30 frames per second will seem staggered, while natural motion filmed at the same frame rate appears rather more continuous. Many modern video games feature motion blur, especially vehicle simulation games. Some of the better-known games that utilise this are the recent Need for Speed titles, Unreal Tournament III, and The Legend of Zelda: Majora's Mask, among many others.

There are two main methods used in video games to achieve motion blur: cheaper full-screen effects, which typically only take camera movement into account (and sometimes how fast the camera is moving in 3-D space, to create a radial blur), and more "selective" or "per-object" motion blur, which typically uses a shader either to create a velocity buffer that marks motion intensity for a motion blurring effect, or to perform geometry extrusion.

In pre-rendered computer animation, such as CGI movies, realistic motion blur can be drawn because the renderer has more time to draw each frame. Temporal anti-aliasing produces frames as a composite of many instants. Motion lines in cel animation are drawn in the same direction as motion blur and perform much the same duty. Go motion is a variant of stop motion animation that moves the models during the exposure to create a less staggered effect.
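A minimal sketch of the "composite of many instants" idea used in temporal anti-aliasing, assuming a hypothetical render(t) function that returns a frame as a flat list of pixel values for a given moment in time:

    def motion_blurred_frame(render, t_open, t_close, samples=8):
        # Average several sub-frame renders across the shutter interval
        # [t_open, t_close] to simulate motion blur.
        accum = None
        for i in range(samples):
            t = t_open + (t_close - t_open) * (i + 0.5) / samples
            frame = render(t)
            if accum is None:
                accum = list(frame)
            else:
                accum = [a + f for a, f in zip(accum, frame)]
        return [a / samples for a in accum]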

Computer graphics

In 2D computer graphics, motion blur is an artistic filter that converts the digital image[1]/bitmap[2]/raster image in order to simulate the effect. Many graphical software products (e.g. Adobe Photoshop or GIMP) offer simple motion blur filters. However, for advanced motion blur filtering including curves or non-uniform speed adjustment, specialized software products are necessary.[3]

Biology

When an animal's eye is in motion, the image will suffer from motion blur, resulting in an inability to resolve details. To cope with this, humans generally alternate between saccades (quick eye movements) and fixation (focusing on a single point). Saccadic masking makes motion blur during a saccade invisible. Similarly, smooth pursuit allows the eye to track a target in rapid motion, eliminating motion blur of that target instead of the scene.


Negative effects of motion blur

In televised sports, where conventional cameras expose pictures 25 or 30 times per second, motion blur can be inconvenient because it obscures the exact position of a projectile or athlete in slow motion. For this reason special cameras are often used which eliminate motion blurring by taking rapid exposures on the order of 1/1000 of a second, and then transmitting them over the course of the next 1/25 or 1/30 of a second. Although this gives sharper slow motion replays, it can look strange at normal speed because the eye expects to see motion blurring and does not.

Conversely, extra motion blur can unavoidably occur on displays when it is not desired. Some video displays (especially LCDs) exhibit motion blur during fast motion, which can add perceived motion blurring above and beyond the pre-existing motion blur in the video material. See display motion blur.

A taxicab starting to drive off has blurred the images of faces.

Sometimes, motion blur can be removed from images with the help of deconvolution.

Some video game players claim that artificial motion blur causes headaches.[4] For some games, it is recommended to disable motion blur and to play at a high frame rate on a high-refresh-rate screen; that way it becomes more natural to pinpoint objects on the screen (useful when one has to react to them in small time windows). Some players argue that motion blur should come naturally from the eyes, and that screens shouldn't need to simulate the effect.

Restoration

An example of blurred image restoration with Wiener deconvolution:

From left: original image, blurred image, and de-blurred image. Notice some artifacts in the de-blurred image.
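A minimal sketch of Wiener deconvolution in one dimension, assuming the blur kernel is known; the constant k stands in for the noise-to-signal power ratio, and NumPy's FFT is used for the frequency-domain division:

    import numpy as np

    def wiener_deconvolve(blurred, kernel, k=0.01):
        # Wiener filter: F_hat = G * conj(H) / (|H|^2 + k), where G is
        # the spectrum of the blurred signal and H that of the kernel.
        n = len(blurred)
        H = np.fft.fft(kernel, n)
        G = np.fft.fft(blurred)
        F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft(F_hat))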


References

[1] Motion Blur Effect (http://www.tutorialsroom.com/tutorials/graphics/motion_blur.html), TutorialsRoom
[2] Photoshop - Motion Blur (http://artist.tizag.com/photoshopTutorial/motionblur.php), tizag.com
[3] Traditional motion blur methods (http://www.virtualrig-studio.com/traditional-motion-blur.php), virtualrig-studio.com

Gallery

Motion blur is frequently employed in sports photography (particularly motor sports) to convey a sense of speed. To achieve this effect it is necessary to use a slow shutter speed and pan the lens of the camera in time with the motion of the object

Taken aboard an airplane turning above San Jose at night. The city lights form concentric strips.

The traffic on this street leaves brilliant streaks due to the low shutter speed of the camera and the cars' relatively fast speed.

A baby waving her arms violently up and down.

Strickland Falls in Tasmania, Australia, taken using a neutral density filter. ND filters reduce light of all colors or wavelengths equally, allowing an increase in aperture and decrease in shutter speed without overexposing the image. To create the motion blur seen here, the shutter must be kept open for a relatively long time, making it necessary to reduce the amount of light coming through the lens.

Long exposure photograph of moths showing exaggerated rod effect.


Beam tracing

Beam tracing is an algorithm to simulate wave propagation. It was developed in the context of computer graphics to render 3D scenes, but it has also been used in other areas such as acoustics and electromagnetism simulations.

Beam tracing is a derivative of the ray tracing algorithm that replaces rays, which have no thickness, with beams. Beams are shaped like unbounded pyramids, with (possibly complex) polygonal cross sections. Beam tracing was first proposed by Paul Heckbert and Pat Hanrahan.[1]

In beam tracing, a pyramidal beam is initially cast through the entire viewing frustum. This initial viewing beam is intersected with each polygon in the environment, typically from nearest to farthest. Each polygon that intersects with the beam must be visible, and is removed from the shape of the beam and added to a render queue. When a beam intersects with a reflective or refractive polygon, a new beam is created in a similar fashion to ray tracing.

A variant of beam tracing casts a pyramidal beam through each pixel of the image plane. This is then split up into sub-beams based on its intersection with scene geometry. Reflection and transmission (refraction) rays are also replaced by beams. This sort of implementation is rarely used, as the geometric processes involved are much more complex and therefore expensive than simply casting more rays through the pixel. Cone tracing is a similar technique using a cone instead of a complex pyramid.

Beam tracing solves certain problems related to sampling and aliasing, which can plague conventional ray tracing approaches.[2] Since beam tracing effectively calculates the path of every possible ray within each beam[3] (which can be viewed as a dense bundle of adjacent rays), it is not as prone to under-sampling (missing rays) or over-sampling (wasted computational resources). The computational complexity associated with beams has made them unpopular for many visualization applications. In recent years, Monte Carlo algorithms like distributed ray tracing and Metropolis light transport have become more popular for rendering calculations.

A 'backwards' variant of beam tracing casts beams from the light source into the environment. Similar to backwards ray tracing and photon mapping, backwards beam tracing may be used to efficiently model lighting effects such as caustics.[4] Recently the backwards beam tracing technique has also been extended to handle glossy-to-diffuse material interactions (glossy backward beam tracing), such as from polished metal surfaces.[5]

Beam tracing has been successfully applied to the fields of acoustic modelling[6] and electromagnetic propagation modelling.[7] In both of these applications, beams are used as an efficient way to track deep reflections from a source to a receiver (or vice versa). Beams can provide a convenient and compact way to represent visibility. Once a beam tree has been calculated, one can use it to readily account for moving transmitters or receivers. Beam tracing is related in concept to cone tracing.

References

[1] P. S. Heckbert and P. Hanrahan, "Beam tracing polygonal objects" (http://www.eng.utah.edu/~cs7940/papers/p119-heckbert.pdf), Computer Graphics 18(3), 119-127 (1984).
[2] A. Lehnert, "Systematic errors of the ray-tracing algorithm", Applied Acoustics 38, 207-221 (1993).
[3] Steven Fortune, "Topological Beam Tracing", Symposium on Computational Geometry 1999: 59-68.
[4] M. Watt, "Light-water interaction using backwards beam tracing", in Proceedings of the 17th annual conference on Computer graphics and interactive techniques (SIGGRAPH '90), 377-385 (1990).
[5] B. Duvenhage, K. Bouatouch, and D.G. Kourie, "Exploring the use of Glossy Light Volumes for Interactive Global Illumination", in Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, 2010.
[6] T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West, "A beam tracing approach to acoustic modelling for interactive virtual environments", in Proceedings of the 25th annual conference on Computer graphics and interactive techniques (SIGGRAPH '98), 21-32 (1998).
[7] Steven Fortune, "A Beam-Tracing Algorithm for Prediction of Indoor Radio Propagation", in WACG 1996: 157-166.


Cone tracing

Principles

Cone tracing[1] and beam tracing are derivatives of the ray tracing algorithm that replace rays, which have no thickness, with thick rays. This is done for two reasons:

From a physics of light transport point of view

The energy reaching the pixel comes from the whole solid angle by which the eye sees the scene, not from its central sample. This yields the key notion of the pixel footprint on surfaces or in texture space, which is the back projection of the pixel onto the scene. The description above corresponds to the pinhole-camera simplified optics classically used in computer graphics. Note that this approach can also represent a lens-based camera, and thus depth of field effects, using a cone whose cross-section decreases from the lens size to zero at the focal plane and then increases. Moreover, a real optical system does not focus on exact points because of diffraction and imperfections. This can be modeled as a point spread function weighted within a solid angle larger than the pixel.

From a signal processing point of view

Ray-traced images suffer from strong aliasing because the "projected geometric signal" has very high frequencies exceeding the Nyquist-Shannon maximal frequency that can be represented using the pixel sampling rate, so the input signal has to be low-pass filtered - i.e., integrated over a solid angle around the pixel center. Note that, contrary to intuition, the filter should not be the pixel footprint, since a box filter has poor spectral properties. Conversely, the ideal sinc function is not practical, having infinite support and possibly negative values. A Gaussian or a Lanczos filter is considered a good compromise.

Computer Graphics models

The early cone and beam tracing papers rely on different simplifications: the first considers a circular cross-section and treats the intersection with various possible shapes; the second treats an accurate pyramidal beam through the pixel and along a complex path, but it only works for polyhedral shapes.

Cone tracing solves certain problems related to sampling and aliasing, which can plague conventional ray tracing. However, cone tracing creates a host of problems of its own. For example, just intersecting a cone with scene geometry leads to an enormous variety of possible results. For this reason, cone tracing has remained mostly unpopular. In recent years, increases in computer speed have made Monte Carlo algorithms like distributed ray tracing - i.e. stochastic explicit integration of the pixel - much more used than cone tracing, because the results are exact provided enough samples are used. But the convergence is so slow that, even in the context of off-line rendering, a huge amount of time is required to avoid noise.

Differential cone-tracing, considering a differential angular neighborhood around a ray, avoids the complexity of exact geometry intersection but requires a LOD representation of the geometry and appearance of the objects. MIP-mapping is an approximation of it limited to the integration of the surface texture within a cone footprint. Differential ray tracing[2] extends it to textured surfaces viewed through complex paths of cones reflected or refracted by curved surfaces.

The full differential cone-tracing - including geometry and appearance filtering - is mostly applicable as volumetric cone-tracing,[3] relying on 3D MIP-mapping. A recent SVO implementation by Crassin et al.[4] has generalized this approach to global illumination and adapted it to the GPU, showing remarkable-quality images at 25-70 frames per second.[5] The game engine Unreal Engine 4 uses this technique for real-time global illumination, replacing the need to precompute lighting.[6]
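A minimal sketch of the link between a cone's footprint and MIP level selection mentioned above, assuming a square texture, an isotropic footprint, and the usual base-2 log relationship between footprint size in texels and the prefiltered level:

    import math

    def mip_level(cone_radius, texel_size, max_level):
        # Choose the MIP level whose prefiltered texel size best matches
        # the cone's footprint on the surface (diameter = 2 * radius).
        footprint_texels = max(1.0, (2.0 * cone_radius) / texel_size)
        level = math.log2(footprint_texels)
        return min(max(level, 0.0), float(max_level))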


References

[1] Amanatides. Ray Tracing with Cones. SIGGRAPH '84. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.129.582
[2] Homan Igehy. Tracing Ray Differentials. http://www-graphics.stanford.edu/papers/trd/
[3] Fabrice Neyret. Modeling, Animating and Rendering Complex Scenes using Volumetric Textures. http://hal.inria.fr/inria-00537523
[4] Cyril Crassin, Fabrice Neyret, Miguel Sainz, Simon Green, Elmar Eisemann. Interactive Indirect Illumination Using Voxel Cone Tracing. http://www.icare3d.org/research-cat/publications/interactive-indirect-illumination-using-voxel-cone-tracing.html
[5] http://www.youtube.com/watch?v=fAsg_xNzhcQ

Ray tracing hardware

Ray tracing hardware is special-purpose computer hardware designed for accelerating ray tracing calculations.

Introduction: Ray tracing and rasterization

The problem of rendering 3D graphics can be conceptually presented as finding all intersections between a set of "primitives" (typically triangles or polygons) and a set of "rays" (typically one or more per pixel).[1] Up to 2010, all typical graphics acceleration boards, called graphics processing units (GPUs), used rasterization algorithms. The ray tracing algorithm solves the rendering problem in a different way: in each step, it finds all intersections of a ray with a set of relevant primitives of the scene.

Both approaches have their own benefits and drawbacks. Rasterization can be performed using devices based on a stream computing model, one triangle at a time, and access to the complete scene is needed only once.[2] The drawback of rasterization is that non-local effects, required for an accurate simulation of a scene, such as reflections and shadows, are difficult, and refractions[3] nearly impossible to compute. The ray tracing algorithm is inherently suitable for scaling by parallelization of individual ray renders.[4] However, anything other than ray casting requires recursion of the ray tracing algorithm (and random access to the scene graph) to complete the analysis,[5] since reflected, refracted, and scattered rays require that various parts of the scene be re-accessed in a way not easily predicted. But it can easily compute various kinds of physically correct effects, providing a much more realistic impression than rasterization.[6]

Whilst the complexity of the computation for rasterization scales linearly with the number of triangles,[7][8] the complexity of a properly implemented ray tracing algorithm scales logarithmically;[9] this is due to objects (triangles and collections of triangles) being placed into BSP trees or similar structures, and only being analyzed if a ray intersects with the bounding volume of the binary space partition.[10][11]
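A minimal sketch of the bounding-volume idea behind that logarithmic scaling, assuming axis-aligned bounding boxes, a binary tree of hypothetical nodes (box_min, box_max, left/right children or a primitive list), and a ray direction with no zero components so the inverse direction is well defined:

    def ray_hits_box(origin, inv_dir, box_min, box_max):
        # Slab test: the ray hits the box iff the per-axis entry/exit
        # intervals overlap.
        t_near, t_far = 0.0, float("inf")
        for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
            t1, t2 = (lo - o) * inv, (hi - o) * inv
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far

    def collect_candidates(node, origin, inv_dir, hits):
        # Visit only subtrees whose bounding volume the ray intersects,
        # which is what keeps the traversal roughly logarithmic.
        if not ray_hits_box(origin, inv_dir, node.box_min, node.box_max):
            return
        if node.is_leaf:
            hits.extend(node.primitives)  # exact triangle tests happen later
        else:
            collect_candidates(node.left, origin, inv_dir, hits)
            collect_candidates(node.right, origin, inv_dir, hits)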

Implementations

Various implementations of ray tracing hardware have been created, both experimental and commercial:

• (2002–2009) ART VPS company (founded 2002[12]), situated in the UK, sold ray tracing hardware for off-line rendering. The hardware used multiple specialized processors that accelerated ray-triangle intersection tests. Software provided integration with Maya (see Autodesk Maya) and Max (see Autodesk 3ds Max) data formats, and utilized the Renderman scene description language for sending data to the processors (the .RIB or Renderman Interface Bytestream file format).[13] As of 2010, ARTVPS no longer produces ray tracing hardware but continues to produce rendering software.[12]
• (2002) The computer graphics laboratory at Saarland University, headed by Dr.-Ing. Slusallek, has produced prototype ray tracing hardware, including the FPGA-based, fixed-function, data-driven SaarCOR (Saarbrücken's Coherence Optimized Ray Tracer) chip[14][15][16] and a more advanced programmable (2005) processor, the Ray Processing Unit (RPU).[17]


• (1996) Researchers at Princeton University proposed using DSPs to build a hardware unit for ray tracing acceleration, named "TigerSHARK".[18]
• Implementations of volume rendering using ray tracing algorithms on custom hardware have also been proposed (2002, VIZARD II[19]) or built (1999, the vg500 / VolumePro ASIC-based system[20][21]).
• Caustic Graphics[22] have produced a plug-in card, the "CausticOne" (2010), that accelerates global illumination and other ray-based rendering processes when coupled to a PC CPU and GPU. The hardware is designed to organize scattered rays (typically produced by global illumination problems) into more coherent sets (lower spatial or angular spread) for further processing by an external processor.[23]
• Siliconarts[24] developed dedicated real-time ray tracing hardware (2010) and announced RayCore (2011), described as the world's first real-time ray tracing semiconductor IP.

References, notes and further reading

Notes

[1] Introduction to real time raytracing (https://graphics.cg.uni-saarland.de/fileadmin/cguds/courses/old_courses/OpenRT/Siggraph05/Slusallek_Intro.ppt), Course notes, Course 41, Philipp Slusallek, Peter Shirley, Bill Mark, Gordon Stoll, Ingo Wald, SIGGRAPH 2005 (PowerPoint presentation), Slide 26: Comparison Rasterization vs. Ray Tracing (Definitions), graphics.cg.uni-saarland.de
[2] For additional visualisations such as shadows, or reflections such as produced by a large flat body of water, an additional pass of the scene graph is required for each effect.
[3] Chris Wyman's Research: Interactive Refractions (http://www.cs.uiowa.edu/~cwyman/publications/projects/refractions.html), Department of Computer Science at The University of Iowa, www.cs.uiowa.edu
[4] SaarCOR - A Hardware Architecture for Ray Tracing, Jörg Schmittler, Ingo Wald, Philipp Slusallek, Section 2, "Previous work"
[5] SaarCOR - A Hardware Architecture for Ray Tracing, Jörg Schmittler, Ingo Wald, Philipp Slusallek, Section 3, "The Ray Tracing Algorithm"
[6] Rasterisation methods are capable of generating realistic shadows (including shadows produced by partially transparent objects) and plane reflections easily (as of 2010), but do not easily implement reflections from non-planar surfaces (excluding approximations using normal maps) or refractions.
[7] SaarCOR - A Hardware Architecture for Ray Tracing, Jörg Schmittler, Ingo Wald, Philipp Slusallek, Section 1, "Introduction"
[8] As a first approximation - more advanced implementations, including those that implement occlusion culling or predicated rendering, scale better than linearly for complex (especially highly occluded) scenes. (Note in common APIs: DirectX 10 D3D10_QUERY_OCCLUSION_PREDICATE (http://msdn.microsoft.com/en-us/library/ee415853(VS.85).aspx); in OpenGL 3.0, HP_occlusion_query.)
[9] That is, if X is the number of triangles, then the number of computations to complete the scene is proportional to log(X).
[10] Ray Tracing and Gaming - One Year Later (http://www.pcper.com/article.php?aid=506&type=expert&pid=3), Daniel Pohl, 17/1/2008, via "PCperspective", www.pcper.com
[11] The same methods can be used in rasterization, but the culling is limited to those BSP partitions that lie within the much larger viewing frustum; with ray tracing, the viewing frustum is replaced by the volume enclosed by a single ray (or ray bundle).
[12] About ArtVPS (http://www.artvps.com/content/artvps), www.artvps.com
[13] ALL ABOUT ARTVPS, PURE CARDS, RENDERDRIVES and RAYBOX (http://www.protograph.co.uk/artvps.html), Mark Segasby (Protograph Ltd), www.protograph.co.uk
[18] A Hardware Accelerated Ray-tracing Engine (http://cscott.net/Publications/tigershark-thesis.pdf), Greg Humphreys, C. Scott Ananian (Independent Work), Department of Computer Science, Princeton University, 14/5/1996, cscott.net
[19] VIZARD II: An FPGA-based Interactive Volume Rendering System (http://www.doggetts.org/michael/Meissner-2002-VIZARDII.pdf), Urs Kanus, Gregor Wetekam, Johannes Hirche, Michael Meißner, University of Tübingen / Philips Research Hamburg, Graphics Hardware (2002), pp. 1-11, via www.doggetts.org
[20] The vg500 Real-Time Ray-Casting ASIC (http://www.hotchips.org/archives/hc11/3_Tue/hc99.s5.4.Pfister.pdf), Hanspeter Pfister, MERL - A Mitsubishi Electric Research Laboratory, Cambridge MA (USA), www.hotchips.org
[22] Caustic Graphics company website (http://www.caustic.com/), www.caustic.com
[23] Reinventing Ray Tracing (http://www.drdobbs.com/hpc-high-performance-computing/218500694), 15/7/2009, Jonathan Erickson interview with James McCombe of Caustic Graphics, www.drdobbs.com
[24] Siliconarts company website (http://www.siliconarts.com), www.siliconarts.com



Further reading

• State of the Art in Interactive Ray Tracing (http://www.flipcode.net/archives/State-of-the-Art in interactive ray tracing.pdf), Ingo Wald and Philipp Slusallek, Computer Graphics Group, Saarland University; review article covering work up to 2001



Ray Tracers

3Delight

Developer(s): DNA Research
Stable release: 9.0.84 / February 2, 2011
Operating system: Windows, Mac OS X, Linux
Type: 3D computer graphics
Licence: Proprietary
Website: www.3delight.com [1]

3Delight is 3D computer graphics software that runs on Microsoft Windows, OS X and Linux. It is developed by DNA Research, commonly shortened to DNA, a subsidiary of Taarna Studios. It is a photorealistic, RenderMan-compliant offline renderer.

History

Work on 3Delight started in 1999. The renderer first became publicly available in 2000.[2] 3Delight was the first RenderMan-compliant renderer to combine the REYES algorithm with on-demand ray tracing; the only other RenderMan-compliant renderer capable of ray tracing at the time was BMRT, which was not a REYES renderer. 3Delight was meant to be a commercial product from the beginning. However, DNA decided to make it available free of charge from August 2000 to March 2005 in order to build a user base. During this time, customers using a large number of licenses on their sites or requiring extensive support were asked to work out an agreement with DNA that specified some form of fiscal compensation. In March 2005, the license was changed: the first license was still free, and from the second license onwards the renderer cost US$1,000 per two-thread node and US$1,500 per four-thread node. The first company to license 3Delight commercially, in early 2005, was Rising Sun Pictures. The current licensing scheme is based on the number of threads or cores; the first license, limited to two cores, is free.

Features

3Delight primarily uses the REYES algorithm but is also fully capable of ray tracing and global illumination. The renderer is fully multi-threaded and also supports distributed rendering. This allows for accelerated rendering on multi-CPU hosts or in environments where a large number of computers are joined into a grid. It implements all required capabilities for a RenderMan-compliant renderer, as well as the following optional ones:[3]
• Area light sources
• Depth of field
• Displacement mapping
• Environment mapping
• Global illumination
• Level of detail
• Motion blur
• Programmable shading
• Special camera projections (through the "ray trace hider")
• Ray tracing
• Shadow depth mapping
• Solid modeling
• Texture mapping
• Volume shading

3Delight also supports the following capabilities, which are not part of any capabilities list:
• Photon mapping
• Point clouds
• Hierarchical subdivision surfaces
• NURB curves
• Brick maps (3-dimensional, mip-mapped textures)
• (RIB) Conditionals
• Class-based shaders
• Co-shaders

Modules

3Delight is based on modules. The primary module is the REYES module, which implements a REYES-based renderer. Another module, called "Sabretooth", is used for ray tracing and also supports global illumination calculations through certain shadeops. 3Delight supports explicit ray tracing of camera rays by selecting a different hider, essentially turning the renderer from a hybrid REYES/ray tracer into a full ray tracer. Other features include:
• Extended display subset functionality that allows geometric primitives writing to the same display variable to be rendered to different images. For example, display subsets could be used to render the skin and fur of a creature to two separate images at once, without the fur matting the skin passes.
• Memory-efficient point clouds. Like brick maps, point clouds are organized in a spatial data structure and are loaded lazily, keeping memory requirements as low as possible.
• Procedural geometry is instanced lazily, even during ray tracing, keeping memory requirements as low as possible.
• Displacement shaders can be stacked.
• Displacement shaders can (additionally) be run on the vertices of a geometric primitive, before that primitive is even shaded.
• The gather() shadeop can be used on point clouds and to generate sample distributions from (high dynamic range) images, e.g. for easily combining photon mapping with image-based lighting.
• First-order ray differentials on any ray fired from within a shader.
• A read/write disk cache that allows the renderer to take strain off the network when heavy scene data needs to be repeatedly distributed to clients on a render farm, or image data sent back from such clients to a central storage server.


• A C API that allows running RenderMan Shading Language (RSL) code on arbitrary data, e.g. inside a modelling application.

Version release history

• 3Delight [4] "Blade Runner": October 2011
• 3Delight 9.0.0 [5] "Antonioni": December 2009
• 3Delight 8.5.0 [6] "Bronson": May 2009
• 3Delight 8.0.0 [7] "Midnight Express": October 2008
• 3Delight 7.0.0 [8] "Django": November 2007
• 3Delight 6.5.0 [9] "Ennio": February 2007
• 3Delight 6.0.1 [10] "Argento": November 2006
• 3Delight 5.0.0 [11] "Moroder": February 2006
• 3Delight 4.5.0 [12] "Lucio Fulci": August 2005
• 3Delight 4.0.0 [13] "Indiana": March 2005
• 3Delight 3.0.0
• 3Delight 2.1.0 [14]: June 2004
• 3Delight 2.0.0 [15]: January 2004
• 3Delight 1.0.6beta
• 3Delight 1.0.0beta [16]: January 2003
• 3Delight 0.9.6 [17]: August 2002
• 3Delight 0.9.4 [18]: June 2002
• 3Delight 0.9.2 [19]: December 2001
• 3Delight 0.9.0 [20]: August 2001
• 3Delight 0.8.0 [21]: March 2001
• 3Delight 0.6.0 [22]: September 2000
• 3Delight 0.5.1 [23]: August 2000

Supported platforms

• Apple Mac OS X on the PowerPC and x86 architectures
• GNU/Linux on the x86, x86-64 and Cell architectures
• Microsoft Windows on the x86 and x86-64 architectures

Operating environments

The renderer comes in both 32-bit and 64-bit versions; the latter allows the processing of very large scene datasets.

Discontinued platforms

Platforms supported in the past included:
• Digital Equipment Corporation Digital UNIX on the Alpha architecture
• Silicon Graphics IRIX on the MIPS architecture (might still be supported, on request)
• Sun Microsystems Solaris on the SPARC architecture



Film credits

3Delight has been used for visual effects work on many films. Some notable examples are:
• Assault on Precinct 13
• Bailey's Billions
• Black Christmas
• Blades of Glory
• The Blood Diamond
• Charlotte's Web
• CJ7 / Cheung Gong 7 hou
• The Chronicles of Narnia: The Lion, the Witch and the Wardrobe
• The Chronicles of Riddick
• Cube Zero
• District 9
• Fantastic Four
• Fantastic Four: Rise of the Silver Surfer
• Final Destination 3
• Harry Potter and the Half-Blood Prince
• Harry Potter and the Order of the Phoenix
• Hulk
• The Incredible Hulk
• The Last Mimzy
• The Ruins
• The Seeker: The Dark is Rising
• Terminator Salvation
• Superman Returns
• The Woods
• X-Men: The Last Stand
• X-Men Origins: Wolverine

It was also used to render the following full CG features:
• Adventures in Animation (IMAX 3D featurette)
• Happy Feet Two
• Free Jimmy

References

[1] http://www.3delight.com/
[3] 3Delight Technical Specifications (http://www.3delight.com/en/uploads/docs/3delight/3delight_tech_specs.pdf)
[4] http://www.3delight.com/en/uploads/press-release/3dsp-10.pdf
[5] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/e1f4c1a9891aede7
[6] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/a8615ea70c31b587
[7] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/ab754ad521d41035
[8] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c389d5e90c943d24
[9] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8f4d252ff97b8aaa
[10] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8454e655e24588c8
[11] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/96054503e7fd242a
[12] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/22cac22ce089a235
[13] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c8c7c6337e998e55
[14] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8b7f2b432aad4e21
[15] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/6ed1bad3e15a9c07
[16] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/8857c89e2856a1de


[17] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/e0b9c83a8ef7e433
[18] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c292a7283ae98b0d
[19] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/c9a112d87632314c
[20] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/0b2cbf41ec7f1c95
[21] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/ffd884b847b3f7cc
[22] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/fb1bf705bb874588
[23] http://groups.google.com/group/comp.graphics.rendering.renderman/msg/fb13237fd2bf20ad

External links

• 3Delight home page (http://www.3delight.com/)
• Rodmena Network (http://www.rodmena.com/)

Amiga Reflections

Amiga Reflections is 3D modeling and rendering software developed by Carsten Fuchs for the Commodore Amiga; it was later renamed Monzoom. The first "bookware" release, in 1989, contained a book and a floppy disk: the book was the manual, with some tutorials explaining how a ray tracer works, and the floppy disk contained the software together with some models and examples. In 1992, Carsten Fuchs extended the software with a more advanced modeler and an animation module, the Reflections-Animator. As of version 4.3, in 1998,[1] Amiga Reflections was renamed Monzoom or Monzoom 3D and distributed by Oberland Computer.[2] Monzoom Pro was available on CD with the March/April 2008 issue of the German print magazine Amiga Future.[3][4][5] Monzoom also became available for the PC as shareware.[6]

Rendered with Reflections-Animator

Publications

Books
• Fuchs, Carsten (1992). Amiga reflections animator. Haar bei München: Markt-&-Technik-Verlag. ISBN 978-3-87791-166-2.
• Fuchs, Carsten (January 1989). Amiga reflections. Haar bei München: Markt-&-Technik-Verlag. ISBN 978-3-89090-727-7.

Scientific articles

Glittenberg, Carl (2004). "Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques" [7]. Proceedings of SPIE. Ophthalmic Technologies XIV. San Jose, CA, USA. pp. 275–285. doi:10.1117/12.555626 [8]. Retrieved 2010-03-01. Discusses using 3D software, including Reflektions 4.3 (an alternate name for Reflections/Monzoom), to teach ophthalmology and train for retinal surgery.


References

[1] Listing of Carsten Fuchs's software at Amiga Future: http://www.amigafuture.de/asd_search.php?submit=1&cat_id=13&e=Carsten+Fuchs
[2] Exhibition history, Computer '98, in issue 10/98 of AMIGA-Magazin, by Jörn-Erik Burkert (German): http://www.amiga-magazin.de/magazin/a10-98/koeln.html
[3] Contents listing of March/April 2008, number 71, of Amiga Future magazine (German): http://www.amigafuture.de/kb.php?mode=article&k=2357&highlight=monzoom
[4] Cover of March/April 2008, number 71, of Amiga Future magazine: http://www.amigafuture.de/album_page.php?pic_id=4526
[5] Alternate cover and ToC listing in English (but not from the publisher): http://www.vesalia.de/e_af71.htm
[6] Monzoom for Windows (German): http://software.magnus.de/grafik-video-audio/download/3d-programm-monzoom-s.html
[7] http://spie.org/x648.html?pf=true%0D%0A%09%09%09%09&product_id=555626
[8] http://dx.doi.org/10.1117%2F12.555626

External links

Objects, plugins, images
• Aminet objects and images (http://aminet.net/search?path=pix&desc=reflections)
• Plugins, scripts, etc. for Monzoom (German) (http://www.geoxis.de/monzoom/)

Reviews and tutorials
• Brief review in German (http://www.amigafuture.de/kb.php?mode=article&k=2525&highlight=monzoom), reprinted from Amiga Times 13, which can be downloaded at http://www.amigafuture.de/downloads.php?view=detail&df_id=684
• Listing of tutorials (some bad links), mostly in German (http://www.geoxis.de/monzoom/tutorials/tutorials.htm)


Autodesk 3ds Max


Autodesk 3ds Max
Version 2010 interface with a rendered Utah teapot

Developer(s): Autodesk Media and Entertainment
Stable release: 2014 / March 27, 2013
Operating system: Windows 7 and Windows 8[1]
Platform: x64
Type: 3D computer graphics
License: Trialware
Website: www.autodesk.com/3dsmax [2]

Autodesk 3ds Max, formerly 3D Studio Max, is 3D computer graphics software for making 3D animations, models, and images. It was developed and produced by Autodesk Media and Entertainment. It has modeling capabilities, a flexible plugin architecture and can be used on the Microsoft Windows platform. It is frequently used by video game developers (for titles such as the children's online game Roblox and the Trainz franchise), many TV commercial studios and architectural visualization studios. It is also used for movie effects and movie pre-visualization. In addition to its modeling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.[3]

Early history and releases

The original 3D Studio product was created for the DOS platform by the Yost Group and published by Autodesk. After 3D Studio DOS Release 4, the product was rewritten for the Windows NT platform and renamed "3D Studio MAX". This version was also originally created by the Yost Group. It was released by Kinetix, which was at that time Autodesk's division of media and entertainment. Autodesk purchased the product at the second release update of the 3D Studio MAX version and internalized development entirely over the next two releases. Later, the product name was changed to "3ds max" (all lower case) to better comply with the naming conventions of Discreet, a Montreal-based software company which Autodesk had purchased. When it was re-released (release 7), the product was again branded with the Autodesk logo, the short name was changed back to "3ds Max" (upper and lower case), and the formal product name became the current "Autodesk 3ds Max".[4] Graphics professionals usually just refer to it as "Max", unless they are referring to Gmax, the stripped-down version used in 3D game modeling.

List of updates and service packs:[5]
• Autodesk® 3ds Max® 2014
• Autodesk® 3ds Max® 2013
• Autodesk® 3ds Max® 2012
• Autodesk® 3ds Max® 2011
• Autodesk® 3ds Max® 2010
• Autodesk® 3ds Max® 2009
• Autodesk® 3ds Max® 2008
• Autodesk® 3ds Max® 9
• Autodesk® 3ds Max® 8
• Autodesk® 3ds Max® 7.5
• Autodesk® 3ds Max® 7
• 3ds max® 6
• 3ds max® 5

Features

MAXScript
MAXScript is a built-in scripting language that can be used to automate repetitive tasks, combine existing functionality in new ways, develop new tools and user interfaces, and much more. Plugin modules can be created entirely within MAXScript.

Character Studio
Character Studio was a plugin which, since version 4 of Max, is integrated in 3D Studio Max; it helps users to animate virtual characters. The system works using a character rig or "Biped" skeleton, which has stock settings that can be modified and customized to fit character meshes and animation needs. This tool also includes robust editing tools for IK/FK switching, pose manipulation, layers and keyframing workflows, and sharing of animation data across different Biped skeletons. These "Biped" objects have other useful features that help accelerate the production of walk cycles and movement paths, as well as secondary motion.

Scene Explorer
Scene Explorer, a tool that provides a hierarchical view of scene data and analysis, facilitates working with more complex scenes. Scene Explorer has the ability to sort, filter, and search a scene by any object type or property (including metadata). Added in 3ds Max 2008, it was the first component to facilitate .NET managed code in 3ds Max outside of MAXScript.

DWG import
3ds Max supports both import and linking of DWG files. Improved memory management in 3ds Max 2008 enables larger scenes to be imported with multiple objects.

Texture assignment/editing
3ds Max offers operations for creative texture and planar mapping, including tiling, mirroring, decals, angle, rotate, blur, UV stretching, and relaxation; Remove Distortion; Preserve UV; and UV template image export. The texture workflow includes the ability to combine an unlimited number of textures, a material/map browser with support for drag-and-drop assignment, and hierarchies with thumbnails. UV workflow features include Pelt mapping, which defines custom seams and enables users to unfold UVs according to those seams; copy/paste materials, maps and colors; and access to quick mapping types (box, cylindrical, spherical).

General keyframing
Two keying modes, set key and auto key, offer support for different keyframing workflows. Fast and intuitive controls for keyframing, including cut, copy, and paste, let the user create animations with ease. Animation trajectories may be viewed and edited directly in the viewport.

Constrained animation
Objects can be animated along curves with controls for alignment, banking, velocity, smoothness, and looping, and along surfaces with controls for alignment. Animation can be weighted between multiple path curves, and the weighting itself can be animated. Objects can be constrained to animate with other objects in many ways, including look-at, orientation in different coordinate spaces, and linking at different points in time.


These constraints also support animated weighting between more than one target. All resulting constrained animation can be collapsed into standard keyframes for further editing.

Skinning
Either the Skin or Physique modifier may be used to achieve precise control of skeletal deformation, so the character deforms smoothly as joints are moved, even in the most challenging areas, such as shoulders. Skin deformation can be controlled using direct vertex weights, volumes of vertices defined by envelopes, or both (a small illustrative sketch of weight-based deformation follows this feature list). Capabilities such as weight tables, paintable weights, and saving and loading of weights offer easy editing and proximity-based transfer between models, providing the accuracy and flexibility needed for complicated characters. The rigid bind skinning option is useful for animating low-polygon models or as a diagnostic tool for regular skeleton animation. Additional modifiers, such as Skin Wrap and Skin Morph, can be used to drive meshes with other meshes and make targeted weighting adjustments in tricky areas.

Skeletons and inverse kinematics (IK)
Characters can be rigged with custom skeletons using 3ds Max bones, IK solvers, and rigging tools powered by motion capture data. All animation tools, including expressions, scripts, list controllers, and wiring, can be used along with a set of utilities specific to bones to build rigs of any structure and with custom controls, so animators see only the UI necessary to get their characters animated. Four plug-in IK solvers ship with 3ds Max: the history-independent solver, the history-dependent solver, the limb solver, and the spline IK solver. These powerful solvers reduce the time it takes to create high-quality character animation. The history-independent solver delivers smooth blending between IK and FK animation and uses preferred angles to give animators more control over the positioning of affected bones. The history-dependent solver can solve within joint limits and is used for machine-like animation. IK limb is a lightweight two-bone solver, optimized for real-time interactivity, ideal for working with a character arm or leg. The spline IK solver provides a flexible animation system with nodes that can be moved anywhere in 3D space. It allows for efficient animation of skeletal chains, such as a character's spine or tail, and includes easy-to-use twist and roll controls.

Integrated Cloth solver
In addition to reactor's cloth modifier, 3ds Max software has an integrated cloth-simulation engine that enables the user to turn almost any 3D object into clothing, or build garments from scratch. Collision solving is fast and accurate even in complex simulations. Local simulation lets artists drape cloth in real time to set up an initial clothing state before setting animation keys. Cloth simulations can be used in conjunction with other 3ds Max dynamic forces, such as Space Warps. Multiple independent cloth systems can be animated with their own objects and forces. Cloth deformation data can be cached to the hard drive to allow for nondestructive iterations and to improve playback performance.

Integration with Autodesk Vault
The Autodesk Vault plug-in, which ships with 3ds Max, consolidates users' 3ds Max assets in a single location, enabling them to automatically track files and manage work in progress. Users can easily and safely share, find, and reuse 3ds Max (and design) assets in a large-scale production or visualization environment.
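As a point of reference for the skinning description above, the following Python sketch is our own illustration of standard linear blend skinning, not 3ds Max's actual Skin modifier; it shows how per-vertex weights blend bone transforms:

import numpy as np

def skin_vertex(v, bone_matrices, weights):
    # Deform one vertex by blending 4x4 bone transforms with its weights
    # (weights are assumed to sum to 1).
    v_h = np.append(v, 1.0)                      # homogeneous coordinates
    blended = sum(w * (M @ v_h) for M, w in zip(bone_matrices, weights))
    return blended[:3]

# Two bones: identity (rest pose) and a 90-degree rotation about Z.
rest = np.eye(4)
rot = np.eye(4)
rot[:2, :2] = [[0.0, -1.0], [1.0, 0.0]]

v = np.array([1.0, 0.0, 0.0])
for w in (0.0, 0.5, 1.0):                        # weight toward the rotated bone
    print(w, skin_vertex(v, [rest, rot], [1.0 - w, w]))

A vertex weighted entirely to one bone follows it rigidly; intermediate weights give the smooth blending described above, at the cost of the well-known volume loss near strongly bent joints.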




Industry usage

Many recent films have made use of 3ds Max, or previous versions of the program under previous names, for CGI animation, such as Avatar and 2012, which contain computer-generated graphics from 3ds Max alongside live-action footage. 3ds Max has also been used in the development of 3D computer graphics for a number of video games. Architectural and engineering design firms use 3ds Max for developing concept art and previsualization.

Apples made with 3ds max

Educational usage

Educational programs at secondary and tertiary level use 3ds Max in their courses on 3D computer graphics and computer animation. Students in the FIRST competition for 3D animation are known to use 3ds Max.

Modeling techniques

Polygon modeling
Polygon modeling is more common with game design than any other modeling technique, as the very specific control over individual polygons allows for extreme optimization. Usually, the modeler begins with one of the 3ds Max primitives and, using such tools as bevel and extrude, adds detail to and refines the model. Versions 4 and up feature the Editable Polygon object, which simplifies most mesh editing operations and provides subdivision smoothing at customizable levels. Version 7 introduced the edit poly modifier, which allows the tools available in the editable polygon object to be used higher in the modifier stack (i.e., on top of other modifications).

NURBS or non-uniform rational B-spline

An alternative to polygons, NURBS gives a smoothed-out surface that eliminates the straight edges of a polygon model. NURBS is a mathematically exact representation of freeform surfaces, like those used for car bodies and ship hulls, which can be exactly reproduced at any resolution whenever needed. With NURBS, a smooth sphere can be created with only one face. The non-uniform property of NURBS brings up an important point: because they are generated mathematically, NURBS objects have a parameter space in addition to the 3D geometric space in which they are displayed. Specifically, an array of values called knots specifies the extent of influence of each control vertex (CV) on the curve or surface. Knots are invisible in 3D space and cannot be manipulated directly, but occasionally their behavior affects the visible appearance of the NURBS object. Parameter space is one-dimensional for curves, which have only a single U dimension topologically, even though they exist geometrically in 3D space. Surfaces have two dimensions in parameter space, called U and V.[citation needed] NURBS curves and surfaces have the important properties of not changing under the standard geometric affine transformations (transforms), or under perspective projections. The CVs have local control of the object: moving a CV or changing its weight does not affect any part of the object beyond the neighboring CVs. (This property can be overridden by using the Soft Selection controls.) Also, the control lattice that connects CVs surrounds the surface. This is known as the convex hull property.[citation needed] A short sketch of how knots and weights enter a curve evaluation follows.
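To make the role of knots and CV weights concrete, here is a small Python sketch; it is an illustration of the standard de Boor evaluation algorithm, not 3ds Max code, and all names in it are our own:

def de_boor(u, degree, knots, ctrl_pts, weights):
    # Evaluate a NURBS curve at parameter u.
    # knots: non-decreasing knot vector, len = len(ctrl_pts) + degree + 1
    # ctrl_pts: (x, y, z) control vertices (CVs); weights: one per CV
    # Find the knot span k such that knots[k] <= u.
    k = max(i for i in range(len(knots) - 1) if knots[i] <= u)
    # Work in homogeneous coordinates (x*w, y*w, z*w, w) so the rational
    # (weighted) case reduces to the ordinary B-spline recurrence.
    d = [[c * w for c in p] + [w]
         for p, w in zip(ctrl_pts[k - degree:k + 1], weights[k - degree:k + 1])]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            alpha = (u - knots[i]) / (knots[i + degree - r + 1] - knots[i])
            d[j] = [(1 - alpha) * a + alpha * b for a, b in zip(d[j - 1], d[j])]
    x, y, z, w = d[degree]
    return (x / w, y / w, z / w)   # project back from homogeneous space

# A quadratic NURBS arc: clamped knot vector, one CV weighted more heavily.
knots = [0, 0, 0, 1, 2, 2, 2]
cvs = [(0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0)]
wts = [1.0, 2.0, 1.0, 1.0]        # the second CV attracts the curve locally
print(de_boor(1.0, 2, knots, cvs, wts))

Raising one weight pulls the curve toward that CV without affecting the rest of the curve, which is exactly the local-control property described above.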



Surface tool/editable patch object

The Surface tool was originally a third-party plugin; Kinetix acquired it and has included the feature since version 3.0.[citation needed] The surface tool works by creating common 3ds Max splines and then applying a modifier called "surface". This modifier makes a surface from every three or four vertices in a grid. It is often seen as an alternative to "mesh" or "NURBS" modeling, as it enables a user to interpolate curved sections with straight geometry (for example a hole through a box shape). Although the surface tool is a useful way to generate parametrically accurate geometry, it lacks the "surface properties" found in the similar Edit Patch modifier, which enables a user to maintain the original parametric geometry whilst being able to adjust "smoothing groups" between faces.[citation needed]

Predefined primitives

This is a basic method, in which one models something using only boxes, spheres, cones, cylinders and other predefined objects from the list of Predefined Standard Primitives or the list of Predefined Extended Primitives. One may also apply boolean operations, including subtract, cut and connect. For example, one can make two spheres which will work as blobs that will connect with each other; these are called metaballs.[citation needed]

Some of the 3ds Max primitives as they appear in the wireframe view of 3ds Max 9:
3ds Max Standard Primitives: Box (top right), Cone (top center), Pyramid (top left), Sphere (bottom left), Tube (bottom center) and Geosphere (bottom right)
3ds Max Extended Primitives: Torus Knot (top left), ChamferCyl (top center), Hose (top right), Capsule (bottom left), Gengon (bottom, second from left), OilTank (bottom, second from right) and Prism (bottom right)

Standard primitives

Box: Produces a rectangular prism. An alternative variation of box, called Cube, proportionally constrains the length, width and height of the box.
Cylinder: Produces a cylinder.
Torus: Produces a torus, or a ring, with a circular cross section, sometimes referred to as a doughnut.
Teapot: Produces a Utah teapot. Since the teapot is a parametric object, the user can choose which parts of the teapot to display after creation. These parts include the body, handle, spout and lid.
Cone: Produces upright or inverted cones.
Sphere: Produces a full sphere, hemisphere, or other portion of a sphere.
Tube: Produces round or prismatic tubes. The tube is similar to the cylinder with a hole in it.
Pyramid: Produces a pyramid with a square or rectangular base and triangular sides.
Plane: Produces a special type of flat polygon mesh that can be enlarged by any amount at render time. The user can specify factors to magnify the size or number of segments, or both. Modifiers such as displace can be added to a plane to simulate a hilly terrain.
Geosphere: Produces spheres and hemispheres based on three classes of regular polyhedrons.

Extended primitives

Hedra: Produces objects from several families of polyhedra.
ChamferBox: Produces a box with beveled or rounded edges.
OilTank: Creates a cylinder with convex caps.
Spindle: Creates a cylinder with conical caps.
Gengon: Creates an extruded, regular-sided polygon with optionally filleted side edges.
Prism: Creates a three-sided prism with independently segmented sides.
Torus knot: Creates a complex or knotted torus by drawing 2D curves in the normal planes around a 3D curve. The 3D curve (called the Base Curve) can be either a circle or a torus knot. It can be converted from a torus knot object to a NURBS surface.
ChamferCyl: Creates a cylinder with beveled or rounded cap edges.
Capsule: Creates a cylinder with hemispherical caps.
L-Ext: Creates an extruded L-shaped object.
C-Ext: Creates an extruded C-shaped object.
Hose: Creates a flexible object, similar to a spring.

Rendering

Scanline rendering
The default rendering method in 3ds Max is scanline rendering. Several advanced features have been added to the scanline renderer over the years, such as global illumination, radiosity, and ray tracing.

mental ray
mental ray is a production-quality renderer developed by mental images. It is integrated into the later versions of 3ds Max and is a powerful ray tracing renderer with bucket rendering, a technique that allows the rendering task for a single image to be distributed efficiently between several computers using the TCP network protocol (a toy tile-splitting sketch follows this list).

RenderMan
A third-party connection tool to RenderMan pipelines is also available for those that need to integrate Max into RenderMan render farms. RenderMan is used by Pixar for rendering several of their CGI animated films.

V-Ray
A third-party render engine plug-in for 3D Studio MAX. It is widely used, frequently substituting for the standard and mental ray renderers which come bundled with 3ds Max. V-Ray continues to be compatible with older versions of 3ds Max.

Brazil R/S
A third-party high-quality photorealistic rendering system created by SplutterFish, LLC, capable of fast ray tracing and global illumination.

FinalRender
Another third-party ray tracing render engine, created by Cebas, capable of simulating a wide range of real-world physical phenomena.

Fryrender
A third-party photorealistic, physically based, unbiased and spectral renderer created by RandomControl, capable of very high quality and realism.

Arion Render
A third-party hybrid GPU+CPU interactive, unbiased ray tracer created by RandomControl, based on Nvidia CUDA.


Autodesk 3ds Max Indigo Renderer A third-party photorealistic renderer with plugins for 3ds max. Maxwell Render A third-party photorealistic rendering system created by Next Limit Technologies providing robust materials and highly accurate unbiased rendering. Octane Render A third party unbiased GPU raytracer with plugins for 3ds max, based on Nvidia CUDA. BIGrender 3.0 Another third-party rendering plugin. Capable of overcoming 3DS rendering memory limitations with rendering huge pictures. Luxrender An open-source raytracer supporting 3DS Max, Cinema 4D, Softimage, and Blender. Focuses on photorealism by simulating real light physics as much as possible.

Licensing

Earlier versions (up to and including 3D Studio Max R3.1) required a special copy protection device (called a dongle) to be plugged into the parallel port while the program was run, but later versions incorporated software-based copy prevention methods instead. Current versions require online registration. Due to the high price of the commercial version of the program, Autodesk also offers a free student version, which explicitly states that it is to be used for "educational purposes only". The student version has identical features to the full version, but is only for single use and cannot be installed on a network. The student license expires after three years, at which time the user, if they are still a student, may download the latest version, renewing the license for another three years. Autodesk also sells a perpetual student license which allows 3ds Max to be used for the lifetime of the original purchaser, who must be a student only at the time of purchase. The software cannot be used commercially, but Autodesk offers a significant discount when upgrading to a commercial license. Because final renders are watermark-free, the perpetual student license is suitable for portfolio creation. Typically, the perpetual student license version of 3ds Max is bundled with Maya, Softimage XSI, MotionBuilder, Mudbox and SketchBook Pro as a complete package.

Notes

[1] http://usa.autodesk.com/adsk/servlet/ps/dl/item?siteID=123112&id=15458146&linkID=9241177
[2] http://www.autodesk.com/3dsmax
[3] "Autodesk 3ds Max — Detailed Features" (http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13567426), 2008-03-25
[4] History of Autodesk 3ds Max (http://area.autodesk.com/maxturns20/history)
[5] http://usa.autodesk.com/adsk/servlet/ps/dl/index?id=2334435&linkID=9241178&siteID=123112

External links

• 3ds Max official site (http://www.autodesk.com/3dsmax)
• 3ds Max resource site (http://area.autodesk.com/)
• 3ds Max free video tutorials (http://www.cgmeetup.net/home/tutorials/autodesk-3d-studio-max/)
• Autodesk 3ds Max 2014 new features video (http://www.cgmeetup.net/home/autodesk-maya-max-and-softimage-2014-sneak-peek-videos/)
• 3ds Max exporter (http://blog.sketchfab.com/3ds-max-exporter)
• 3D Studio Max (http://www.dmoz.org/Computers/Software/Graphics/3D/Rendering_and_Modelling/3D_Studio_Max//) at the Open Directory Project
• History of 3ds Max (http://wiki.cgsociety.org/index.php/3ds_Max_History)
• Pre-history of 3ds Max (http://www.asterius.com/atari/)

Anim8or

Anim8or screenshot

Developer(s): R. Steven Glanville
Initial release: July 20, 1999
Stable release: 0.95c / April 2, 2007
Preview release: 0.97d / September 21, 2008
Development status: Active [1]
Operating system: Microsoft Windows
Type: 3D modeling and animation
License: Freeware
Website: www.anim8or.com [2]

Anim8or is a freeware OpenGL-based 3D modeling and animation program by R. Steven Glanville, a software engineer at NVidia. Currently at version 0.97, it is a compact program with several tools that would normally be expected in high-end, paid software. To date, every version released has been under 2 MB, despite the fact that it does not make full use of the Windows native interface, carrying some graphical elements of its own. Although few official tutorials have been posted by the author, many other users have posted their own on sites such as YouTube and the Anim8or home page. While Anim8or was once comparable to other freeware 3D animation software such as Blender, it has seen less progression in recent years.

Development

On July 20, 1999, a message was posted to the newsgroup comp.graphics.packages.3dstudio, introducing the first version of Anim8or to the public.[3] In its first week, the original version was downloaded almost 100 times.[4] The next version, 0.2, was released on September 6, 1999, containing bug fixes and the ability to save images as JPEG files. In the years since, newer versions have been released, introducing features such as undo and redo commands, keyboard shortcuts, an improved renderer and morph targets. With each new version, the popularity of Anim8or has grown. It has been featured in several magazines including 3D User, Freelog, c't and the Lockergnome newsletter. Anim8or's latest stable version, 0.95, was released to the public on November 4, 2006, although beta versions were available earlier for users wanting to test them and provide feedback. This version introduced features such as graphic material shaders, the ASL scripting language, plug-in support and numerous bug fixes. Version 0.95a was posted on December 2, 2006 and contains further bug fixes. Anim8or's mascot is a simple red robin, aptly named Robin, that most users learn to model and animate in Anim8or's "A Simple Walk Tutorial". Users are often also very familiar with the eggplant, a model first designed by Steven to demonstrate 3D printers at SIGGRAPH.

It is likely the first model most Anim8or modellers have ever created, as it is taught in the introductory tutorial to demonstrate the basics of the modeler and the tools available.

Layout

Anim8or's interface is separated into four sections, each with its own tool set:
• Object editor - individual objects are stored and edited within the object editor. Objects may be composed of primitives such as spheres, or more complex shapes made by extruding polygons along the z axis and adjusting the vertices. Materials are then applied, per face if desired. The user also has the option to make morph targets for each object.
• Figure editor - in order to animate more complex models, they can be given a skeleton. Users can give each "bone" the ability to rotate on all three axes within certain limits and attach individual objects to each bone.
• Sequence editor - this is an extension of the figure editor, allowing the use of key frame animation to animate individual bones with a degree of accuracy of 0.1°.
• Scene editor - elements from the three other sections are imported and arranged in the scene editor. The key frames from the sequence editor can be modified, along with other variables, such as a figure's position in 3D space or the state of a morph target.
An image can be rendered in any of the four editors, but only in the scene editor can lights and other graphical elements be used. The interface is a mixture of the Windows native interface, for such elements as the right-click context menu, and one specific to Anim8or, such as the graphical icons in the left-hand toolbar.

Features

Although it is not as powerful as high-end commercial programs, it contains many features that are important to a 3D computer graphics package while remaining free. Such features include:
• 3D modeler with primitives such as spheres, cubes, and cylinders
• Mesh modification and subdivision
• Splines, extrusions, lathing, modifiers, bevelling and warping
• TrueType font support allowing for 2D and 3D text
• The ability to import .3DS, .LWO and .OBJ files for modification
• The ability to export .3DS, .OBJ, .VTX and .C files for use in external programs
• Plug-in support, using the Anim8or Scripting Language (ASL)
• 3D object browser to allow the user to view 3D files in a specified directory
• Textures in .BMP, .GIF and .JPG formats
• Environment maps, bump maps, transparency and specularity, amongst others
• Character editor with joints
• Morph targets
• Renderer supporting fog; infinite, local and spot lights; anti-aliasing; alpha channels and depth channels
• Printing directly from the program
• Volumetric shadows as well as ray-traced hard and soft shadows
• A plain text file format, allowing for the development of external tools such as Terranim8or
• Hierarchies

A basic feature list can also be found at the Anim8or website [5], although the list is incomplete.




System requirements

As far as multimedia standards go, Anim8or has very low system requirements. It is worth noting, however, that certain features, particularly shadows, anti-aliasing and Anim8or's resident ray tracer, quickly become burdens on a computer's resources. While originally designed to work with Windows, users have reported running it successfully on Apple computers with Connectix Virtual PC and on Linux with WINE. This may be partially due to Anim8or's stand-alone design, which means that it can be copied onto a USB memory stick or other removable media and run directly from it on any computer that meets the minimum specification. The minimum requirements are:
• 300 MHz processor
• Windows 95 or higher
• OpenGL graphics card with full ICD support
• 64 MB of RAM (128 MB recommended, 256 MB with Windows XP)
• 5 MB of hard drive space (the application is less than 2 MB, but the manual and project/texture files can occupy several times this space)

Current preview features

The fourth v0.97 preview is called the v0.97d preview, dated September 21, 2008. The major changes are (bug fixes first):
• #097-019 - ART AA renders are too bright - fixed.
• #097-021 - Importing an object with the same name can crash - fixed.
• #097-009 - #097-022 - Copying modifiers without a bound shape crashes - fixed.
• Miscellaneous bugs: the ASL constant PI was 3; it is now 3.141...
• Various "..." buttons didn't connect to controllers in the scene editor.
• Other minor fixes: small memory leaks patched, grid not always drawn, etc.

Render made in Anim8or 0.97D utilizing new features including reflections and Ambient Occlusion

Newest features:
• Still image render size is saved with a project, so it doesn't have to be reset each time a project is reloaded.
• Click-dragging in the render window will move the image around, allowing one to view all of an image even when it is larger than the window. It works while rendering movies as well, but multi-threaded rendering needs to be enabled.
• Scenes and sequences can have attributes.
• ART ray tracer: new RayDepth integer attribute for the maximum level of rays to trace (for reflections and transparency). The default is 12 when RayDepth isn't defined.
• ART ray tracer: new AmbientOccluder integer attribute. When set to 1, AA renders trace rays to the background for the ambient component.


Future releases

Not much is known about what features will be modified or included in future versions, although users have posted suggestions on related forums. Inverse kinematics will likely be added,[6] as it was included in the latest release but disabled because it was not quite ready for use.[7] According to the Anim8or forums, an admin heard back from the creator in 2011 and said that a future release was not to be expected for quite a while. Suspected planned features are:
• Fast AVI creation using OpenGL
• Advanced material manager
Some of these features may not be included in the next release.
• "Anim8or has come a long way since the first release called v0.1. There are still many areas that need improvement, primarily the renderer, but it's getting close to what I had originally imagined as the magic v1.0. I don't plan on stopping there, but it'll be a nice milestone along the way." - R. Steven Glanville[8]
More recent discussions have suggested that a new version is in development; however, it may be a while before such results are seen. On May 23, 2013, a v0.97e pre-beta was released, including many new features. The Anim8or forum's admin said that a stable 0.97 release would come anytime from June to November.

Community

The Anim8or community is hosted on two forums: the official forum on the Anim8or.com website, and a user-run forum at Anim8orWorld [9], which has a built-in chat/shoutbox, media gallery and modelling workshop. Anim8orWorld.com incorporates both Animanon and Dotan8, the community magazine. There are many fan sites hosted by community members with user-created tutorials, image galleries and programs.

References

[1] http://www.anim8or.com/smf/index.php?topic=4644.0
[2] http://www.anim8or.com/main
[3] Glanville, R. Steven (July 20, 1999). "Anim8or: New free animation software available". comp.graphics.packages.3dstudio. Archived by Google Groups. (http://groups.google.ca/group/comp.graphics.packages.3dstudio/msg/cc12cc557013bb?hl=en&lr=)
[4] Glanville, R. Steven. Thanks for the support (http://www.anim8or.com/news/index.html). July 27, 1999. URL accessed at 03:02, 20 January 2006 (UTC)
[5] http://www.anim8or.com/main/welcome.html#Features
[8] Glanville, R. Steven. Anim8or.com home page (http://www.anim8or.com/main/welcome.html). January 29, 2005. URL accessed at 03:02, 20 January 2006 (UTC)
[9] http://Anim8orWorld.com

External links

• Official Anim8or site, containing tutorials, news, a gallery, etc. (http://www.anim8or.com/)
• Anim8orWorld (http://anim8orWorld.com/Forum)
• Anim8or User Manual (http://www.anim8or.com/manual/index.html)
• Terranim8or: an external tool for developing terrain and special effects (http://www.biederman.net/leslie/terranim8or/terranim8or.htm)
• Original newsgroup message introducing Anim8or, archived by Google Groups (http://groups.google.ca/[email protected])
• Description of Anim8or's file format, .AN8 (http://www.anim8or.com/resources/an8_format.txt)
• User-run Anim8or wiki (http://wiki.anim8or.org/)
• User-led movie projects, "Anim8or: the Movie" (http://anim8orstudios.com/)
• Specification of the Anim8or Scripting Language (http://www.anim8or.com/scripts/spec/Anim8or_Scripting_Language.html)
• List of available plugins (http://homepage.ntlworld.com/w.watson3/main/parameteric.html)
• Web browser plugin for showing .an8 files on webpages (http://www.3dmlw.com)


ASAP


Advanced Systems Analysis Program

Stable release: 2010 Version 1, Release 1 / December 10, 2010
Development status: Maintained
Operating system: Windows
Type: CAD software
License: Proprietary. Copyright © 1982-2010, Breault Research Organization, Inc.
Website: [1]

The Advanced Systems Analysis Program (ASAP) is optical engineering software used to simulate optical systems. ASAP can handle coherent as well as incoherent light sources. It is a non-sequential ray tracing tool, which means that it can be used not only to analyze lens systems but also for stray light analysis. It uses a Gaussian beam approximation for the analysis of coherent sources.
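For reference, the Gaussian beam approximation mentioned above rests on the textbook beam-propagation relations. The following short Python sketch (our own illustration, not ASAP's engine; variable names are ours) computes how the beam radius grows away from its waist:

import math

def gaussian_beam_radius(z, w0, wavelength):
    # 1/e^2 beam radius w(z) at distance z from a waist of radius w0.
    z_r = math.pi * w0 ** 2 / wavelength        # Rayleigh range
    return w0 * math.sqrt(1.0 + (z / z_r) ** 2)

# Example: a 633 nm HeNe beam focused to a 100 micron waist.
w0 = 100e-6                                     # waist radius in metres
lam = 633e-9                                    # wavelength in metres
for z in (0.0, 0.1, 0.5, 1.0):                  # propagation distance in metres
    print(z, gaussian_beam_radius(z, w0, lam))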

External links

• NASA Tech Briefs article on ASAP [2]
• Breault Research Organization website [3]

References

[1] http://www.breault.com/software/asap.php
[2] http://www.techbriefs.com/component/option,com_docman/task,doc_details/gid,2813/Itemid,41
[3] http://www.breault.com/

Blender


Blender

Blender 2.66 welcome screen

Developer(s): Blender Foundation
Initial release: 1995
Stable release: 2.68 / July 18, 2013
Written in: C, C++ and Python
Operating system: FreeBSD, GNU/Linux, Mac OS X and Microsoft Windows[1]
Type: 3D computer graphics software
License: GNU General Public License v2 or later
Website: www.blender.org [2]

Blender is a free and open-source 3D computer graphics software product used for creating animated films, visual effects, art, 3D printed models, interactive 3D applications and video games. Blender's features include 3D modeling, UV unwrapping, texturing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, animating, match moving, camera tracking, rendering, video editing and compositing. It also features a built-in game engine.

History The Dutch animation studio Neo Geo and Not a Number Technologies (NaN) developed Blender as an in-house application. The primary author was Ton Roosendaal, who previously wrote a ray tracer called Traces for Amiga in 1989. The name Blender was inspired by a song by Yello, from the album Baby.[3] Roosendaal founded NaN in June 1998 to further develop and distribute the program. They initially distributed the program as shareware until NaN went bankrupt in 2002.

The desktop in version 2.63

Blender


The creditors agreed to release Blender under the GNU General Public License, for a one-time payment of €100,000 (US$100,670 at the time). On July 18, 2002, Roosendaal started a Blender funding campaign to collect donations, and on September 7, 2002, announced that they had collected enough funds and would release the Blender source code. Today, Blender is free, open-source software and is—apart from the Blender Institute's two half-time and two full-time employees—developed by the community.[4] The Blender Foundation initially reserved the right to use dual licensing, so that, in addition to GNU GPL, Blender would have been available also under the Blender License that did not require disclosing source code but required payments to the Blender Foundation. However, they never exercised this option and suspended it indefinitely in 2005.[5] Currently, Blender is solely available under GNU GPL.

Blender 2.4 screenshot

Suzanne

In January–February 2002 it was clear that NaN could not survive, and would close its doors in March. Nevertheless, they put out one more release: 2.25. As a sort of easter egg, and a last personal tag, the artists and developers decided to add a 3D model of a chimpanzee. It was created by Willem-Paul van Overbruggen (SLiD3), who named it Suzanne after the orangutan in the Kevin Smith film Jay and Silent Bob Strike Back. Suzanne is Blender's alternative to more common test models such as the Utah Teapot and the Stanford Bunny. A low-polygon model with only 500 faces, Suzanne is often used as a quick and easy way to test material, animation, rigs, texture, and lighting setups, and is also frequently used in joke images[citation needed]. Suzanne is still included in Blender. The largest Blender contest gives out an award called the Suzanne Awards.



Features

Blender has a relatively small installation size, of about 70 megabytes for builds and 115 megabytes for official releases. Official versions of the software are released for Linux, Mac OS X, Microsoft Windows, and FreeBSD,[6] in both 32-bit and 64-bit variants. Though it is often distributed without the extensive example scenes found in some other programs,[7] the software contains features that are characteristic of high-end 3D software.[8] Among its capabilities are:
• Support for a variety of geometric primitives, including polygon meshes, fast subdivision surface modeling, Bezier curves, NURBS surfaces, metaballs, multi-resolution digital sculpting (including map baking, remeshing, resymmetrizing and decimation), outline fonts, and a new n-gon modeling system called B-mesh.

Steps of forensic facial reconstruction of a mummy made on Blender

• Internal render engine with scanline ray tracing, indirect lighting, and ambient occlusion that can export in a wide variety of formats.
• A path tracer render engine called Cycles, which can use the GPU to assist rendering. Cycles has supported Open Shading Language shading since Blender 2.65.[9]
• Integration with a number of external render engines through plugins.
• Keyframed animation tools including inverse kinematics, armature (skeletal), hook, curve and lattice-based deformations, shape keys (morphing), non-linear animation, constraints, and vertex weighting.
• Simulation tools for soft body dynamics including mesh collision detection, LBM fluid dynamics, smoke simulation, Bullet rigid body dynamics, and an ocean generator with waves.
• A particle system that includes support for particle-based hair.
• Modifiers to apply non-destructive effects.
• Python scripting for tool creation and prototyping, game logic, importing and/or exporting from other formats, task automation and custom tools (a small scripting sketch follows this list).
• Basic non-linear video/audio editing.
• The Blender Game Engine, a sub-project, offers interactivity features such as collision detection, a dynamics engine, and programmable logic. It also allows the creation of stand-alone, real-time applications ranging from architectural visualization to video game construction.
• A fully integrated node-based compositor within the rendering pipeline, accelerated with OpenCL.
• Procedural and node-based textures, as well as texture painting, projective painting, vertex painting, weight painting and dynamic painting.
• Realtime control during physics simulation and rendering.
• Camera and object tracking.
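As a small taste of the Python scripting mentioned in the list above, the following sketch is meant to be run from inside Blender, where the bpy module is available; it is written against the 2.6x-era API and the object and material names are our own. It adds a sphere, gives it a material, and keyframes a simple move:

import bpy

# Add a UV sphere and smooth-shade it.
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))
obj = bpy.context.active_object
bpy.ops.object.shade_smooth()

# Create a simple material and assign it to the new object.
mat = bpy.data.materials.new(name="DemoMaterial")
mat.diffuse_color = (0.8, 0.2, 0.2)      # RGB triple in the 2.6x-era API
obj.data.materials.append(mat)

# Keyframe a two-second move along X (assuming the default 24 fps scene).
obj.location.x = 0.0
obj.keyframe_insert(data_path="location", frame=1)
obj.location.x = 3.0
obj.keyframe_insert(data_path="location", frame=48)

The same API is what import/export add-ons and custom tools are built on.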



Using the node editor to create anisotropic metallic materials

A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay

Blender can create very high resolution models and renderings

Sintel and her dragon rendered with Blender. Blender offers the ability to create realistic-looking character models.

User interface

Blender has had a reputation of being difficult to learn for users accustomed to other 3D graphics software[citation needed]. Nearly every function has a direct keyboard shortcut and there can be several different shortcuts per key. Since Blender became free software, there has been effort to add comprehensive contextual menus as well as to make the tool usage more logical and streamlined. There have also been efforts to visually enhance the user interface, with the introduction of color themes, transparent floating widgets, a new and improved object tree overview, and other small improvements (such as a color picker widget); Blender's user interface underwent a significant update during the 2.5x series. Blender's user interface incorporates the following concepts:

Editing modes
The two primary modes of work are Object Mode and Edit Mode, which are toggled with the Tab key. Object Mode is used to manipulate individual objects as a unit, while Edit Mode is used to manipulate the actual object data. For example, Object Mode can be used to move, scale, and rotate entire polygon meshes, and Edit Mode can be used to manipulate the individual vertices of a single mesh. There are also several other modes, such as Vertex Paint, Weight Paint, and Sculpt Mode. The 2.45 release also had the UV Mapping Mode, but it was merged with Edit Mode in 2.46 Release Candidate 1.[10]

Hotkey utilization
Most of the commands are accessible via hotkeys. Until the 2.x and especially the 2.3x versions, this was in fact the only way to give commands, and this was largely responsible for creating Blender's reputation as a difficult-to-learn program. The new versions have more comprehensive GUI menus.

Numeric input
Numeric buttons can be "dragged" to change their value directly without the need to aim at a particular widget, thus saving screen real estate and time. Both sliders and number buttons can be constrained to various step sizes with modifiers like the Ctrl and Shift keys.


Python expressions can also be typed directly into number entry fields, allowing mathematical expressions to specify values.

Workspace management
The Blender GUI is made up of one or more screens, each of which can be divided into sections and subsections that can be of any type of Blender's views or window-types. Each window-type's own GUI elements can be controlled with the same tools that manipulate the 3D view. For example, one can zoom in and out of GUI buttons in the same way one zooms in and out in the 3D viewport. The GUI viewport and screen layout is fully user-customizable. It is possible to set up the interface for specific tasks such as video editing or UV mapping or texturing by hiding features not utilized for the task.[11]

Hardware requirements

Blender has very low hardware requirements compared to other 3D suites.[12][13] However, for advanced effects and high-poly editing, a more powerful system is needed.

Blender hardware requirements[14]

Hardware: Minimum / Recommended / Production-standard
Processor: 2 GHz dual core / quad core / dual 8-core
Memory: 2 GB RAM / 8 GB / 16 GB
Graphics card: OpenGL card with 256 MB video RAM / OpenGL card with 1 GB video RAM (CUDA or OpenCL for GPU rendering) / dual OpenGL cards with 3 GB RAM, ATI FireGL or Nvidia Quadro
Display: 1280×768 pixels, 24-bit color / 1920×1080 pixels, 24-bit color / 2× 1920×1080 pixels, 24-bit color
Input: two-button mouse / three-button mouse / three-button mouse and a graphics tablet

File format

Blender features an internal file system that allows one to pack multiple scenes into a single file (called a ".blend" file).
• All of Blender's ".blend" files are forward, backward, and cross-platform compatible with other versions of Blender, with the following exceptions:
  • Loading animations stored in post-2.5 files in Blender pre-2.5. This is due to the reworked animation subsystem introduced in Blender 2.5 being inherently incompatible with older versions.
  • Loading meshes stored in post-2.63 files. This is due to the introduction of BMesh [15], a more versatile and feature-rich mesh format.
• Snapshot ".blend" files can be auto-saved periodically by the program, making it easier to survive a program crash.
• All scenes, objects, materials, textures, sounds, images and post-production effects for an entire animation can be stored in a single ".blend" file. Data loaded from external sources, such as images and sounds, can also be stored externally and referenced through either an absolute or relative pathname. Likewise, ".blend" files themselves can also be used as libraries of Blender assets.
• Interface configurations are retained in the ".blend" files, such that what you save is what you get upon load. This file can be stored as "user defaults" so this screen configuration, as well as all the objects stored in it, is used every time you load Blender.
The actual ".blend" file is similar to the EA Interchange File Format, starting with its own header (for example BLENDER_v248) that specifies the version, endianness and pointer size, followed by the file's DNA (a full specification of the data format used) and, finally, a collection of binary blocks storing actual data.

Blender

226

DNA block in .blend files means the format is self-descriptive and any software able to decode the DNA can read any .blend file, even if some fields or data block types must be ignored. Although it is relatively difficult to read and convert a ".blend" file to another format using external tools, there are several software packages able to do this, for example readblend. A wide variety of import/export scripts that extend Blender capabilities (accessing the object data via an internal API) make it possible to inter-operate with other 3D tools. CAD software uses surface description models that are significantly different from the ones used in Blender because Blender is not designed for CAD. Therefore, the direct import or export of CAD files is not possible. Jeroen Bakker documented the Blender file format to allow inter-operation with other tooling. The document can be found at the The mystery of the blend website.[16] A DNA structure browser[17] is also available on this site. Blender organizes data as various kinds of "data blocks", such as Objects, Meshes, Lamps, Scenes, Materials, Images and so on. An object in Blender consists of multiple data blocks – for example, what the user would describe as a polygon mesh consists of at least an Object and a Mesh data block, and usually also a Material and many more, linked together. This allows various data blocks to refer to each other. There may be, for example, multiple Objects that refer to the same Mesh, allowing Blender to keep a single copy of the mesh data in memory, and making subsequent editing of the shared mesh result in shape changes in all Objects using this Mesh. This data-sharing approach is fundamental to Blender's philosophy and its user interface and can be applied to multiple data types. Objects, meshes, materials, textures etc. can also be linked to from other .blend files, allowing the use of .blend files as reusable resource libraries.
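As a concrete illustration of the header layout described above, the following is a minimal sketch based on the conventions documented in "The mystery of the blend"; it ignores the gzip-compressed variant of .blend files and is not official Blender tooling.

    def read_blend_header(path):
        """Parse the 12-byte .blend header: magic, pointer size, endianness, version."""
        with open(path, 'rb') as f:
            header = f.read(12)
        if header[:7] != b'BLENDER':
            raise ValueError('not an uncompressed .blend file')
        pointer_size = 4 if header[7:8] == b'_' else 8           # '_' = 32-bit, '-' = 64-bit
        endianness = 'little' if header[8:9] == b'v' else 'big'  # 'v' = little, 'V' = big
        version = header[9:12].decode('ascii')                   # e.g. '248' for BLENDER_v248
        return pointer_size, endianness, version

The DNA block that follows the header is what makes the rest of the file decodable in a version-independent way.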

Comparison with other 3D software
A 2007 article stated that Blender's interface was not up to industry standards, but was nevertheless suited to fast workflow and was sometimes more intuitive; poor documentation was also criticized.[18] Blender is a dominant open-source product with a range of features comparable to mid- to high-range commercial, proprietary software.[18] In 2010, CGenie rated Blender as a fledgling product, with the majority of its users being "hobbyists" rather than students or professionals, but with its high standards rising yearly.[19] They also reported that users thought Blender needed more development and required more compatibility with other programs.[20] Blender is also used by scientific groups all over the world,[21] in tandem with applications such as MATLAB. In 2011, Blender 2.5 was released; featuring a completely redesigned user interface, it aims to improve workflow and ease of use.[22] During beta testing, the Sintel animators considered Blender 2.5's animation system as good as, or better than, that of some commercial packages.[23]

Development
Since the source was opened, Blender has seen significant refactoring of the initial codebase and major additions to its feature set. Improvements include an animation system refresh;[24] a stack-based modifier system;[25] an updated particle system[26] (which can also be used to simulate hair and fur); fluid dynamics; soft-body dynamics; GLSL shader support[27] in the game engine; advanced UV unwrapping;[28] a fully recoded render pipeline, allowing separate render passes and "render to texture"; node-based material editing and compositing; and projection painting.[29]

Game engine GLSL materials

Some of these developments were fostered by Google's Summer of Code program, in which the Blender Foundation has participated since 2005. The current stable release version is 2.67b (as of 30 May 2013), a second bugfix release to version 2.67, which was released on May 7, 2013, and the 2.67a update released on May 22, 2013. New features in 2.67 included:
• a new Freestyle rendering engine
• an improved paint system
• improved compositing and nodes, and improvements to the Cycles render engine, the motion tracker, mesh editing tools, the 3D printing add-on, and other new add-ons.
Besides new features, 260 bugs from previous releases were fixed.

Support In the month following the release of Blender v2.44, it was downloaded 800,000 times;[30] this worldwide user base forms the core of the support mechanisms for the program. Most users learn Blender through community tutorials and discussion forums on the internet such as Blender Artists;[31] however, another learning method is to download and inspect ready-made Blender models. Numerous other sites, for example BlenderArt Magazine[32]—a free, downloadable magazine with each issue handling a particular area in 3D development—and BlenderNation [33], provide information on everything surrounding Blender, showcase new techniques and features, and provide tutorials and other guides.

Use in the media industry

Big Buck Bunny poster

Sintel promotional poster


Tears of Steel promotional poster

Blender started out as an in-house tool for a Dutch commercial animation company, NeoGeo.[34] Blender has been used for television commercials in several parts of the world, including Australia,[35] Iceland,[36] Brazil,[37][38] Russia[39] and Sweden.[40] The first large professional project that used Blender was Spider-Man 2, where it was primarily used to create animatics and pre-visualizations for the storyboard department. As an animatic artist working in the storyboard department of Spider-Man 2, I used Blender's 3D modeling and character animation tools to enhance the storyboards, re-creating sets and props, and putting into motion action and camera moves in 3D space to help make Sam Raimi's vision as clear to other departments as possible.[41] – Anthony Zierhut,[42] Animatic Artist, Los Angeles. The French-language film Friday or Another Day (Vendredi ou un autre jour) was the first 35 mm feature film to use Blender for all the special effects, made on GNU/Linux workstations.[43] It won a prize at the Locarno International Film Festival. The special effects were by Digital Graphics[44] of Belgium. Blender has also been used for shows on the History Channel, alongside many other professional 3D graphics programs.[45] Tomm Moore's The Secret of Kells, which was partly produced in Blender by the Belgian studio Digital Graphics, was nominated for an Oscar in the category 'Best Animated Feature Film'.[46] Plumíferos, a commercial animated feature film created entirely in Blender,[47] premiered in February 2010 in Argentina. Its main characters are anthropomorphic talking animals.

Open Projects
Every one to two years, the Blender Foundation announces a new creative project to help drive innovation in Blender.

Elephants Dream (Open Movie Project: Orange)
In September 2005, some of the most notable Blender artists and developers began working on a short film using primarily free software, in an initiative known as the Orange Movie Project, hosted by the Netherlands Media Art Institute (NIMk). The resulting film, Elephants Dream, premiered on March 24, 2006. In response to the success of Elephants Dream, the Blender Foundation founded the Blender Institute to do additional projects, with two announced projects: Big Buck Bunny, also known as "Project Peach" (a 'furry and funny' short open animated film project), and Yo Frankie, also known as Project Apricot (an open game in collaboration with CrystalSpace that reused some of the assets created during Project Peach). Big Buck Bunny later made its way to the Nintendo 3DS's Nintendo Video service between 2012 and 2013.


Big Buck Bunny (Open Movie Project: Peach) On October 1, 2007, a new team started working on a second open project, "Peach", for the production of the short movie Big Buck Bunny. This time, however, the creative concept was totally different. Instead of the deep and mystical style of Elephants Dream, things are more "funny and furry" according to the official site.[48] The movie had its premiere on April 10, 2008.

Yo Frankie! (Open Game Project: Apricot) "Apricot" is a project for production of a game based on the universe and characters of the Peach movie (Big Buck Bunny) using free software. The game is titled Yo Frankie. The project started February 1, 2008, and development was completed at the end of July 2008. A finalized product was expected at the end of August; however, the release was delayed. The game was released on December 9, 2008, under either the GNU GPL or LGPL, with all content being licensed under Creative Commons Attribution 3.0.[]

Sintel (Open Movie Project: Durian) The Blender Foundation's Project Durian[49] (in keeping with the tradition of fruits as code names) was this time chosen to make a fantasy action epic of about twelve minutes in length,[50] starring a female teenager and a young dragon as the main characters. The film premiered online on September 30, 2010.[51] A game based on Sintel was officially announced on Blenderartists.org on May 12, 2010.[52][53] Many of the new features integrated into Blender 2.5 and beyond were a direct result of Project Durian.

Tears of Steel (Open Movie Project: Mango)
On October 2, 2011, the fourth open movie project, codenamed "Mango", was announced by the Blender Foundation.[54][55] A team of artists was to be assembled through an open call for community participation. It is the first Blender open movie to use live action as well as CG.

Derek de Lint in a scene from Tears of Steel

Filming for Mango started on May 7, 2012, and the movie was released on September 26, 2012. As with the previous films, all footage, scenes and models were made available under a free-content-compliant Creative Commons license.[56][57] According to the film's press release, "The film's premise is about a group of warriors and scientists, who gather at the 'Oude Kerk' in Amsterdam to stage a crucial event from the past, in a desperate attempt to rescue the world from destructive robots."[58]

References
[2] http://www.blender.org
[10] New features currently in SVN (http://web.archive.org/web/20080512001842/http://www.blender.org/development/current-projects/changes-since-245/). Blender.org
[11] Using Blender with multiple monitors (http://www.blenderguru.com/quick-tip-use-blender-on-multiple-monitors/). Blenderguru.com. Retrieved on 2012-07-06.
[15] http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.63/BMesh
[16] Jeroen Bakker. The mystery of the blend. The blender file-format explained (http://www.atmind.nl/blender/mystery_ot_blend.html). Atmind.nl (2009-03-27). Retrieved on 2012-07-06.
[17] Blender SDNA 249. Internal SDNA structures (http://www.atmind.nl/blender/blender-sdna-249.html). Atmind.nl. Retrieved on 2012-07-06.
[18] Benoît Saint-Moulin. 3D softwares comparisons table (http://www.tdt3d.be/articles_viewer.php?art_id=99), TDT 3D, November 7, 2007
[19] The Big CG Survey 2010, Industry Perspective (http://cgenie.com/articles/1289-big-cg-survey-2010-industry-perspective.html), CGenie, 2010
[20] The Big CG Survey 2010, Initial Results (http://cgenie.com/articles/1158-cgenies-big-cg-survey-is-now-open-have-your-say.html), CGenie, 2010
[21] Scientific Visualization Lecture 7: Other Visualization software, Patrik Malm, Centre for Image Analysis, Uppsala University (http://www.it.uu.se/edu/course/homepage/vetvis/ht09/Handouts/Lecture07.pdf), p. 32
[22] Blender's 2.56 release log – "What to Expect" and "User Interface" details (http://www.blender.org/development/release-logs/blender-256-beta/). Blender.org. Retrieved on 2012-07-06.
[23] The Making of Sintel (http://web.archive.org/web/20110707063540/http://www.3dworldmag.com/2011/02/09/the-making-of-sintel/4/). 3DWorld magazine (2011-02-09)
[24] Blender Animation system refresh project (http://wiki.blender.org/index.php/BlenderDev/AnimationUpdate). Wiki.blender.org. Retrieved on 2012-07-06.
[25] Modifiers (http://wiki.blender.org/index.php/Blenderdev/Modifiers). Wiki.blender.org. Retrieved on 2012-07-06.
[26] New Particle options and Guides (http://www.blender.org/development/release-logs/blender-240/new-particle-options-and-guides/). Blender.org. Retrieved on 2012-07-06.
[27] GLSL Pixel and Vertex shaders (http://www.blender.org/development/release-logs/blender-241/glsl-pixel-and-vertex-shaders/). Blender.org. Retrieved on 2012-07-06.
[28] Subsurf UV Mapping (http://www.blender.org/development/release-logs/blender-241/subsurf-uv-mapping/). Blender.org. Retrieved on 2012-07-06.
[31] Blenderartists.org (http://www.blenderartists.org/forum/). Retrieved on 2012-07-06.
[32] Blenderart.org (http://blenderart.org/). Retrieved on 2012-07-06.
[33] http://www.blendernation.com/
[34] History (http://www.blender.org/blenderorg/blender-foundation/history/). blender.org (2002-10-13). Retrieved on 2012-07-06.
[35] Blender in TV Commercials (http://www.studiorola.com/news/blender-in-tv-commercials/). Studiorola.com (2009-09-26). Retrieved on 2012-07-06.
[36] Midstraeti Showreel on the Blender Foundation's official YouTube channel (http://www.youtube.com/watch?v=TWSAdAO6ynU). Youtube.com (2010-11-02). Retrieved on 2012-07-06.
[39] Russian Soda Commercial by ARt DDs (http://www.blendernation.com/2010/08/25/russian-soda-commercial-by-art-dds/). Blendernation.com (2010-08-25). Retrieved on 2012-07-06.
[40] Apoteksgruppen – ELW TV Commercial made with Blender (http://vimeo.com/21344454). Vimeo.com (2011-03-22). Retrieved on 2012-07-06.
[41] Testimonials.
[42] Anthonyzierhut.com (http://www.anthonyzierhut.com/). Retrieved on 2012-07-06.
[44] Digitalgraphics.be (http://www.digitalgraphics.be). Retrieved on 2012-07-06.
[46] 'The Secret of Kells' nominated for an Oscar! (http://web.archive.org/web/20100618021840/http://www.blendernation.com/‘the-secret-of-kells’-nominated-for-an-oscar/). Blendernation.com (2010-02-04).
[48] Peach.blender.org (http://peach.blender.org/). (2008-10-03). Retrieved on 2012-07-06.
[49] Durian.blender.org (http://durian.blender.org/). Retrieved on 2012-07-06.
[50] How long is the movie? (http://durian.blender.org/news/how-long-is-the-movie/). Durian.blender.org (2010-04-15). Retrieved on 2012-07-06.
[51] Sintel Official Premiere (http://durian.blender.org/news/sintel-official-premiere/). Durian.blender.org (2010-08-16). Retrieved on 2012-07-06.
[52] Sintel The Game announcement (http://blenderartists.org/forum/showthread.php?t=186893). Blenderartists.org. Retrieved on 2012-07-06.
[53] Sintel The Game website (http://sintelgame.org/). Sintelgame.org. Retrieved on 2012-07-06.

Further reading • Van Gumster, Jason (2009). Blender For Dummies. Indianapolis, Indiana: Wiley Publishing, Inc. p. 408. ISBN 978-0-470-40018-0. • "Blender 3D Design, Spring 2008" (http://ocw.tufts.edu/Course/57). Tufts OpenCourseWare. Tufts University. 2008. Retrieved July 23, 2011. • "Release Logs" (http://www.blender.org/development/release-logs/). Blender.org. Blender Foundation. Retrieved July 23, 2011.


External links
• Official website (http://www.blender.org/)
• Blender Artists Community (http://www.blenderartists.org/forum/)
• BlenderNation: Blender news site (http://www.blendernation.com/)
• Blender (http://www.dmoz.org/Computers/Software/Graphics/3D/Rendering_and_Modelling/Blender//) at the Open Directory Project
• BlenderArt Magazine: A bi-monthly Blender magazine for Blender learners (http://www.blenderart.org/)
• Blender NPR: Dedicated to Stylized and Non-Photorealistic Rendering (http://blendernpr.org)

Brazil R/S

Brazil Rendering System was a proprietary commercial rendering plugin for 3D Studio Max and Autodesk VIZ. Steve Blackmon and Scott Kirvan started developing Brazil R/S while working as the R&D team of Blur Studio, and formed the company SplutterFish to sell and market Brazil. It was capable of photorealistic rendering using fast ray tracing and global illumination. It was used by computer graphics artists to generate content for print, online content, broadcast and feature films. Some major examples are Star Wars Episode III: Revenge of the Sith,[1] Sin City,[2] Superman Returns[3] and The Incredibles.[4] SplutterFish was acquired by Caustic Graphics in 2008,[5] which was in turn acquired by Imagination Technologies in December 2010.[6] Imagination Technologies announced Brazil's end-of-life, effective May 14, 2012.[7]

References

External links
• SplutterFish (http://www.splutterfish.com/)


BRL-CAD

Original author(s): Mike Muuss
Developer(s): Army Research Laboratory
Initial release: 1984
Stable release: 7.22.0 / June 26, 2012
Operating system: Cross-platform
Type: CAD
License: BSD, LGPL
Website: www.brlcad.org [1]

BRL-CAD is a constructive solid geometry (CSG) solid modeling computer-aided design (CAD) system. It includes an interactive geometry editor, ray tracing support for graphics rendering and geometric analysis, computer network distributed framebuffer support, scripting, and image-processing and signal-processing tools. The entire package is distributed in source code and binary form. Although BRL-CAD can be used for a variety of engineering and graphics applications, the package's primary purpose continues to be the support of ballistic and electromagnetic analyses. In keeping with the Unix philosophy of developing independent tools to perform single, specific tasks and then linking them together, BRL-CAD is basically a collection of libraries, tools, and utilities that work together to create, ray trace, and interrogate geometry and manipulate files and data. In contrast to many other 3D modelling applications, BRL-CAD uses constructive solid geometry rather than boundary representation.[2] This means BRL-CAD can "study physical phenomena such as ballistic penetration and thermal, radiative, neutron, and other types of transport".[3] The BRL-CAD libraries are designed primarily for the geometric modeler who also wants to tinker with software and design custom tools. Each library is designed for a specific purpose: creating, editing, and ray tracing geometry, and image handling. The application side of BRL-CAD also offers a number of tools and utilities that are primarily concerned with geometric conversion, interrogation, image format conversion, and command-line-oriented image manipulation.
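The CSG approach can be illustrated with a toy sketch (plain Python, not BRL-CAD code): solids are represented as point-membership predicates, and boolean operations combine them, which is also how a CSG ray tracer classifies points along a ray as inside or outside the composite solid.

    # Toy CSG: a solid is a predicate answering "is this point inside?"
    def sphere(cx, cy, cz, r):
        return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

    def union(a, b):     return lambda x, y, z: a(x, y, z) or b(x, y, z)
    def intersect(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
    def subtract(a, b):  return lambda x, y, z: a(x, y, z) and not b(x, y, z)

    lens = intersect(sphere(0, 0, 0, 1), sphere(1.2, 0, 0, 1))
    print(lens(0.6, 0, 0))   # True: the point lies inside both spheres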


BRL-CAD data flow structure

History
In 1979, the U.S. Army Ballistic Research Laboratory (BRL) (now the United States Army Research Laboratory) expressed a need for tools that could assist with the computer simulation and engineering analysis of combat vehicle systems and environments. When no existing computer-aided design (CAD) package was found to be adequate for this purpose, BRL software developers – led by Mike Muuss – began assembling a suite of utilities capable of interactively displaying, editing, and interrogating geometric models. This suite became known as BRL-CAD. Development on BRL-CAD as a package subsequently began in 1983; the first public release was made in 1984. BRL-CAD became an open-source project in December 2004.

Lead developer Mike Muuss works on the XM-1 tank in BRL‑CAD at a PDP‑11/70 terminal, circa 1980.

The BRL-CAD source code repository is believed to be the oldest public version-controlled codebase in the world that's still under active development, dating back to 1983-12-16 00:10:31 UTC.[4]


References
[1] http://www.brlcad.org
[3] BRL-CAD overview on their wiki (http://brlcad.org/wiki/Overview#Why_CSG_Modeling.3F)

External links • Official website (http://brlcad.org) • Future ideas (http://brlcad.org/~sean/ideas.html) • BRL-CAD (http://sourceforge.net/projects/brlcad/) on SourceForge.net

Free support • #brlcad on irc.freenode.net (irc://irc.freenode.net/brlcad) • Mailing lists (http://sourceforge.net/mail/?group_id=105292) • Support requests (http://sourceforge.net/tracker/?group_id=105292&atid=640803)

Commercial support • SURVICE Engineering, Inc. (http://www.survice.com) at www.BRLCAD.com (http://www.brlcad.com)


Form-Z

form·Z
Developer(s): AutoDesSys
Stable release: 6.6 / February 2008
Operating system: Microsoft Windows, Mac OS X
Type: 3D computer graphics
License: Proprietary
Website: www.formz.com [1]

form·Z is a computer-aided design (CAD) tool developed by AutoDesSys for all design fields that deal with the articulation of 3D spaces and forms, and is used for 3D modeling, drafting, animation and rendering.

Overview
form·Z is a general-purpose [2] solid and surface modeler with an extensive set of 2D/3D form manipulating and sculpting capabilities. It is a design tool for architects, landscape architects, urban designers, engineers, animators, illustrators and movie makers, and industrial and interior designers, among other design areas. form·Z can be used on Windows as well as on Macintosh computers, and in addition to English it is also available in German, Italian, Spanish, French, Greek, Korean and Japanese.

Modeling
In general, form·Z allows design in 3D or in 2D, using numeric or interactive graphic input through either line or smooth-shaded (OpenGL) drawings, across drafting, modeling, rendering, and animation platforms. Key modeling features include Boolean solids to generate complex composite objects; the ability to create curved surfaces from a variety of splines, including NURBS and Bézier/Coons patches, and mechanical or organic forms using metaformz, nurbz, patches, subdivisions, displacements, or skinning; and specialty tools such as terrain models, Platonic solids, geodesic spheres, double-line wall objects, staircases, helixes, screws, and bolts. In addition, form·Z modeling supports methods for transforming and morphing 3D shapes, and allows both the production of animated visualizations of scenes and the capture of 3D shapes as they morph into other forms, introducing modeling methods that explore 3D forms beyond traditional means. Technical-output-oriented modeling allows users to refine a design with double-precision CAD accuracy to full structural detail, for 3D visualization and for the production of 2D construction drawings, 3D printing, rapid prototyping, and CNC milling, and it offers information management of bills of materials and spreadsheet support for construction documents.

Animation
form·Z offers a seamlessly integrated animation environment, where objects, lights, cameras, and surface styles (colors) can be animated and transformed over time. The animation features are object-centric and are applied as modeling operations which, in addition to supporting the production of animated visualizations, also support dynamic modeling and the creation of forms that go significantly beyond the repertoire of conventional modeling tools. This offers a powerful avenue for design exploration.


Rendering
RenderZone Plus provides photorealistic rendering with global illumination based on final gather (raytrace), ambient occlusion, and radiosity, for advanced simulation of lighting effects; because the illumination of a scene takes into account the accurate distribution of light in the environment, realistic results can be achieved quickly and with relatively little setup effort. Key rendering features include multiple light types (distant (sun), cone, point, projector, area, custom, line, environment, and atmospheric); the environment and atmospheric lights, which may be considered advanced light types, are especially optimized for global illumination. Both procedural and pre-captured textures are offered and can be mapped onto the surfaces of objects using six different mapping methods: flat, cubic, cylindrical, spherical, parametric, or UV coordinates. Decals can be attached on top of other surface styles to produce a variety of rendering effects, such as labels on objects, graffiti on walls, partially reflective surfaces, masked transparencies, and more. Shaders are used to render surfaces and other effects: a surface style is defined by up to four layers of shaders, which produce color, reflection, transparency, and bump effects, and these can be applied independently or correlated. Libraries with many predefined materials are included and can easily be extended and customized. Also available is a sketch rendering mode that produces non-photorealistic images, which appear as if drawn by manual rendering techniques such as oil painting, watercolor, or pencil hatching.

Third Party Rendering Plugins Maxwell Maxwell is a rendering engine developed by NextLimit based on the physics of real light. The Maxwell plugin allows for the assignment of Maxwell materials to form·Z objects. Once the materials are established, the plugin transfers the scene including objects, lights and the viewing parameters to Maxwell for rendering.

form•Z on the small and big screen
In addition to its widespread use in the architecture and 3D design worlds, form•Z and RenderZone Plus are also extensively used in Hollywood, in all production stages, in and behind the scenes (set design pre-production and construction, miniature model design, pre-vis animation, CG special effects and post-production, etc.). The following thread [3] mentions the practical use of form•Z in almost all blockbuster movies of the last decade, including successful TV productions. Additional movie references: Pirates of the Caribbean [4]; Victor Martinez (Solaris, Minority Report, Cat in the Hat, Transformers, etc.) [5]; Richard Reynolds (lecturer at AFI; Planet of the Apes, Mission to Mars, Pearl Harbor, etc.) [6]; Oliver Scholl (Time Machine, Independence Day, Mission to Mars, Stealth, Jumper, etc.) [7]; stage design: John Troxtel [8]


Literature Pierluigi Serraino: History of form Z, Birkhäuser, 2002, ISBN 3-7643-6563-3

External links • AutoDesSys, the publishers of form•Z [9] • form•Z User Gallery [10] • form.Z Videos [11] • form•Z User Forum [12]

Notes
[1] http://www.formz.com
[2] http://blog.novedge.com/2007/05/an_interview_wi_2.html
[3] http://www.formz.com/forum2/messages/16/40392.html?1262203691
[4] http://www.cgw.com/Publications/CGW/2007/Volume-30-Issue-7-July-2007-/Ship-Shape.aspx
[5] http://www.formz.com/gallery2/gallery.php?A=206&L=M
[6] http://www.formz.com/gallery2/gallery.php?A=231&L=R
[7] http://www.oliverscholl.com/portfolio/Selects.html#3
[8] http://www.formz.com/gallery2/gallery.php?A=255&L=T
[9] http://www.autodessys.com
[10] http://www.formz.com/gallery/user_page.php
[11] http://www.formz.com/products/formz/formzVideo_320.php?startMovie=formZ_intro_p1_320.flv
[12] http://www.formz.com/forum/discus41/discus.cgi?pg=topics

Holomatix Rendition

Developer(s): Holomatix Ltd
Stable release: 1.0.458 / 8 October 2010
Operating system: Windows, Mac OS X, Linux
Type: Raytracer

Holomatix Rendition was a raytracing renderer broadly compatible with mental ray. Its rendering method was similar to that of FPrime in that it displayed a continuously refined image until the final production-quality image was achieved. This differs from traditional rendering methods, where the rendered image is built up block by block. It was developed by Holomatix Ltd, based in London, UK. As of December 2011, the Rendition product has been retired and is no longer available or being updated; the product is no longer mentioned on the developer's web site. Its successor is SprayTrace.

Key application Features
• Realtime (or progressive) rendering engine
• Based on the mental ray shader and lighting model
• Supports 3rd-party shaders compiled for mental ray

Rendering Features
As it used the same shader and lighting model as mental ray, Rendition supported the same rendering and ray tracing features as mental ray, including:
• Final Gather
• Global Illumination (through Photon Mapping)
• Polygonal and Parametric Surfaces (NURBS, Subdivision)
• Displacement Mapping
• Motion Blur
• Lens Shaders

Supported platforms
• Autodesk Maya, up to and including 2011 SAP
• Autodesk 3ds Max, up to and including 2010
• SoftImage XSI, up to and including 2011, but not 2011 SP1

Supported Operating Systems
• Microsoft Windows on x86 and x64 architectures
• Apple Mac OS X on x86 architectures
• Linux on x86 and x64 architectures
The renderer came in both 32-bit and 64-bit versions.


External links
• Holomatix Rendition home page (Wayback Machine archived page – no longer served by Holomatix) [1]
• Interview discussing Rendition with the Holomatix CTO on 3d-test.com [2]
• SprayTrace home page (the successor to Rendition) [3]

References
[1] http://web.archive.org/web/20110712213838/http://www.holomatix.com/products/rendition/about/
[2] http://www.3d-test.com/interviews/holomatix_2.htm
[3] http://www.spraytrace.com/

Imagine

Imagine was a 3D modeling and ray tracing program, originally for the Amiga computer and later also for MS-DOS and Microsoft Windows. It was created by Impulse, Inc., and used the .iob extension for its objects. Imagine was a derivative of the software TurboSilver, which was also for the Amiga and written by Impulse. The Windows version of the program was abandoned when Impulse dropped out of the 3D software market, but the Amiga version is still maintained and sold by CAD Technologies [1]. The Windows and DOS versions have been made available in full, along with other freely distributed products such as Organica, at the fansite Imagine 3D [2], which also has a forum, gallery and downloads section.

External links
• Program for reading IOB files [3]
• Aminet Imagine traces [4]
• Imagine 3D fan site [2]

References
[1] http://www.imaginefa.com
[2] http://www.imagine3d.org
[3] http://www.pygott.demon.co.uk/prog.htm
[4] http://aminet.net/pix/imagi


Indigo Renderer

A photorealistic image rendered with Indigo.

Developer(s): Glare Technologies
Stable release: 3.4 (November 5, 2012) [1]
Operating system: Linux, Mac OS X and Microsoft Windows
Type: Rendering system
License: Proprietary commercial software
Website: www.indigorenderer.com [2]

A render demonstrating Indigo's realistic light simulation

Indigo Renderer is 3D rendering software that uses unbiased rendering technologies to create photorealistic images. To do so, Indigo uses equations that simulate the behaviour of light, with no approximations or shortcuts taken. By accurately simulating all the interactions of light, Indigo is capable of producing effects such as:
• Depth of field, as when a camera is focused on one object and the background is blurred
• Spectral effects, as when a beam of light goes through a prism and a rainbow of colours is produced
• Refraction, as when light enters a pool of water and the objects in the pool seem to be "bent"
• Reflections, from subtle reflections on a polished concrete floor to the pure reflection of a silvered mirror
• Caustics, as in light that has been focused through a magnifying glass and has made a pattern of brightness on a surface

Indigo uses methods such as Metropolis light transport (MLT), spectral light calculus, and a virtual camera model. Scene data is stored in XML or IGS format.
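As a toy example of the kind of physical relation such a renderer evaluates at every refractive boundary (illustrative Python, not Indigo's code), Snell's law n1·sin θ1 = n2·sin θ2 gives the bending of a light ray entering a medium:

    import math

    def refraction_angle(theta_in_deg, n1=1.0, n2=1.33):
        """Snell's law; returns None when total internal reflection occurs."""
        s = (n1 / n2) * math.sin(math.radians(theta_in_deg))
        if abs(s) > 1.0:
            return None
        return math.degrees(math.asin(s))

    print(refraction_angle(45.0))   # about 32.1 degrees, air into water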

Indigo features Monte Carlo path tracing, bidirectional path tracing and MLT on top of bidirectional path tracing, distributed render capabilities, and progressive rendering (the image gradually becomes less noisy as rendering progresses). Indigo also supports subsurface scattering and has its own image format (.igi). Indigo was originally released as freeware until the 2.0 release, when it became a commercial product.

References
[1] http://en.wikipedia.org/w/index.php?title=Template:Latest_stable_software_release/Indigo_Renderer&action=edit
[2] http://www.indigorenderer.com

External links • Official website (http://www.indigorenderer.com) • Glare Technologies website (http://www.glaretechnologies.com/)


Kerkythea

Kerkythea is capable of rendering photorealistic caustics and global illumination.

Developer(s): Ioannis Pantazopoulos
Stable release: Kerkythea 2008 Echo / October 18, 2008
Operating system: Microsoft Windows, Linux, Mac OS X
Type: 3D Graphics Software
License: Freeware

Kerkythea is a standalone rendering system that supports raytracing and Metropolis light transport, uses physically accurate materials and lighting, and is distributed as freeware. Currently, the program can be integrated with any software that can export files in obj and 3ds formats, including 3ds Max, Blender, LightWave 3D, SketchUp, Silo and Wings3D. The developer ceased developing Kerkythea to focus on the development of a commercial renderer named Thea Render.[citation needed]

History
Kerkythea started development in 2004 and released its first version in April 2005. Initially it was only compatible with Microsoft Windows, but an updated release in October 2005 made it Linux-compatible. It is now also available for Mac OS X. In May 2009 it was announced that the development team had started a new commercial renderer, although Kerkythea would continue to be updated and would remain free.

Exporters
There are six official exporters for Kerkythea:
• Blender: Blend2KT – exporter to XML format
• 3D Studio Max: 3dsMax2KT – 3dsMax exporter
• GMax: GMax2KT – GMax exporter
• SketchUp: SU2KT – SketchUp exporter
• SketchUp: SU2KT Light Components


Features

Supported 3D File Formats
• 3DS Format
• OBJ Format
• XML (internal) Format
• SIA (Silo) Format (partially supported)

Supported Image Formats
• All formats supported by the FreeImage library (JPEG, BMP, PNG, TGA and HDR included)

Supported Materials
• Matte
• Perfect Reflections/Refractions
• Blurry Reflections/Refractions
• Translucency (SSS)
• Dielectric Material
• Thin Glass Material
• Phong Shading Material
• Ward Anisotropic Material
• Anisotropic Ashikhmin Material
• Lafortune Material
• Layered Material [additive combination of the above with use of alpha maps]

Supported Shapes
• Triangulated Meshes
• Sphere
• Planes

Supported Lights
• Omni Light
• Spot Light
• Projector Light
• Point Diffuse
• Area Diffuse
• Point Light with Spherical Soft Shadows
• Ambient Lighting
• Sky Lighting [Physical Sky, SkySphere Bitmap (Normal or HDRI)]

Supported Textures
• Constant Colors
• Bitmaps (Normal and HDRI)
• Procedurals [Perlin Noise, Marble, Wood, Windy, Checker, Wireframe, Normal Ramp, Fresnel Ramp]
• Any Weighted or Multiplicative Combination of the Above

Supported Features
• Bump Mapping
• Normal Mapping
• Clip Mapping
• Bevel Mapping (a Kerkythea-specific feature)
• Edge Outlining
• Depth of Field
• Fog
• Isotropic Volume Scattering
• Faked Caustics
• Faked Translucency
• Dispersion
• Anti-aliasing [Texture Filtering, Edge Antialiasing]
• Selection Rendering
• Surface and Material Instancing

Supported Camera Types
• Planar Projection [Pinhole, Thin Lens]
• Cylindrical Pinhole
• Spherical Pinhole

Supported Rendering Techniques
• Classic Ray Tracing
• Path Tracing (Kajiya)
• Bidirectional Path Tracing (Veach & Guibas)
• Metropolis Light Transport (Kelemen, Kalos et al.)
• Photon Mapping (Jensen) [mesh maps, photon maps, final gathering, irradiance caching, caustics]
• Diffuse Interreflection (Ward)
• Depth Rendering
• Mask Rendering
• Clay Rendering

Application Environment
• OpenGL Real-Time Viewer [basic staging capabilities]
• Integrated Material Editor
• Easy Rendering Customization
• Sun/Sky Customization
• Script System
• Command Line Mode

External links
• Official website [1]
• Kerkythea's Forum [2]
• Kerkythea's Wiki [3]

References
[1] http://www.kerkythea.net/joomla/
[2] http://www.kerkythea.net/phpBB2/index.php
[3] http://wikihost.org/wikis/kerkythea/wiki/start


LightWave 3D

Developer(s): NewTek, Inc.
Stable release: 11.5 / January 31, 2013
Operating system: Amiga, IRIX, Mac OS X, Windows
Type: 3D computer graphics
License: Proprietary
Website: http://www.lightwave3d.com

LightWave 3D is a 3D computer graphics software program developed by NewTek. The latest release of LightWave runs on Windows and Mac OS X.

Overview
LightWave is a software package used for rendering 3D images, both animated and static. It includes a rendering engine that supports such advanced features as realistic reflection and refraction, radiosity, and caustics. The 3D modeling component supports both polygon modeling and subdivision surfaces. The animation component has features such as forward and inverse kinematics for character animation, particle systems and dynamics. Programmers can expand LightWave's capabilities using an included SDK, which offers LScript scripting (a proprietary scripting language) and common C language interfaces.

History
In 1988, Allen Hastings created a rendering and animation program called Videoscape, and his friend Stuart Ferguson created a complementary 3D modeling program called Modeler, both sold by Aegis Software. NewTek planned to incorporate Videoscape and Modeler into its video editing suite, Video Toaster. Originally intended to be called "NewTek 3D Animation System for the Amiga", Hastings later came up with the name "LightWave 3D", inspired by two contemporary high-end 3D packages: Intelligent Light and Wavefront. In 1990, the Video Toaster suite was released, incorporating LightWave 3D and running on the Commodore Amiga computer. LightWave 3D has been available as a standalone application since 1994, and version 9.3 runs on both Mac OS X and Windows platforms. Starting with the release of version 9.3, the Mac OS X version has been updated to be a Universal Binary. The last known standalone revision for the Amiga was LightWave 5.0, released in 1995. Shortly after the release of the first PC version, NewTek discontinued the Amiga version, citing the platform's uncertain future. LightWave was used to create special effects for the Babylon 5, Star Trek: Voyager, Space: Above and Beyond and seaQuest DSV science fiction television series; the program was also utilized in the production of Titanic as well as the recent Battlestar Galactica TV series, Sin City, Star Trek, 300 and the Star Wars movies. The short film 405 was produced by two artists from their homes using LightWave. In the Finnish Star Trek parody Star Wreck: In the Pirkinning, most of the visual effects were done in LightWave by Finnish filmmaker Samuli Torssonen, who produced the VFX work for the feature film Iron Sky. The film Jimmy Neutron: Boy Genius was made entirely in LightWave 6 and messiah:Studio. In 2007, Flatland the Film by Ladd Ehlinger Jr. made its debut as the first feature film to be 3D-animated entirely by one person, without the typical legion of animators; it was animated entirely in LightWave 3D 7.5 and 8.0. In its ninth version, the market for LightWave ranges from hobbyists to high-end deployment in video games, television and cinema. NewTek shipped a 64-bit version of LightWave 3D as part of the fifth free update of LightWave 3D 8, and it was featured in a keynote speech by Bill Gates at WinHEC 2005. On February 4, 2009, NewTek announced "LightWave CORE", its next-generation 3D application, via a streamed live presentation to 3D artists around the world. It featured a highly customizable and modernized user interface, Python scripting integration that offered realtime code and view previews, an updated file format based on the industry-standard Collada format, substantial revisions to its modeling technologies and a realtime iterative viewport renderer. It was also to be the first LightWave product available on the Linux operating system. CORE was eventually cancelled as a standalone product, and NewTek announced that the CORE advancements would become part of the ongoing LightWave platform, starting with LightWave 10. On February 20, 2012, NewTek began shipping LightWave 11, the latest version of its professional 3D modeling, animation, and rendering software.[1] LightWave 11 incorporates many new features, such as instancing, flocking and fracturing tools, flexible Bullet dynamics, Pixologic ZBrush support, and more. LightWave 11 is used for all genres of 3D content creation, from film and broadcast visual effects production to architectural visualization and game design.[2][3][4]

Modeler and Layout
LightWave is composed of two separate programs: Modeler and Layout. Each program is specifically designed to provide a dedicated workspace for specific tasks. When the two programs are running simultaneously, a third process called the Hub can be used to automatically synchronize data. Layout contains the animation system and the renderer, which provides the user with several options including ray tracing controls, multithreading, global illumination, and output parameters. Modeler, as the name implies, includes all of the modeling features used to create the 3D models that are used in the animation and rendering component. This differs from most 3D computer graphics packages, which normally integrate the renderer and the modeler. A long-standing debate in the LightWave user community has been whether or not to integrate Modeler and Layout into a single program; in response, NewTek has begun an integration process by including several basic modeling tools in Layout. There is also a command line-based network rendering engine named Screamernet, which can be used to distribute rendering tasks across a large number of networked computers. This reduces the overall time it takes to render a single project, because the computers each render part of the whole project in parallel. Screamernet includes all the features of the rendering engine integrated into Layout, but without an interactive user interface.
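Screamernet's own control protocol is not documented here, but the underlying idea can be sketched in plain Python (a hypothetical helper, not Screamernet code): a controller divides the frame range among nodes so that each renders a share in parallel.

    def split_frames(first, last, nodes):
        """Round-robin assignment of a frame range to render nodes."""
        frames = range(first, last + 1)
        return [list(frames[i::nodes]) for i in range(nodes)]

    print(split_frames(1, 10, 3))   # [[1, 4, 7, 10], [2, 5, 8], [3, 6, 9]]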

Features
Dynamics
LightWave includes the standard dynamics types: hard body, soft body and cloth. Hard-body dynamics let the user simulate effects like rockslides, building demolitions and sand, using realistic forces like gravity and collisions. Soft-body dynamics can simulate jelly or jiggling fat on overweight characters, and can also be applied to characters for a dynamic hair effect. Cloth dynamics can be applied to clothing for characters, and can also be used for hair to simulate more realistic hair movement. The CORE subsystem of LightWave 11 includes a new rigid-body dynamics engine called Bullet.


Hypervoxels
Hypervoxels are a means to render different particle animation effects. Different modes of operation can generate appearances that mimic:
• Blobby metaballs, for things like water or mercury, including reflection or refraction surface settings
• Sprites, which are able to reproduce effects like fire or flocking birds
• Volume shading, for simulating clouds or fog-type effects
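The "blobby metaball" mode follows a classic formulation that can be sketched as follows (illustrative Python, not Hypervoxels internals): each particle contributes a falloff field, and the surface is the set of points where the summed field crosses a threshold, so nearby blobs merge smoothly.

    def field_strength(p, balls):
        """Summed inverse-square falloff of all metaballs at point p."""
        return sum(r * r / max(1e-9, (p[0]-x)**2 + (p[1]-y)**2 + (p[2]-z)**2)
                   for (x, y, z), r in balls)

    balls = [((0.0, 0.0, 0.0), 0.6), ((1.0, 0.0, 0.0), 0.6)]
    # Midway between the two balls the fields add up and merge into one blob:
    print(field_strength((0.5, 0.0, 0.0), balls) >= 1.0)   # True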

Material shaders
LightWave comes with a nodal texture editor that includes a collection of special-purpose material shaders. Some of the types of surface for which these shaders have been optimized include:
• general-purpose subsurface scattering materials, for materials like wax or plastics
• realistic skin, including subsurface scattering and multiple skin layers
• metallic, reflective materials using energy conservation algorithms
• transparent, refractive materials, including accurate total internal reflection algorithms

Nodes
With LightWave 9, NewTek added node editors to the Surface Editor and Mesh Displacement parts of LightWave. The Node SDK is also released with the software, so any developer can add their own node editors via plug-ins, and a few have done so, notably Denis Pontonnier, who created free-to-download node editors and many other utility nodes for all of the SDK classes in LightWave. Users can therefore use nodes for modifying images and renders, procedural textures, modifying the shape of hypervoxels, controlling motions of objects, driving animation channels, and for using things like particles and other meshes to drive these functions. This has greatly enhanced the abilities of standalone LightWave. The node areas of LightWave continue to expand, with volumetric lights now controllable with nodes.

LScript LScript is one of LightWave's scripting languages. It provides a comprehensive set of prebuilt functions you can use when scripting how LightWave behaves.

Python With LW 11, Newtek added Python support as an option for custom scripting.

Bullet Physics
From LightWave 11, NewTek has added Bullet support.

Lightwave SDK The SDK (Software Development Kit) provides a set of C classes for writing native plugins for Lightwave.


Film and television programmes using LightWave
A more comprehensive list can be found at the LightWave website.[5] Some notable highlights are:
• Jurassic Park (1993 Visual Effects Academy Award)
• Babylon 5 (1993 Visual Effects Emmy Award)
• seaQuest DSV (1993–1996)
• Battlestar Galactica (2007, 2008 Visual Effects Emmy Award)
• Frank Herbert's Dune (2001 Visual Effects Emmy Award)
• Frank Herbert's Children of Dune (2003 Visual Effects Emmy Award)
• Jimmy Neutron: Boy Genius (2002 Academy Award nominee)
• The Adventures of Jimmy Neutron: Boy Genius (spinoff TV series of the film Jimmy Neutron: Boy Genius, 2002–2006)
• Lost (2005 Visual Effects Emmy Award; 2004–2010)
• Stargate SG-1 (Emmy Award nominee; 1997–2007)
• Star Trek: Deep Space Nine [6] (1993–1999)
• Star Trek: Enterprise (Emmy Award nominee) [7] (2001–2005)
• Star Trek: Voyager (1999, 2001 Visual Effects Emmy Award) [8]
• Titanic (1997 Visual Effects Academy Award)
• The X-Files (2000 Visual Effects Emmy Award)
• Pan's Labyrinth [9] (2006)
• Avatar (2010 Visual Effects and Art Direction Academy Awards)[10][11]
• Invader Zim (2001)
• Finding Nemo (2003)
• 24 (2001–2010)
• 300[12] (2007)
• Iron Man[13] (2008)
• The Outer Limits (1995–2002)
• Animal Armageddon (2009 documentary TV series created 100% in LightWave 3D)
• Ni Hao, Kai-Lan[14] (2008–present)
• V[15] (2009–2011)
• Iron Sky (2006–2012)[16]
• The Walking Dead (TV series) (2010–present)

Licensing Prior to being made available as a stand-alone product in 1994, LightWave required the presence of a Video Toaster in an Amiga to run. Until version 11.0.3[17][18], LightWave licenses were bound to a hardware dongle (e.g. Safenet USB or legacy parallel port models). Without a dongle LightWave would operate in "Discovery Mode" which severely restricts functionality. One copy of LightWave supports distributed rendering on up to 999 nodes.

References
[1] NewTek Ships LightWave™ 11 Software (http://www.newtek.com/newtek-now/press/25-lightwave-news/465.html)
[2] LightWave 11 - Features List (https://www.lightwave3d.com/new_features/)
[3] LightWave official gallery (https://www.lightwave3d.com/community/gallery/)
[5] "Lightwave projects list" (http://www.newtek.com/lightwave/projects.php), Newtek.com. Retrieved on 2008-07-18.
[6] http://www.ex-astris-scientia.org/database/cgi.htm
[7] http://www.ex-astris-scientia.org/database/cgi.htm
[8] http://www.ex-astris-scientia.org/database/cgi.htm
[9] http://www.fxguide.com/article395.html
[10] http://www.twin-pixels.com/software-used-making-of-avatar/
[11] http://www.cgtantra.com/index.php?option=com_content&task=view&id=394&Itemid=35
[12] http://www.newtek.com/lightwave/featured_300.php
[13] http://news.creativecow.net/story/860343
[14] http://www.newtek.com/lightwave/newsletter.php?pageNum_monthlynews=1&totalRows_monthlynews=16#nick
[15] http://www.fxguide.com/article583.html
[16] Iron Sky Signal E21 - Creating the Visual Effects (http://www.youtube.com/watch?v=czpYwqV22p4&feature=player_embedded). Energiaproductions YouTube channel. At 3:45.
[17] http://forums.newtek.com/archive/index.php/t-129707.html
[18] https://www.lightwave3d.com/buy/

External links
• LightWave's official site (http://www.lightwave3d.com/)
• NewTek's official site (http://www.newtek.com/)
• LightWave 3D Community (http://forums.newtek.com/) – Active User Community.
• Liberty3D: 3D community focused on LightWave 3D and complementary tools (http://www.liberty3d.com/forums/) – Active User Community.


LuxRender

A screenshot of LuxRender 0.7 rendering a Desert Eagle

Developer(s): Jean-Philippe Grimaldi, Jean-Francois Romang, David Bucciarelli, Ricardo Lipas Augusto, Asbjorn Heid and others
Stable release: 1.2 [1] / February 24, 2013
Operating system: Cross-platform
Type: 3D computer graphics
License: GPLv3
Website: www.luxrender.net [2]

LuxRender is a free and open source software rendering system for physically correct image synthesis. The program runs on GNU/Linux, Mac OS X, and Microsoft Windows.

Overview
LuxRender is only a renderer; it relies on other programs (3D modeling applications) to create the scenes to render, including the models, materials, lights and cameras. This content can then be exported from the application it was created in for rendering with LuxRender. Fully functional exporters are available for Blender, DAZ Studio and Autodesk 3ds Max; partially functional ones exist for Cinema 4D, Maya, SketchUp and XSI.[3] After opening an exported file, LuxRender does nothing but render the scene, although various post-processing settings can be tweaked from the program's graphical interface.[4]

History
LuxRender is based on PBRT [5], a physically based ray tracing program. Although very capable and well structured, PBRT focuses on academic use and is not easily usable by digital artists. As PBRT is licensed under the GPL, it was possible to start a new program based on PBRT's source code. With the blessing of the original authors, a small group of programmers took this step in September 2007. The new program was named LuxRender and was to focus on artistic use. Since its initial stage, the program has attracted the interest of various programmers around the world. On 24 June 2008, the first official release was announced;[6] this was the first release considered usable by the general public.

Features
The main features of LuxRender as of version 0.8 include:

Rendering of a school interior with LuxRender. Modelled in Blender.

• Biased and unbiased rendering: users can choose between physical accuracy (unbiased) and speed (biased).
• Full spectral rendering: instead of the RGB colour spectrum, full spectra are used for internal calculations.
• Hierarchical procedural and image-based texture system: procedural and image-based textures can be mixed in various ways, making it possible to create complex materials.
• Displacement mapping and subdivision: based on procedural or image textures, object surfaces can be transformed.
• Network and co-operative rendering: rendering time can be reduced by combining the processing power of multiple computers. IPv6 is also supported.
• Perspective (including shift lens), orthographic and environment cameras
• HDR output: render output can be saved in various file formats, including .png, .tga and .exr.
• Instances: instancing significantly saves system resources, in particular memory consumption, by reusing mesh data in duplicated objects.
• Built-in post-processing: while rendering, one can add post-processed effects like bloom, glare, chromatic aberration and vignetting.
• Motion blur, depth of field and lens effects: true motion blur, both for the camera and individual objects, and physically accurate lens effects, including depth of field.
• Light groups: by using light groups, one can output various light situations from a single rendering, or adjust the balance between light sources in real time (see the sketch after this list).
• Tone mapping
• Image denoising
• Fleximage (virtual film): allows pausing and resuming renders. The current state of the render can be written to a file, so that any system can continue the render at a later moment.
• GPU acceleration for path tracing when sampling one light at a time.
• Film response curves to emulate the colour response of traditional camera films (some curves are for black-and-white films too).
• Volumetric rendering using homogeneous volumes, by defining an interior and an exterior volume.
• Subsurface scattering
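The light-group feature rests on a simple property of light transport: contributions from different light sources add linearly, so the renderer can keep one buffer per group and recombine them afterwards. A toy sketch follows (illustrative NumPy, not LuxRender code; the buffers here are random stand-ins for per-group renders):

    import numpy as np

    key = np.random.rand(4, 4, 3)    # stand-in for the "key" light group's buffer
    fill = np.random.rand(4, 4, 3)   # stand-in for the "fill" light group's buffer

    def combine(gains, buffers):
        """Weighted sum of per-light-group buffers; changing gains needs no re-render."""
        return sum(g * b for g, b in zip(gains, buffers))

    image = combine([1.0, 0.4], [key, fill])   # dim the fill light after the fact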

Planned/Implemented Features
The new features of LuxRender for version 1.0 include:
• Improved speeds for hybrid path GPU rendering: the Path GPU rendering engine has various speed and stability improvements.
• New hybrid bidirectional rendering: a GPU-accelerated version of the LuxRender bidirectional path tracer is in development. However, it does not yet support all of LuxRender's traditional bidirectional path tracing features.
• Networking: improvements to LuxRender's networking mode.
• A new layered material: allows the layering of multiple materials on one object.
• A new glossy coating material: allows a glossy material to be placed on an object, coating the material underneath.
• New features and developments are posted on the LuxRender Development Blog [7]

References
[1] http://www.luxrender.net/forum/viewtopic.php?f=12&t=9653
[2] http://www.luxrender.net/
[3] http://www.luxrender.net/wiki/Exporter_Status
[4] http://www.luxrender.net/wiki/index.php?title=Luxrender_Manual
[5] http://www.pbrt.org
[7] http://www.luxrender.net/en_GB/development_blog

External links • Official website (http://www.luxrender.net/) • LuxRender coverage on cgindia.org (http://www.cgindia.org/2007/10/ introducing-luxrender-free-unbiased-3d.html) • Examples of Luxrender output (http://www.luxrender.net/forum/gallery2.php)


Manta Interactive Ray Tracer


Manta is a highly portable interactive ray tracing (graphics) environment designed to be used on both workstations and supercomputers, and is distributed under the MIT license. It was originally developed at the University of Utah.

References • James Bigler, Abraham Stephens, and Steven G. Parker. (2006) "Design for Parallel Interactive Ray Tracing Systems". In proceedings of the IEEE Symposium on Interactive Ray Tracing. (http://ieeexplore.ieee.org/xpl/ freeabs_all.jsp?arnumber=4061561) • Abraham Stephens, Solomon Boulos, James Bigler, Ingo Wald, and Steven G. Parker. (2006) "An Application of Scalable Massive Model Interaction using Shared Memory Systems". In proceedings of the Eurographics Symposium on Parallel Graphics and Visualization. (http://www.cs.utah.edu/~boulos/papers/manta-egpgv06. pdf)

External links • Official page of the Manta interactive ray tracer (http://mantawiki.sci.utah.edu/manta/index.php/Main_Page)

Maxwell Render

Sample Maxwell Render image output by Benjamin Brosdaux

Developer(s): Next Limit Technologies
Initial release: 26 April 2006
Stable release: 2.7 / 19 June 2012
Operating system: Linux (on x86-64), Mac OS X (on IA-32 and PowerPC), Microsoft Windows (on IA-32 and x86-64)
Available in: English
Type: Raytracer
License: Proprietary commercial software
Website: www.maxwellrender.com [1]


Image rendered in Maxwell Render

Maxwell Render is a software package that aids in the production of photorealistic images from computer 3D model data; a 3D renderer. It was introduced as an early alpha in December 2004 (after two years of internal development) and utilized a global illumination (GI) algorithm based on a variation of Metropolis light transport.

Overview Maxwell Render was among the first widely available implementations of unbiased rendering and its G.I. algorithm was linked directly to a physical camera paradigm to provide a simplified rendering experience wherein the user was not required to adjust arbitrary illumination parameter settings, as was typical of scanline renderers and raytracers of the time.[citation needed] Maxwell Render was developed by Madrid-based Next Limit Technologies, which was founded in 1998 by engineers Victor Gonzalez and Ignacio Vargas.

Software components
• Maxwell Render (core render engine)
• Maxwell Studio
• Material Editor
• Maxwell FIRE (Fast Interactive Rendering)
• Network components (Maxwell Monitor, Manager and Node)
• Plugins (modeling software specific)

History
The first alpha was released on 4 December 2004; the first release candidate on 2 December 2005.[2] The latter also marked the first appearance of Maxwell Studio and the Material Editor. Further releases were:
• Version 1.0 (26 April 2006)
• Version 1.1 (4 July 2006)
• Version 1.5 (23 May 2007)
• Version 1.6 (22 November 2007)
• Version 1.7 (19 June 2008)
• Version 2.0 (23 September 2009)
• Version 2.1 (18 July 2010)
• Version 2.5 (13 December 2010)
• Version 2.6 (2 November 2011)
• Version 2.7 (19 June 2012)

The current software version is 2.7.


Integration with other software
• 3ds Max (with Maxwell FIRE)
• ArchiCAD
• Autodesk VIZ
• Bonzai 3D
• Cinema 4D (with Maxwell FIRE)
• Form-Z
• Houdini (with Maxwell FIRE)
• LightWave
• Maya
• MicroStation
• Modo (with Maxwell FIRE)
• Rhino3D
• SketchUp
• Softimage XSI (with Maxwell FIRE)
• SolidWorks
• After Effects
• Nuke
• Photoshop

References
[1] http://www.maxwellrender.com
[2] Next Limit, Maxwell Render announcements (http://www.maxwellrender.com/forum/viewforum.php?f=1)

External links
• Official website (http://www.maxwellrender.com)
• Next Limit Technologies (http://www.nextlimit.com) (parent company)
• Official forum (http://www.maxwellrender.com/forum/index.php)
• Maxwell Render resources (http://resources.maxwellrender.com/)
• Maxwell Render tutorials (http://think.maxwellrender.com/)
• Maxwell Render on Facebook (http://www.facebook.com/pages/Maxwell-Render-The-Light-Simulator/66133283904)
• Maxwell Render on Twitter (http://twitter.com/MaxwellRender)


Mental Ray
Original author(s): Mental Images
Developer(s): Nvidia
Stable release: 3.11
Preview release: 3.11
Type: Renderer
License: Proprietary
Website: www.nvidia-arc.com/products/nvidia-mental-ray/ [1]

Mental Ray (stylized as mental ray) is a production-quality rendering application developed by Mental Images (Berlin, Germany). Mental Images was bought in December 2007 by NVIDIA. As the name implies, it supports ray tracing to generate images. Mental Ray has been used in many feature films, including Hulk, The Matrix Reloaded & Revolutions, Star Wars Episode II: Attack of the Clones, The Day After Tomorrow and Poseidon.[2][3]

Features
The primary feature of Mental Ray is the achievement of high performance through parallelism, on both multiprocessor machines and across render farms. The software uses acceleration techniques such as scanline rendering for primary visible-surface determination and binary space partitioning for secondary rays. It also supports caustics and physically correct simulation of global illumination employing photon maps. Any combination of diffuse, glossy (soft or scattered), and specular reflection and transmission can be simulated.

An image rendered using Mental Ray which demonstrates global illumination, photon maps, depth of field, ambient occlusion, glossy reflections, soft shadows and bloom.

Mental Ray was designed to be integrated into a third-party application using an API, or to be used as a standalone program using the .mi scene file format for batch-mode rendering. Many programs integrate this renderer, such as Autodesk Maya, 3D Studio Max, AutoCAD, Cinema 4D, Revit, Softimage|XSI, Side Effects Software's Houdini, SolidWorks and Dassault Systèmes' CATIA. Most of these software front-ends provide their own library of custom shaders (described below); however, assuming these shaders are available to Mental Ray, any .mi file can be rendered, regardless of the software that generated it.

Mental Ray is fully programmable, supporting linked subroutines, called shaders, written in C or C++. This feature can be used to create geometric elements at render time, procedural textures, bump and displacement maps, atmosphere and volume effects, environments, camera lenses, and light sources.
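Since the text names the C interface only in outline, here is a minimal sketch of what such a linked shader can look like, based on the publicly documented mental ray C API. The shader name, its parameter block, and the matching .mi declaration (not shown) are illustrative, not part of any shipped library.

#include "shader.h"   /* mental ray shader API header */

/* Illustrative parameter block; a matching declaration in a .mi file
   (not shown) tells mental ray its layout. */
typedef struct mytint {
    miColor tint;
} mytint_t;

DLLEXPORT int mytint_version(void) { return 1; }

/* Called for each shading sample; the shader writes its output
   to *result. */
DLLEXPORT miBoolean mytint(miColor *result, miState *state, mytint_t *paras)
{
    /* mi_eval_color resolves the parameter, following any shader-graph
       connections; this is the mechanism on which Phenomena are built. */
    miColor *tint = mi_eval_color(&paras->tint);

    (void)state;  /* unused in this trivial example */
    result->r = tint->r;
    result->g = tint->g;
    result->b = tint->b;
    result->a = 1.0f;
    return miTRUE;
}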


Supported geometric primitives include polygons, subdivision surfaces, and trimmed free-form surfaces such as NURBS, Bézier, and Taylor monomial surfaces. Phenomena consist of one or more shader trees (DAGs). A phenomenon looks like a regular shader to the user, and in fact may be a regular shader, but generally it contains a link to a shader DAG, which may include the introduction or modification of geometry, the introduction of lenses, environments, and compile options. The idea of a Phenomenon is to package elements and hide complexity. In 2003, Mental Images was awarded an Academy Award for their contributions to the Mental Ray rendering software for motion pictures.

An image of a diamond rendered using Mental Ray in CATIA V5R19 Photo Studio.

Notes
[1] http://www.nvidia-arc.com/products/nvidia-mental-ray/
[2] "mental images Software Developers Receive Academy Award" (http://www.mentalimages.com/news/detail/article//mental-image-22.html). Mental Images press release, April 23, 2011.
[3] "Large as Life: Industrial Light & Magic Looks to mental ray to Create "Poseidon"" (http://www.mentalimages.com/news/detail/article//large-as-lif.html). Mental Images press release, April 23, 2011.

Further reading • Driemeyer, Thomas: Rendering with Mental Ray, SpringerWienNewYork, ISBN 3-211-22875-6 • Driemeyer, Thomas: Programming mental ray, SpringerWienNewYork, ISBN 3-211-24484-0 • Kopra, Andy: Writing mental ray Shaders: A perceptual introduction, SpringerWienNewYork, ISBN 978-3-211-48964-2

External links
• Mental Ray home page (http://www.nvidia-arc.com/products/nvidia-mental-ray.html)
• Mental Ray user wiki (http://www.mymentalray.com/wiki/index.php/Mental_ray_cookbook)
• Mental Ray user blog (http://elementalray.wordpress.com/)
• mental ray and iray for CINEMA 4D (http://www.m4d.info) by at² GmbH
• MetaSL Material Library (http://materials.mentalimages.com/)


modo
Developer(s): Luxology, LLC
Stable release: 701 [1] / March 25, 2013
Operating system: Windows, Linux, Mac OS X
Type: 3D computer graphics
License: Proprietary
Website: www.luxology.com/modo/ [2]

modo is a polygon and subdivision surface modeling, sculpting, 3D painting, animation and rendering package developed by Luxology, LLC. The program incorporates features such as n-gons and edge weighting, and runs on Microsoft Windows, Linux and Mac OS X platforms.

History
modo was created by the same core group of software engineers that previously created the pioneering 3D application LightWave 3D, originally developed on the Amiga platform and bundled with the Amiga-based Video Toaster workstations that were popular in television studios in the late 1980s and early 1990s. Luxology is based in Mountain View, California.

In 2001, senior management at NewTek (makers of LightWave) and their key LightWave engineers disagreed over a proposed complete rewrite of LightWave's workflow and technology.[3] NewTek's Vice President of 3D Development, Brad Peebler, eventually left NewTek to form Luxology, and was joined by Allen Hastings and Stuart Ferguson (the lead developers of LightWave), along with some of the LightWave programming team.

After more than three years of development work, modo was demonstrated at SIGGRAPH 2004 and released in September of the same year. In April 2005, the high-end visual effects studio Digital Domain integrated modo into their production pipeline. Other studios to adopt modo include Pixar, Industrial Light & Magic, Zoic Studios, id Software, Eden FX, Studio ArtFX, The Embassy Visual Effects, Naked Sky Entertainment and Spinoff Studios.

At SIGGRAPH 2005, modo 201 was pre-announced. It promised many new features, including the ability to paint in 3D (à la ZBrush or BodyPaint 3D), multi-layer texture blending as seen in LightWave, and, most significantly, a rendering solution offering physically based shading, true lens distortion, anisotropic reflection blurring and built-in polygon instancing. modo 201 was released on May 24, 2006, and was the winner of the Apple Design Award for Best Use of Mac OS X Graphics in 2006. In October 2006, modo also won "Best 3D/Animation Software" from MacUser magazine. In January 2007, modo won the Game Developer Frontline Award for "Best Art Tool".

modo 202 was released on August 1, 2006. It offered faster rendering speed and several new tools, including the ability to add thickness to geometry. A 30-day full-function trial version of the software was made available. modo was used in the production of the feature films Stealth, The Ant Bully, and WALL-E.

In March 2007, Luxology released modo 203 as a free update. It included new UV editing tools, faster rendering and a new DXF translator.

The release of modo 301 on September 10, 2007 added animation and sculpting to the toolset. The animation tools allow animating cameras, lights, morphs and geometry, as well as importing .mdd files. Sculpting in modo 301 is done through mesh-based and image-based sculpting (vector displacement maps), or a layered combination of both.

modo 302 was released on April 3, 2008 with some tool updates, more rendering and animation features, and a physical sky and sun model. modo 302 was a free upgrade for existing users. modo 303 was skipped in favor of the development of modo 401.

modo 401 shipped on June 18, 2009. This release has many animation and rendering enhancements and is newly available on 64-bit Windows. On October 6, 2009, modo 401 SP2 was released, followed by modo 401 SP3 on January 26, 2010 and SP5 on July 14 of the same year.[4]

modo 501 shipped on December 15, 2010. This version was the first to run on 64-bit Mac OS X. It contains support for Pixar Subdivision Surfaces, faster rendering and a visual connection editor for creating re-usable animation rigs.

modo 601 shipped on February 29, 2012. This release offers additional character animation tools, dynamics, a general-purpose system of deformers, support for retopology modeling and numerous rendering enhancements.

modo 701 shipped on March 25, 2013. This is the current version and offers audio support, a Python API for writing plugins, additional animation tools and layout, more tightly integrated dynamics, and a procedural particle system, along with other rendering enhancements such as render proxies and environment importance sampling.

Workflow
modo's workflow differs substantially from many other mainstream 3D applications. While Maya and 3ds Max stress using the right tool for the job, modo artists typically use a much smaller number of basic tools and combine them to create new tools using the Tool Pipe and customizable action centers and falloffs.

Action Centers
modo allows an artist to choose the "pivot point" of a tool or action in realtime simply by clicking somewhere. Thus, modo avoids making the artist invoke a separate "adjust pivot point" mode. In addition, the artist can tell modo to derive a tool's axis orientation from the selected or clicked-on element, bypassing the need for a separate "adjust tool axis" mode.

Falloffs
Any tool can be modified with a customizable falloff, which modifies its influence and strength according to geometric shapes. A radial falloff will make the current tool affect elements in the center of a resizable sphere most strongly, while elements at the edges will be barely affected at all. A linear falloff will make the tool affect elements based on a gradient that lies along a user-chosen line, and so on.

3D painting
modo allows an artist to paint directly onto 3D models and even paint instances of existing meshes onto the surface of an object. The paint system allows users to use a combination of tools, brushes and inks to achieve many different paint effects and styles. Examples of the paint tools in modo are airbrush, clone, smudge, and blur. These tools are paired with your choice of "brush" (such as soft or hard edge, or procedural). Lastly, you add an ink, an example of which is image ink, where you paint an existing image onto a 3D model. Pressure-sensitive tablets are supported. The results of painting are stored in a bitmap, and that map can drive anything in modo's Shader Tree. Thus you


can paint into a map that is acting as a bump map and see the bumps in real-time in the viewport.

Renderer
modo's renderer is multi-threaded and scales nearly linearly with the addition of processors or processor cores. That is, an 8-core machine will render a given image approximately eight times as fast as a single-core machine with the same per-core speed. modo runs on up to 32 cores and offers the option of network rendering.

In addition to the standard renderer, which can take a long time with a complex scene on even a fast machine, modo has a progressive preview renderer which resolves to near-final quality if left running. modo's user interface allows you to configure a workspace that includes a preview render panel, which renders continuously in the background, restarting the render every time you change the model. This gives a more accurate preview of your work in progress than the typical hardware shading options, which in practice means you can do fewer full test renders on the way toward completing a project.

modo's material assignment is done via a shader tree that is layer-based rather than node-based. modo's renderer is a physically based ray tracer. It includes features like caustics, dispersion, stereoscopic rendering, Fresnel effects, subsurface scattering, blurry refractions (e.g. frosted glass), volumetric lighting (the smoky-bar effect), and Pixar-patented Deep Shadows.

Select features
• N-gon modeling and rendering (subdivided polygons with >4 points)
• Tool Pipe for creating customized tools
• Edges and edge weighting
• User-specified navigation controls for zoom, pan
• Macros
• Scripting (Perl, Python, Lua)
• Customizable user interface
• Extensive file input and output, including X3D file export

Key modeling features
• Mesh Instancing
• Mesh Paint Tool
• Retopology modelling
• N-Gon SDS
• 1-Click Macro Recording
• High-Speed OpenGL Navigation
• Extensive Falloff System Including Path and Lasso
• Complete Input Remapping of Mouse and Keyboard
• Smooth UV Interpolation on SDS Meshes
• Soft and lazy selections
• Tool Pipe – tool customization


Key Sculpting Features
• Multi-Res sculpting based on Pixar Subdivision surfaces
• Mesh-based sculpting
• Image-based sculpting
• Emboss tool
• Image ink (sculpt with image)
• Brushes and brush editor/browser
• Spline-based strokes

Key painting & texturing features
• Advanced Procedural Textures
• Control micropolygon tessellation via any one or a combination of multiple texture layers
• Real-Time Bump Map Painting
• Procedural Painting
• Parametric ink (leverages 3D data to modulate attributes)
• Control painting tools with modeling falloffs
• Jitter Nozzle
• Image Based Brushes and Inks
• Shader tree
• Particle painting

Key Animation features
• Animate virtually any item's properties (geometry, camera, lights)
• Graph editor with animation curve manipulation
• Layerable deformers
• Time system can be frames, seconds, SMPTE or film code
• Morph target animation
• Reads MDD files from other animation systems
• Track View
• Full-body inverse kinematics
• Channel linking
• Channel modifiers
• Dynamic parenting
• Skeleton creation and mesh binding

Key rendering features
• Global Illumination
• Physical Sun and Sky
• Advanced Procedural Textures
• Micropolygon tessellation
• Skin and Hair shaders
• Displacement Rendering
• Interactive Render Preview
• Orthographic Rendering
• IEEE Floating Point Accuracy
• Transparency (can vary with Absorption Distance)
• Subsurface scattering
• Anisotropic Blurred Reflections
• Instance Rendering
• Render Baking to Color and Normal Maps
• True Lens Distortion
• Physically Based Shading Model
• Fresnel effects
• Motion Blur
• Volumetrics
• Depth of Field
• IES (photometric) light support
• Walkthrough mode provides a steady GI solution over a range of frames
• Network Rendering
• Numerous render outputs
• Render Passes

modo once included imageSynth, a plug-in for creating seamless textures in Adobe Photoshop CS1 or later. This bundle ended with the release of modo 301, and Luxology has announced that the imageSynth plugin for Photoshop has been retired.[5]

Books
• The Official Luxology modo Guide by Dan Ablan, ISBN 1-59863-068-7 (October 2006)
• Le Mans C9 Experience by Andy Brown (video-based modo tutorials) (January 2007)
• Sports Shoe Tutorials by Andy Brown (video-based modo tutorials) (March 2007)
• Wrist Watch Tutorials by Andy Brown (video-based modo tutorials) (April 2007)
• modo 301 Signature Courseware DVD by Dan Ablan (October 2007)
• Seahorse (sculpting) Tutorial by Andy Brown (video-based modo tutorials) (August 2007)
• The Alley Tutorial by Andy Brown (game asset creation) (October 2007)
• modo in Focus Tutorials by Andy Brown (November 2007)
• Introductory videos and 30-day trial version
• Real World modo: The Authorized Guide: In the Trenches with modo by Wes McDermott (paperback) (September 2009)

References
[1] http://www.luxology.com/modo/technicalspecifications/index.aspx
[2] http://www.luxology.com/modo/
[3] "Modo – What Lightwave Should Have Become." (http://forums.luxology.com/discussion/topic.aspx?id=18008) Luxology.com (http://forums.luxology.com). Accessed February 2012.
[4] (January 26, 2010.) @luxology on Twitter (http://twitter.com/Luxology/status/8253045571). Accessed February 2012.

Additional sources
• Cohen, Peter (June 10, 2005). "Luxology modo ready for Intel switch" (http://www.macworld.com/article/45268/2005/06/modo.html). Macworld.com (http://www.macworld.com). Retrieved February 22, 2012.
• Cohen, Peter (October 8, 2007). "Luxology licenses Pixar graphics tech" (http://www.macworld.com/article/60420/2007/10/luxology.html). Macworld.com (http://www.macworld.com). Retrieved February 22, 2012.
• Sheridan Perry, Todd (August 11, 2008). "Luxology's Modo 302" (http://www.animationmagazine.net/tech_reviews/luxologys-modo-302/). Animation Magazine (http://www.animationmagazine.net). Retrieved February 22, 2012.


External links
• Modo (http://www.luxology.com/modo/) on Luxology.com
• Luxology's Modo 501 at GDC 2011 (http://software.intel.com/en-us/videos/luxologys-modo-501-at-gdc-2011) – from Intel.com (http://software.intel.com)

OptiX
NVIDIA OptiX (officially, the OptiX Application Acceleration Engine) is a real-time ray tracing engine for CUDA-capable video cards such as the GeForce, Quadro, and Tesla series. According to NVIDIA, OptiX is designed to be flexible enough for "procedural definitions and hybrid rendering approaches." Aside from computer graphics rendering, OptiX is also used in optical and acoustical design, radiation research, and collision analysis.

External links • NVIDIA OptiX Application Acceleration Engine [1]

References
[1] http://www.nvidia.com/object/optix.html


PhotoRealistic RenderMan
Developer(s): Pixar Animation Studios
Type: Rendering system
License: Proprietary commercial software
Website: renderman.pixar.com/view/renderman [1]

PhotoRealistic RenderMan, or PRMan for short, is a proprietary photorealistic RenderMan-compliant renderer. It primarily uses the Reyes algorithm but is also fully capable of doing ray tracing and global illumination. PRMan is produced by Pixar Animation Studios and used to render all of their in-house 3D animated movie productions. It is also available as a commercial product licensed to third parties, sold as part of a bundle called RenderMan Pro Server, a RenderMan-compliant rendering software system developed by Pixar based on their own interface specification. RenderMan for Maya is a full version of PRMan that is designed to be completely integrated with the Maya high-end 3D computer graphics software package; however, it is still in its infancy.[citation needed]

Awards
RenderMan is often used in creating digital visual effects for Hollywood blockbuster movies such as Titanic, the Star Wars prequels, and The Lord of the Rings. As part of the 73rd Scientific and Technical Academy Awards ceremony on March 3, 2001, the Academy of Motion Picture Arts and Sciences' Board of Governors honored Ed Catmull, Loren Carpenter, and Rob Cook with an Academy Award of Merit (Oscar) "for significant advancements to the field of motion picture rendering as exemplified in Pixar's RenderMan." This was the first Oscar awarded to a software package for its outstanding contributions to the field.

External links
• Pixar's official PRMan website [2]

References
[1] http://renderman.pixar.com/view/renderman
[2] http://renderman.pixar.com/products/tools/rps.html


picogen
Developer(s): Sebastian Mach
Stable release: 0.3 / July 20, 2010
Written in: C++
Operating system: GNU/Linux, Windows
Platform: Cross-platform (source code)
Type: Rendering system, 3D graphics software
License: GPL, version 3 or newer
Website: http://picogen.org

Picogen is a rendering system for the creation and rendering of artificial terrain, based on ray tracing. It is free software.

Overview
While the primary purpose of picogen is to display realistic 3D terrain, both in terms of terrain formation and image plausibility, it is also a heightmap-creation tool,[1] in which heightmaps are programmed in a syntax reminiscent of LISP.[2]

A canyon landscape with a snow-like shader.

The shading system is partially programmable.[3]

Example features
• Whitted-style ray tracer for quick previews
• Rudimentary path tracer for high-quality results
• Partial implementation of Preetham's sun-/skylight model [4]
• Procedural heightmaps, though they are tessellated before rendering

An alpine landscape.


Frontends
Currently there is one frontend to picogen, called picogen-wx (based on wxWidgets). It is decoupled from picogen and communicates with it at the command-line level. Picogen-wx provides several panels to design the different aspects of a landscape, e.g. the sun/sky panel or the terrain-texture panel. Each panel has its own preview window, though each preview window can be reached from any other panel. Landscapes can be loaded and saved through picogen's own simple XML-based file format, and images of arbitrary size (including antialiasing) can be saved.

External links
• Project website [5]
• picogen's DeviantArt group page [6]

The heightmap panel.

References
[1] Introduction to mkheightmap (http://picogen.org/wiki/index.php?title=Introduction_to_mkheightmap)
[2] Height Language Reference (http://picogen.org/wiki/index.php?title=Height_Slang_Reference)
[3] Shaders in picogen (http://ompf.org/forum/viewtopic.php?f=6&t=1050)
[4] A Practical Analytical Model for Daylight, Preetham et al. (http://www.cs.utah.edu/vissim/papers/sunsky/)
[5] http://picogen.org/
[6] http://picogen.deviantart.com/


Pixie
Developer(s): Okan Arikan et al.
Stable release: 2.2.6 / 15 June 2009
Operating system: Windows, Mac OS X, Linux
Type: 3D computer graphics
License: GPL and LGPL
Website: www.renderpixie.com [1]

Pixie is a free, open-source raytracing renderer for generating photorealistic images, developed by Okan Arikan in the Department of Computer Science at the University of Texas at Austin. It is RenderMan-compliant (meaning it reads conformant RIB and supports full RenderMan Shading Language shaders) and is based on the Reyes rendering architecture, but it also supports raytracing for hidden-surface determination. Like the proprietary BMRT, Pixie is popular with students learning the RenderMan Interface, and is a suitable replacement for it. Contributions to Pixie are facilitated by SourceForge and the Internet, where it can also be downloaded free of charge as source code or precompiled. It compiles for Windows (using Visual Studio 2005), Linux, and Mac OS X (using Xcode or a Unix-style configure script).

Key features include:
• 64-bit capable
• Fast multi-threaded execution
• Possibility to distribute the rendering process to several machines
• Motion blur and depth of field
• Programmable shading (using the RenderMan Shading Language), including full displacement support
• Scalable, multi-resolution raytracing using ray differentials
• Global illumination
• Support for conditional RIB
• Point cloud baking and 3D textures

Pixie is developed by Okan Arikan and George Harker.

External links
• Home page [1]
• Pixie Wiki [2]
• Okan Arikan's home page [3]
• Blender – Open Source 3D Creator [4]
• RIB Mosaic – Blender RIB Export [5]


References
[1] http://www.renderpixie.com/
[2] http://www.renderpixie.com/pixiewiki/Main_Page
[3] http://www.cs.utexas.edu/~okan/
[4] http://www.blender.org/
[5] http://sourceforge.net/projects/ribmosaic/


POV-Ray
Developer(s): The POV-Team
Stable release: 3.6.2 / June 1, 2009 [1]
Preview release: 3.7 Release Candidate 7 / February 23, 2013 [2]
Written in: C++
Operating system: Cross-platform
Type: Ray tracer
License: POV-Ray License [3]
Website: www.povray.org [4]

The Persistence of Vision Raytracer, or POV-Ray, is a ray tracing program available for a variety of computer platforms. It was originally based on DKBTrace, written by David Kirk Buck and Aaron A. Collins. There are also influences from the earlier Polyray raytracer contributed by its author Alexander Enzmann. POV-Ray is freeware with the source code available.

History
Sometime in the 1980s, David Kirk Buck downloaded the source code for a Unix raytracer to his Amiga. He experimented with it for a while, eventually deciding to write his own raytracer, named DKBTrace after his initials. He posted it to the "You Can Call Me Ray" bulletin board system in Chicago, thinking others might be interested in it. In 1987, Aaron A. Collins downloaded DKBTrace and began working on an x86-based port of it. He and David Buck collaborated to add several more features. When the program proved to be more popular than anticipated, they could not keep up with demand for more features. Thus, in July 1991 David turned the project over to a team of programmers working in the GraphDev forum on CompuServe. At the same time, he felt that it was inappropriate to use his initials on a program he no longer maintained. The name "STAR" (Software Taskforce on Animation and Rendering) was considered, but eventually the name became the "Persistence of Vision Raytracer", or "POV-Ray" for short.[5]

POV-Ray was the first ray tracer to render an image in orbit, when Mark Shuttleworth ran it aboard the International Space Station.[6] Features of the application and a summary of its history are discussed in an interview with David Kirk Buck and Chris Cason on episode 24 of FLOSS Weekly.[7]


Features
POV-Ray has matured substantially since it was created. Recent versions of the software include the following features:
• A Turing-complete scene description language (SDL) that supports macros and loops.[8]
• Library of ready-made scenes, textures, and objects
• Support for a number of geometric primitives and constructive solid geometry
• Several kinds of light sources
• Atmospheric effects such as fog and media (smoke, clouds)
• Reflections, refractions, and light caustics using photon mapping

Glass scene rendered in POV-Ray, demonstrating radiosity, photon mapping, focal blur, and other photorealistic capabilities.

• Surface patterns such as wrinkles, bumps, and ripples, for use in procedural textures and bump mapping
• Radiosity
• Image format support for textures and rendered output, including TGA, PNG and JPEG (input only), among others
• Extensive user documentation

One of POV-Ray's main attractions is its large collection of third-party support. A large number of tools, textures, models, scenes, and tutorials can be found on the web. It is also a useful reference for those wanting to learn how ray tracing and related geometry and graphics algorithms work.

Current version
The current official version of POV-Ray is 3.6. Some of the main features of this release:
• Extends UV mapping to more primitives.
• Adds 16- and 32-bit integer data to density files.
• Various bugfixes and speed-ups.
• Improved 64-bit compatibility.

Beta-testing of version 3.7 is underway as of July 2008. The main improvement over 3.6 will be SMP support to allow the renderer to take advantage of multiple processors. Additionally, support has been added for HDRI, including the OpenEXR and Radiance file formats, and improved bounding using BSP trees. In July 2006, Intel Corporation started using the beta version to demonstrate their new dual-core Conroe processor due to the efficiency of the 3.7 beta's SMP implementation.


Primitives
POV-Ray, in addition to standard geometric shapes like tori, spheres and heightfields, supports mathematically defined primitives such as the isosurface (a finite approximation of an arbitrary function), the polynomial primitive (an infinite object defined by a 15th-order or lower polynomial), the julia fractal (a 3-dimensional slice of a 4-dimensional fractal), the superquadric ellipsoid (intermediate between a sphere and a cube), and the parametric primitive (using equations that represent its surface, rather than its interior).

POV-Ray internally represents objects using their mathematical definitions; all POV-Ray primitive objects can be described by mathematical functions. This is different from many 3D computer modeling packages, which typically use triangle meshes to compose all objects. This fact provides POV-Ray with several advantages and disadvantages over other rendering/modeling systems. POV-Ray primitives are more accurate than their polygonal counterparts: objects that can be described in terms of spheres, planar surfaces, cylinders, tori and the like are perfectly smooth and mathematically accurate in POV-Ray renderings, whereas polygonal artifacts may be visible in mesh-based modeling software. POV-Ray primitives are also simpler to define than most of their polygonal counterparts. In POV-Ray, a sphere is described simply by its center and radius; in a mesh-based environment, a sphere must be described by a multitude of small polygons.
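For instance, the scene description language expresses the sphere just mentioned directly from those two quantities (the values here are illustrative):

sphere { <0, 1, 2>, 2 }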

Venn diagram of 4 spheres created with CSG. The source is on the description page.

Some dice rendered in POV-Ray. CSG, refraction and focal blur are demonstrated.

On the other hand, primitive- and script-based modeling is not always a practical method to create objects such as realistic characters or complex man-made artifacts like cars. Those objects have to be created in mesh-based modeling applications such as Wings 3D or Blender and then converted to POV-Ray's own mesh format.

Examples of the scene description language
The following is an example of the scene description language used by POV-Ray to describe a scene to render. It demonstrates the use of a background colour, camera, lights, a simple box shape having a surface normal and finish, and the transforming effects of rotation.

POV-Ray image output based on the script to the left

#version 3.6;
//Includes a separate file defining a number of common colours
#include "colors.inc"

global_settings { assumed_gamma 1.0 }

//Sets a background colour for the image (dark grey)
background { color rgb <0.25, 0.25, 0.25> }

//Places a camera
//direction: Sets, among other things, the field of view of the camera
//right: Sets the aspect ratio of the image
//look_at: Tells the camera where to look
camera {
  location  <0.0, 0.5, -4.0>
  direction 1.5*z
  right     x*image_width/image_height
  look_at   <0.0, 0.0, 0.0>
}

//Places a light source
//color: Sets the color of the light source (white)
//translate: Moves the light source to a desired location
light_source {
  <0, 0, 0>
  color rgb <1, 1, 1>
  translate <-5, 5, -5>
}

//Places another light source
//color: Sets the color of the light source (dark grey)
//translate: Moves the light source to a desired location
light_source {
  <0, 0, 0>
  color rgb <0.25, 0.25, 0.25>
  translate <6, -6, -6>
}

//Sets a box
//pigment: Sets a color for the box ("Red" as defined in "colors.inc")
//finish: Sets how the surface of the box reflects light
//normal: Sets a bumpiness for the box using the "agate" in-built model
//rotate: Rotates the box
box {
  <-0.5, -0.5, -0.5>, <0.5, 0.5, 0.5>
  texture {
    pigment { color Red }
    finish  { specular 0.6 }
    normal  { agate 0.25 scale 1/2 }
  }
  rotate <45, 46, 47>
}

The following script fragment shows the use of variable declaration, assignment, comparison and the while-loop construct:


POV-Ray image output based on the script to the left

#declare the_angle = 0;

#while (the_angle < 360)
  box {
    <-0.5, -0.5, -0.5>, <0.5, 0.5, 0.5>
    texture {
      pigment { color Red }
      finish  { specular 0.6 }
      normal  { agate 0.25 scale 1/2 }
    }
    rotate the_angle
  }
  #declare the_angle = the_angle + 45;
#end

Modeling
The POV-Ray program itself does not include a modeling feature; it is essentially a pure renderer with a sophisticated model description language. To accompany this feature set, third parties have developed a large variety of modeling software, some specialized for POV-Ray, others supporting import and export of its data structures. A number of POV-Ray-compatible modelers are linked from Povray.org: Modelling Programs [9].

Software development and maintenance
Official modifications to the POV-Ray source tree are made and/or approved by the POV-Team. Most patch submission and bug reporting is done in the POV-Ray newsgroups on the news.povray.org news server (a web interface is also available [10]). Since POV-Ray's source is available, there are unofficial forks and patched versions of POV-Ray available from third parties; however, these are not officially supported by the POV-Team.

Official POV-Ray versions currently do not support shader plug-ins.[11] Some features, like radiosity and splines, are still in development and may be subject to syntactical change.


Platform support
POV-Ray is distributed in compiled format for Macintosh, Windows and Linux. Support for Intel Macs is not available in the Macintosh version, but since Mac OS X is a version of Unix, the Linux version can be compiled on it; POV-Ray can also be ported to any platform which has a compatible C++ compiler. People with Intel Macs can instead use the fork MegaPOV, which is compiled as a universal binary.[citation needed] The beta 3.7 versions with SMP support, however, are still available only for Windows and Linux.

Licensing
POV-Ray is distributed under the POV-Ray License, which permits free distribution of the program source code and binaries, but restricts commercial distribution and the creation of derivative works other than fully functional versions of POV-Ray. Although the source code is available for modification, due to these specific restrictions it is not open source according to the OSI definition of the term.

One of the reasons that POV-Ray is not licensed under the free-software GNU General Public License (GPL), or other open-source licenses, is that POV-Ray was developed before GPL-style licenses became widely used; the developers wrote their own license for the release of POV-Ray, and contributors to the software have worked under the assumption that their contributions would be licensed under the POV-Ray License.

A complete rewrite of POV-Ray ("POV-Ray 4.0") is currently under discussion, which would use a more liberal license, most likely GPL v3.[12]

References
[3] http://www.povray.org/povlegal.html
[4] http://www.povray.org/
[5] POV-Ray: Documentation: 1.1.5 The Early History of POV-Ray (http://www.povray.org/documentation/view/3.6.0/7/)
[6] Reach for the stars (http://www.oyonale.com/iss.php)
[7] The TWiT Netcast Network with Leo Laporte (http://twit.tv/floss24)
[8] Paul Bourke: Supershape in 3D (http://local.wasp.uwa.edu.au/~pbourke/geometry/supershape3d/), examples of POV-Ray images made with very short code
[9] http://www.povray.org/resources/links/3D_Programs/Modelling_Programs/
[10] http://news.povray.org/groups/
[11] For such an implementation, see e.g. http://www.aetec.ee/fv/vkhomep.nsf/pages/povman2

External links
• POV-Ray homepage (http://www.povray.org/)
• POV-Ray (http://www.dmoz.org/Computers/Software/Graphics/3D/Animation_and_Design_Tools/POV-Ray/) at the Open Directory Project


Radiance
Developer(s): Greg Ward
Stable release: 4.1 (2011-11-01)
Preview release: none
Written in: C
Operating system: Unix, Linux, Mac OS X, Windows
License: Project-specific open source
Website: radsite.lbl.gov/radiance [3]

Radiance is a suite of tools for performing lighting simulation originally written by Greg Ward.[] It includes a renderer as well as many other tools for measuring the simulated light levels. It uses ray tracing to perform all lighting calculations, accelerated by the use of an octree data structure. It pioneered the concept of high dynamic range imaging, where light levels are (theoretically) open-ended values instead of a decimal proportion of a maximum (e.g. 0.0 to 1.0) or integer fraction of a maximum (0 to 255 / 255). It also implements global illumination using the Monte Carlo method to sample light falling on a point. Greg Ward started developing Radiance in 1985 while at Lawrence Berkeley National Laboratory. The source code was distributed under a license forbidding further redistribution. In January 2002 Radiance 3.4 was relicensed under a less restrictive license. One study found Radiance to be the most generally useful software package for architectural lighting simulation. The study also noted that Radiance often serves as the underlying simulation engine for many other packages.[4]

HDR image format
Radiance defined an image format for storing HDR images, now described as the RGBE image format. Since it was the first (and for a long time the only) HDR image format, it is supported by many other software packages.[citation needed]

The file starts with the signature '#?RADIANCE' and then several lines listing the commands used to generate the image. This information allows the renderer rpict to continue a partially completed render (either manually, or using the rad front-end). There are also key=value declarations, including the line 'FORMAT=32-bit_rle_rgbe'. After this is a blank line signifying the end of the header. A single line then describes the resolution and pixel order; as produced by the Radiance tools, this always takes the form '-Y height +X width'. After this line follows the binary pixel data.

Radiance calculates light values as floating-point triplets, one each for red, green and blue. But storing a full double-precision float for each channel (8 bytes × 3 = 24 bytes per pixel) is a burden even for modern systems, so two stages are used to compress the image data. The first scales the three floating-point values to share a common 8-bit exponent, taken from the brightest of the three. Each value is then truncated to an 8-bit mantissa (fractional part). The result is four bytes, 32 bits, per pixel, a 6:1 compression at the expense of reduced colour fidelity. The second stage performs run-length encoding on the 32-bit pixel values. This has a limited impact on the size of most rendered images, but it is fast and simple.
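Putting the header rules together, a minimal file might begin like this (the resolution values are illustrative):

#?RADIANCE
FORMAT=32-bit_rle_rgbe

-Y 480 +X 640

The exponent-sharing stage can be sketched in C roughly as follows. This is an illustrative re-implementation of the scheme described above, not the actual Radiance source, and the function name is ours.

#include <math.h>

/* Pack a linear RGB triplet into the 4-byte shared-exponent RGBE
   layout: 8-bit mantissas for R, G and B, plus one 8-bit exponent
   taken from the brightest channel. */
static void float_to_rgbe(unsigned char rgbe[4], float r, float g, float b)
{
    float v = r;
    int e;

    if (g > v) v = g;
    if (b > v) v = b;

    if (v < 1e-32f) {
        /* Too dim to represent: store black. */
        rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;
    } else {
        /* frexpf gives v = m * 2^e with m in [0.5, 1); the scale maps
           the brightest channel into [128, 256). */
        float scale = frexpf(v, &e) * 256.0f / v;
        rgbe[0] = (unsigned char)(r * scale);
        rgbe[1] = (unsigned char)(g * scale);
        rgbe[2] = (unsigned char)(b * scale);
        rgbe[3] = (unsigned char)(e + 128);  /* exponent biased by 128 */
    }
}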


Scene description format
A Radiance scene is made from one or more object files. The .rad format is a simple text file. It can specify individual geometric objects, as well as call programs, by starting a line with an exclamation point '!'.

When specifying geometry, the first line is

modifier type name

The following lines contain the parameters, in three groups (string, integer and real), each starting with an integer giving the number of parameters in that group. The parameters need not be on the same line; they can be continued over multiple lines to aid readability. Modifiers create materials and can be chained together, one modifying the next. For example:

myball.rad:

chrome sphere ball
0
0
4  0 0 10  10

This can then be arrayed in another file using the xform program (described later):

scene.rad:

void metal chrome
0
0
5  0.8 0.8 0.9  0.8 0.0

!xform -a 5 -t 20 0 0 myball.rad

This creates a chrome material and five chrome spheres spaced 20 units apart along the X-axis. Before a scene can be used, it must be compiled into an octree file ('.oct') using the oconv tool. Most of the rendering tools (see below) use an octree file as input.
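As a usage sketch, the scene above could then be compiled and rendered from a Unix shell (the file names and view parameters are illustrative):

oconv scene.rad > scene.oct
rpict -vp 10 -40 5 -vd 0 1 0 scene.oct > scene.hdr

Both tools write to standard output, in keeping with the filter design described next.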

Tools
The Radiance suite includes over 50 tools. They were designed for use on Unix and Unix-like systems. Many of the tools act as filters, taking input on standard input and sending the processed result to standard output. These can be used on the Unix command line and piped to a new file, or included in Radiance scene files ('.rad') themselves, as shown above.

Geometry manipulation
Several Radiance programs manipulate scene data by reading from either a specified file or their standard input, and writing to standard output.
• xform allows an arbitrary number of transformations to be performed on a '.rad' file. The transformations include translation, rotation (around any of the three axes), and scaling. It can also perform multi-dimensional arraying.
• replmarks replaces certain triangles in a scene with objects from another file. It is used for simplifying a scene when modelling in a 3D modeller.


Generators
Generators simplify the task of modelling a scene; they create certain types of geometry from supplied parameters.
• genbox creates a box.
• genprism extrudes a given 2D polygon along the Z-axis.
• genrev creates a surface of revolution from a given function.
• genworm creates a worm given four functions: the (x, y, z) coordinates of the path, and the radius of the worm.
• gensurf creates a tessellated surface from a given function.
• gensky creates a description of a CIE standard sky distribution.

Geometry converters
Radiance includes a number of programs for converting scene geometry from other formats. These include:
• nff2rad converts NFF objects to Radiance geometry.
• obj2rad converts Wavefront .obj files to Radiance geometry.
• obj2mesh converts Wavefront .obj files to a Radiance compiled mesh, which can then be included in a scene using the recently added mesh primitive. This is more efficient than using obj2rad, and includes texture coordinates.

Rendering
• rpict is the renderer, producing a Radiance image on its standard output.
• rvu is an interactive renderer, opening an X11 window to show the render in progress and allowing the view to be altered.
• rtrace is a tool for tracing specific rays into a scene. It reads the parameters for these rays on its standard input and returns the light values on standard output. rtrace is used by other tools, and can even be used to render images on its own by using the vwrays program to generate view rays to be piped to it.
• dayfact is an interactive script to compute luminance values and daylight factors on a grid.
• findglare takes an image or scene and finds bright sources that would cause discomforting glare in human eyes.
• mkillum takes a surface (e.g. a window or lamp shade) and computes the lighting contribution going through it. This data is then used by the illum material modifier to make lighting from these secondary sources more accurate and efficient to compute.

Image manipulation and analysis
• pfilt filters an image. The common technique to achieve anti-aliased images is to render several times larger than the desired size, and then filter the image down using pfilt (see the example after this list).
• pcompos composites images, either with anchor coordinates or by adding several images on top of one another.
• pcond conditions images. It can simulate a number of effects of the human visual response, e.g. defocusing dark areas, veiling due to glare, and colour loss due to mesopic or scotopic vision in low light.
• pinterp interpolates between two images, provided they both have z-buffers, using rtrace to fill in gaps. It is used to speed up the rendering of simple animations.
• ximage is an image viewer for viewing HDR Radiance images. It can adjust the simulated exposure and apply some of the human visual effects of pcond.
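For instance, the anti-aliasing technique described for pfilt amounts to something like the following (the resolution and reduction factors are illustrative):

rpict -x 2048 -y 2048 scene.oct | pfilt -x /4 -y /4 > scene_aa.hdr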


Integration
• rad is a front-end which reads a '.rif' file describing a scene and multiple camera views. (Previously, make and a makefile were used in a similar role.) rad coordinates oconv, mkillum, rpict/rview and other programs to render an image (or preview) from the source scene file(s).
• trad is a GUI front-end to rad using Tcl/Tk.
• ranimate is a front-end which coordinates many programs to generate virtual walk-through animations, i.e. the camera moves but the scene is static.

References
[3] http://radsite.lbl.gov/radiance/
[4] Geoffrey G. Roy, A Comparative Study of Lighting Simulation Packages Suitable for Use in Architectural Design, Murdoch University, October 2000

Sources • Greg Ward Larson and Rob Shakespeare, Rendering with Radiance, Morgan Kaufmann, 1998. ISBN 1-55860-499-5

External links
• Radiance homepage (http://radsite.lbl.gov/radiance/HOME.html)
• Radiance online (http://www.radiance-online.org/)
• Rendering with Radiance online (http://radsite.lbl.gov/radiance/book/)
• Anyhere Software, Greg Ward's consulting firm (http://anyhere.com/)


Real3D

Realsoft 3D
Developer(s): Realsoft Oy
Initial release: 1990
Stable release: 7.0.41 / 2010
Written in: C
Operating system: IRIX, Linux, Microsoft Windows, Mac OS X
Available in: EN
Type: 3D software
License: Proprietary
Website: http://www.realsoft.fi

Realsoft 3D is a modeling and raytracing application created by Realsoft Graphics Oy. Originally called Real 3D, it was developed for the Amiga computer and later also for Linux, Irix, Mac OS X and Microsoft Windows. It was initially written in 1983 on the Commodore 64 by two Finnish brothers, Juha and Vesa Meskanen. Development of Real 3D began in earnest in 1985, when Juha Meskanen started his studies at the Lahti University of Applied Sciences, Finland. Juha's brother Vesa joined the development, leaving his own university career to start the Realsoft company in 1989.

Versions history

Real 3D v1
The first commercial Real 3D version was released on the Amiga. It used IFF REAL for its objects. It featured constructive solid geometry, support for smooth curved quadric surfaces, and a ray tracer for photorealistic rendering.[1] Version 1.2, released in 1990, was already distributed in several European countries. Version 1.3 was released in early 1991, and version 1.4 in December 1991; the latter was the last version derived from the original Real3D design. Despite the small version-number increments, v1, v1.2 and v1.3 were all major releases, with new manuals and packaging.

Real 3D v2
Version 2 was released in 1993, redesigned on a new source code base. It introduced ground-breaking features: booleans, CSG primitives, B-spline curves, physically based animation simulation, morph-based animation techniques and phenomenal rendered output. It took full advantage of the multitasking abilities of the Amiga, allowing the user to continue editing a scene in another window while it rendered. A Microsoft Windows version was released in 1994.


Real 3D v3
Version 3.5 was released in 1996. It was the last version based on the v2 architecture. Realsoft had started a new major development project around 1994, involving a complete software rewrite, new object-oriented source code, platform-independent design, modularity, and the adoption of several other state-of-the-art development methods. Amiga version 3.5, which is also the last version for this system, is now freely available through AmiKit.[2]

Realsoft 3D v4
Version 4 was released in 2000. Beginning with this release, Realsoft renamed the product to Realsoft 3D. It was released on multiple platforms, including Microsoft Windows, Linux and SGI Irix. A 4.1 trial version was released on 2001-02-21,[3] and the retail version on 2001-05-25.[4] The 4.2 upgrade was released on 2001-07-19,[5] with Linux (Intel, AMD) and SGI Irix (MIPS) versions following on 2001-07-24.[6] Version 4.5 was released on 2002-10-23,[7] introducing, for example, caustics and global illumination rendering features; the Linux version was released on 2002-11-03.[8] An Irix version of 4.5/SP1 (build 26.41b) was released on 2003-06-03.[9]

Realsoft 3D v5
The Windows version was released on 2004-11-15.[10] It expanded the feature set in all program areas and was also an important visual leap forward, supporting full-color 32-bit icons. Service Pack 1 (5.1) for Windows was released on 2005-02-03,[11] Service Pack 2 (5.2) on 2005-10-02,[12] and Service Pack 3 (5.3) on 2006-11-01.[13] The Mac OS X version was released on 2006-12-18.[14] Service Pack 4 (5.4) was released on 2007-05-01,[15] and an Irix version on 2007-12-16.[16]

Realsoft 3D v6
The Windows version was released on 2007-12-18.[17] It introduced powerful parametric tools for plant modeling and building construction, and added support for 64-bit platforms. Service Pack 1 (6.1) for Windows was released on 2008-05-15.[18] The Linux version was released on 2008-07-01,[19] and the Mac OS X version on 2008-11-11.[20]


Realsoft 3D v7
It was announced on 2008-12-13, with a projected release date in the second or third quarter of 2009. It was released on December 7, 2009 for 32-bit and 64-bit Microsoft Windows.

Reception
Jeff Paries of Computer Graphics World called version 4 "an excellent addition to your toolbox at a reasonable price".[21]

References
[1] Realsoft 3D - History (http://www.realsoft.com/daemon/History.htm)
[2] AmiKit Add-ons (http://amikit.amiga.sk/add-ons.htm)
[3] Realsoft 3D V.4.1 trial version released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-02-25)
[4] Realsoft 3D V.4.1 released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-05-08)
[5] Realsoft 4.2 upgrade available for registered users (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-07-21)
[6] Realsoft 3D v.4.2 released on Linux and SGI Irix (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-07-29)
[7] Realsoft 3D v.4.5 shipping (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2002-10-29)
[8] Realsoft 3D v.4.5 for Linux released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2002-12-31)
[9] New version of Realsoft 3D for Irix available (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2003-08-08)
[10] Realsoft 3D V.5 for Windows Shipping (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2004-12-22)
[11] Service Pack 1 for V5 Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2005-04-06)
[12] Service Pack 2 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2005-12-23)
[13] Service Pack 3 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2006-12-18)
[14] Realsoft 3D for Mac OSX released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-05-01)
[15] Realsoft 3D V5.1 Service Pack 4 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-05-04)
[16] Realsoft 3D V5 Service Pack 4 for Irix (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-12-17)
[17] Realsoft 3D Version 6 for Windows Platforms Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-12-18)
[18] SP1 for Version 6 Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-06-11)
[19] Realsoft 3D v6 for Linux Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-11-11)
[20] Realsoft 3D Version 6 for Mac OS X (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-12-23)
[21] Realsoft 3D Version 4 (http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=25D8808747CB446EB79361C510813D50)

External links
• Official website (http://www.realsoft.com)
• Render Daemon, additional resources for Realsoft 3D (http://www.realsoft.com/daemon/index.html)
• Forum and image gallery (http://www.realsoft.org)
• Info site (http://www.realsoft.info)
• Aminet Real3D traces (http://aminet.net/pix/real3)
• Realsoft 3D Wiki (http://rs3dwiki.the-final.info)

Realsoft 3D

282

Realsoft 3D Realsoft 3D Developer(s)

Realsoft Oy

Initial release

1990

Stable release

7.0.41 / 2010

Written in

C

Operating system IRIX, Linux, Microsoft Windows, Mac OS X Available in

EN

Type

3D Software

License

Proprietary

Website

http:/ / www. realsoft. fi

Realsoft 3D is a modeling and raytracing application created by Realsoft Graphics Oy. Originally called Real 3D, it was developed for the Amiga computer and later also for Linux, Irix, Mac OS X and Microsoft Windows. It was initially written in 1983 on Commodore 64 by two Finnish brothers Juha and Vesa Meskanen. The development of Real 3D was really started in 1985, as Juha Meskanen started his studies at the Lahti University of Applied Sciences, Finland. Juha's brother Vesa joined the development and jumped out from his university career to start the Realsoft company in 1989.

Versions history Real 3D v1 The first commercial Real 3D version was released on Amiga. It used IFF REAL for its objects. It featured constructive solid geometry, support for smooth curved quadric surfaces and a ray-tracer for photo realistic rendering.[1] 1.2 was released in 1990, was already distributed in several European countries. 1.3 was released early 1991. 1.4 was released in December 1991. It was the last version derived from the original Real3D design. Despite of small version number increments, v1, v1.2 and v 1.3 were all major releases, with new manuals and packaging.

Real 3D v2 It was released in 1993. Version 2 was redesigned with a new source code base. It introduced ground breaking features - booleans, CSG primitives, b-spline curves, physically based animation simulation, morph based animation techniques and phenomenal rendered output. It took full advantage of the multi tasking abilities of the Amiga - allowing the user to continue editing a scene on another window while it rendered. Microsoft Windows version was released in 1994.

Realsoft 3D

Real 3D v3 Version 3.5 was released in 1996. It was the last version based on v2 architecture. Realsoft had started a new major development project around 1994. The project involved a complete software rewrite, new object oriented source code, platform independent design, modularity, and adoption of several other state-of–art development methods. Amiga version 3.5, which is also the last version for this system, is now freely available by AmiKit.[2]

Realsoft 3D v4 Version 4 was released year 2000. Beginning with this release, Realsoft renamed the product to Realsoft 3D. It was released on multiple platforms, including Microsoft Windows, Linux and SGI Irix. 4.1 trial version was released in 2001-02-21.[3] Retail version was released in 2001-05-25.[4] 4.2 upgrade was released in 2001-07-19.[5] Linux (Intel, AMD) and SGI Irix (MIPS) versions were released in 2001-07-24.[6] 4.5 was released in 2002-10-23,[7] which introduced for example caustics and global illumination rendering features. Linux version was released in 2002-11-03.[8] Irix version of 4.5/SP1 (build 26.41b) was released in 2003-06-03.[9]

Realsoft 3D v5 Windows version was released in 2004-11-15.[10] It expanded the feature set in all program areas. It was also an important visual leap forward supporting full-color 32 bit icons. Service pack 1 (5.1) for Windows was released in 2005-02-03.[11] Service Pack 2 (5.2) for Windows was released in 2005-10-02.[12] Service Pack 3 (5.3) for Windows was released in 2006-11-01.[13] Mac OS X version was released in 2006-12-18.[14] Service Pack 4 (5.4) was released in 2007-05-01.[15] Irix version was released in 2007-12-16.[16]

Realsoft 3D v6 Windows version was released in 2007-12-18.[17] It introduced powerful parametric tools for plant modeling and building construction. 64-bit platforms were supported. Service pack 1 (6.1) for Windows was released in 2008-05-15.[18] Linux version was released in 2008-07-01.[19] Mac OS X version was released in 2008-11-11.[20]

Realsoft 3D v7
It was announced on 2008-12-13, with a projected release in the second or third quarter of 2009. It was released on December 7, 2009 for 32-bit and 64-bit Microsoft Windows.

Reception
Jeff Paries of Computer Graphics World called version 4 "an excellent addition to your toolbox at a reasonable price".[21]

References
[1] Realsoft 3D - History (http://www.realsoft.com/daemon/History.htm)
[2] AmiKit Add-ons (http://amikit.amiga.sk/add-ons.htm)
[3] Realsoft 3D V.4.1 trial version released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-02-25)
[4] Realsoft 3D V.4.1 released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-05-08)
[5] Realsoft 4.2 upgrade available for registered users (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-07-21)
[6] Realsoft 3D v.4.2 released on Linux and SGI Irix (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2001-07-29)
[7] Realsoft 3D v.4.5 shipping (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2002-10-29)
[8] Realsoft 3D v.4.5 for Linux released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2002-12-31)
[9] New version of Realsoft 3D for Irix available (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2003-08-08)
[10] Realsoft 3D V.5 for Windows Shipping (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2004-12-22)
[11] Service Pack 1 for V5 Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2005-04-06)
[12] Service Pack 2 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2005-12-23)
[13] Service Pack 3 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2006-12-18)
[14] Realsoft 3D for Mac OSX released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-05-01)
[15] Realsoft 3D V5.1 Service Pack 4 (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-05-04)
[16] Realsoft 3D V5 Service Pack 4 for Irix (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-12-17)
[17] Realsoft 3D Version 6 for Windows Platforms Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2007-12-18)
[18] SP1 for Version 6 Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-06-11)
[19] Realsoft 3D v6 for Linux Released (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-11-11)
[20] Realsoft 3D Version 6 for Mac OS X (http://www.realsoft.com/cgi-bin/news/print?gid=1&date=2008-12-23)
[21] Realsoft 3D Version 4 (http://www.cgw.com/ME2/dirmod.asp?sid=&nm=&type=Publishing&mod=Publications::Article&mid=8F3A7027421841978F18BE895F87F791&tier=4&id=25D8808747CB446EB79361C510813D50)

External links
• Official website (http://www.realsoft.com)
• Render Daemon, additional resources for Realsoft 3D (http://www.realsoft.com/daemon/index.html)
• Forum and image gallery (http://www.realsoft.org)
• Info site (http://www.realsoft.info)
• Aminet Real3D traces (http://aminet.net/pix/real3)
• Realsoft 3D Wiki (http://rs3dwiki.the-final.info)

Sunflow

A raytraced rendered image produced by Sunflow.

Developer(s): Christopher Kulla
Stable release: 0.07.2 / February 9, 2007
Operating system: Cross-platform
Type: Ray tracer
License: MIT
Website: sunflow.sourceforge.net[1][2]

Sunflow is an open source global illumination rendering system written in Java. The current status of the project is unknown, but the last announcement on the program's official page was made in 2007.

References
[1] http://sourceforge.net/projects/sunflow/
[2] http://sunflow.sourceforge.net/

External links
• Sunflow Rendering System website (http://sunflow.sourceforge.net/)
• Sunflow Forum (http://sunflow.sourceforge.net/phpbb2/)
• Sunflow documentation wiki (http://sfwiki.geneome.net/)

TurboSilver

TurboSilver was one of the original 3D raytracing software packages available for the Amiga, and for personal computers in general. It was first revealed by its creator, Impulse, at the October 1986 AmiEXPO, having been beaten to market by Sculpt 3D, which was released in July 1986. Version 2.0 followed in November 1987 and version 3.0 in January 1988. In 1991 Impulse released its replacement, named Imagine.[citation needed]

References
[1] Ken Polsson, Chronology of Amiga Computers (http://pctimeline.info/amiga/index.htm)

V-Ray

Render created using V-Ray for Rhinoceros 3D, demonstrating the advanced effects V-Ray is capable of, such as refraction and caustics.

Developer(s): Chaos Group
Stable release: 2.0 / December 6, 2010
Operating system: Linux, Mac OS X and Microsoft Windows
Type: Rendering system
License: Proprietary commercial software
Website: www.chaosgroup.com[1]

Folded paper: SketchUp drawing rendered using V-Ray, demonstrating shading and global illumination

Render created using V-Ray for Rhinoceros 3D, demonstrating the advanced effects V-Ray is capable of, such as reflection, depth of field, and the shape of the aperture (in this case, a hexagon)

V-Ray is a rendering engine that is used as an extension of certain 3D computer graphics software. Its core developers are Vladimir Koylazov and Peter Mitev of Chaos Software, a production studio established in 1997 and based in Sofia, Bulgaria. The engine uses global illumination algorithms such as path tracing, photon mapping, irradiance maps and directly computed global illumination. These techniques often make it preferable to the conventional renderers bundled with 3D software: because actual lighting effects are emulated more faithfully, renders can appear more photo-realistic. V-Ray is used in the film and video game industries, and also extensively in making realistic 3D renderings for architecture.
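The paragraph above names path tracing among these global illumination techniques. As a rough, textbook-style sketch of how a path tracer estimates light transport (this is a generic illustration, not V-Ray's implementation; `scene.intersect`, `scene.background` and the fields of the hit record are hypothetical stand-ins for a renderer's scene API):

```python
import math
import random

MAX_DEPTH = 4  # fixed bounce limit; production renderers typically use Russian roulette

def cross(a, b):
    """Cross product of two 3-vectors stored as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def sample_cosine_hemisphere(n):
    """Cosine-weighted random direction in the hemisphere around unit normal n."""
    u, v = random.random(), random.random()
    r, phi = math.sqrt(u), 2.0 * math.pi * v
    local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u))
    # Build an orthonormal frame (t, b, n) and rotate the local sample into it.
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(a, n))
    b = cross(n, t)
    return tuple(local[0] * t[i] + local[1] * b[i] + local[2] * n[i]
                 for i in range(3))

def radiance(scene, origin, direction, depth=0):
    """Estimate the radiance along one ray with a single random bounce per hit."""
    if depth >= MAX_DEPTH:
        return (0.0, 0.0, 0.0)
    hit = scene.intersect(origin, direction)  # hypothetical scene query
    if hit is None:
        return scene.background               # environment radiance
    bounce = sample_cosine_hemisphere(hit.normal)
    incoming = radiance(scene, hit.point, bounce, depth + 1)
    # For a Lambertian surface with cosine-weighted sampling, the cosine and
    # pdf factors cancel, leaving emitted light plus albedo-scaled incoming light.
    return tuple(hit.emission[i] + hit.albedo[i] * incoming[i]
                 for i in range(3))
```

Averaging many such `radiance` samples per pixel converges toward the physically correct image; the slow convergence of this averaging is precisely why engines like V-Ray also offer faster approximations such as irradiance maps and photon mapping.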

References
[1] http://www.chaosgroup.com/

• Francesco Legrenzi, V-Ray - The Complete Guide, 2008
• Markus Kuhlo and Enrico Eggert, Architectural Rendering with 3ds Max and V-Ray: Photorealistic Visualization, Focal Press, 2010

External links
• V-Ray Essentials Crash Course - Free Tutorial (http://www.cgmeetup.net/home/v-ray-essentials-crash-course/)
• V-Ray - Free Video Tutorials (http://www.cgmeetup.net/home/tutorials/chaos-group-vray/)
• V-Ray Help (http://www.spot3d.com)
• Chaos Group Home Page (http://www.chaosgroup.com)
• V-Ray at rhino3d.com (http://www.rhino3d.com/resources/display.asp?language=en&listing=870)
• A Closer Look At VRAY - Architectural Review of V-Ray (http://www.cgarchitect.com/news/Reviews/Review007_1.asp)
• ASGVIS - Makers of V-Ray for Rhino and V-Ray for SketchUp (http://www.asgvis.com)
• VRAYforC4D - the website of V-Ray for Cinema4d, made by LAUBlab KG (http://www.vrayforc4d.com)
• V-Ray/Blender development (http://vray.cgdo.ru/)

YafaRay

Developer(s): YafaRay developers
Stable release: 0.1.1 / June 23, 2009
Written in: C++
Operating system: Cross-platform
Type: Raytracer
License: LGPL
Website: www.yafaray.org[1]

YafaRay is a free, open source ray tracing program that uses an XML scene description language. It has been integrated into the 2.49 version of the 3D modelling software Blender, but requires an exporter for the redesigned 2.5 version of Blender. It is licensed under the GNU Lesser General Public License (LGPL).

History

YafaRay's predecessor YafRay (Yet Another Free Raytracer) was written by Alejandro Conty Estévez and first released in July 2002. The last version of YafRay was YafRay 0.0.9, released in 2006. Due to limitations of the original design, the raytracer was completely rewritten by Mathias Wein. The first stable version of the new raytracer, YafaRay 0.1.0, was released in October 2008.

A YafaRay rendering of piston engine parts modelled in Blender.

Features

Rendering

Global Illumination
YafaRay uses global illumination to produce realistically lit renderings of 3D scenes, using Monte Carlo-derived approximations.

Skydome Illumination
This illumination system is based mainly on light coming from an emitting sky, taking the associated soft-shadow calculations into account. The illumination can be obtained from a high dynamic range image.

Caustics
YafaRay uses photon mapping, which allows for caustics (light patterns produced by reflection or transmission, such as through a burning-glass). For simulating translucent materials, a subsurface scattering shader is also under development.

Depth of field
The effects of a focal depth of field can be reproduced using this feature: with one point in the scene in focus, objects further from it appear out of focus (see the thin-lens sketch below).
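As an illustration of the depth-of-field effect just described, the following sketch shows the standard thin-lens camera model used by many ray tracers: each pinhole ray is jittered across a circular aperture and re-aimed at the focal plane, so only points at the focal distance project sharply. This is a generic model under assumed conventions (camera at the origin looking down +z), not YafaRay's actual code:

```python
import math
import random

def sample_disk(radius):
    """Uniform random point on a disk of the given radius (the lens aperture)."""
    r = radius * math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    return r * math.cos(phi), r * math.sin(phi)

def thin_lens_ray(pinhole_dir, lens_radius, focus_dist):
    """Turn a pinhole camera ray into a depth-of-field ray.

    `pinhole_dir` is the normalized direction through the pixel, with
    pinhole_dir[2] > 0. Points at distance `focus_dist` stay sharp;
    blur elsewhere grows with `lens_radius`.
    """
    # Point on the focal plane that the original pinhole ray passes through.
    t = focus_dist / pinhole_dir[2]
    focus = tuple(t * c for c in pinhole_dir)
    # New origin jittered across the aperture; new direction re-aimed at the
    # focal point, so all samples agree only on the focal plane.
    lx, ly = sample_disk(lens_radius)
    d = (focus[0] - lx, focus[1] - ly, focus[2])
    n = math.sqrt(sum(c * c for c in d))
    return (lx, ly, 0.0), tuple(c / n for c in d)
```

Averaging many such jittered rays per pixel produces the characteristic blur of out-of-focus objects while the focal plane stays crisp.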

Blurry reflections
If a surface is not a perfect reflector, the light it reflects is blurred, and the blur grows with the distance between the reflecting surface and the object being reflected. YafaRay can simulate this phenomenon; a minimal sampling sketch follows.
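One common way a renderer produces such blurry (glossy) reflections is to perturb the perfect mirror direction by a random offset whose size is controlled by a roughness parameter, and average many such rays. The sketch below is a generic, simplified illustration of that idea, not YafaRay's actual sampler:

```python
import math
import random

def reflect(d, n):
    """Perfect mirror reflection of incident direction d about unit normal n."""
    k = 2.0 * sum(d[i] * n[i] for i in range(3))
    return tuple(d[i] - k * n[i] for i in range(3))

def glossy_direction(d, n, roughness):
    """Mirror direction perturbed by a random offset scaled by `roughness`.

    roughness = 0.0 reproduces a perfect mirror; larger values blur the
    reflection. Objects further from the surface blur more because the
    jittered rays diverge with distance. A real renderer would sample a
    proper BRDF and reject directions below the surface; this sketch skips both.
    """
    r = reflect(d, n)
    v = [r[i] + roughness * (2.0 * random.random() - 1.0) for i in range(3)]
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```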

Architecture

Modular framework
YafaRay features a modular structure: a kernel to which the remaining render elements (scene loader, lights and shaders) connect. Together with an API, this allows the development of rendering plug-ins, so that YafaRay can be used from any program or 3D suite. Supported suites include Blender, Wings 3D and Aztec.

Cross-platform
YafaRay has been fully developed in C++, which makes for good portability; precompiled binaries are available for the most common platforms: GNU/Linux, Windows 9x/XP/2000, Mac OS X and Irix. YafaRay can also be used as a stand-alone render engine with its own scene description format, invoked directly from the command line, from a script, and so on. There are also provisions for parallel or distributed rendering.

External links
• Official website [2]
• Material Library [3]
• Material Search [4]
• Tutorial on How to use Yafaray Material [5]

References
[1] http://www.yafaray.org/
[2] http://www.yafaray.org
[3] http://yafaray.manojky.net/legacy/index_Rate.php
[4] http://yafaray.manojky.net
[5] http://tutorials.manojky.net/yafaray/YafarayMaterialUse.pdf


298

Image Sources, Licenses and Contributors File:Partial transmittance.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Partial_transmittance.gif  License: Public Domain  Contributors: Oleg Alexandrov File:Fresnel1.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Fresnel1.svg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Kilom691, Niteshb in File:fresnel reflection.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Fresnel_reflection.svg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Ulflund File:Amptitude Ratios air to glass.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Amptitude_Ratios_air_to_glass.JPG  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Kohlik17 File:Amplitude ratios glass to air.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Amplitude_ratios_glass_to_air.JPG  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Kohlik17 File:Dichroic filters.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Dichroic_filters.jpg  License: Public Domain  Contributors: Glenn, Masur, Pfctdayelise, Pieter Kuiper, WikipediaMaster File:Opacity Translucency Transparency.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Opacity_Translucency_Transparency.svg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Anynobody File:Diffuse refl.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Diffuse_refl.gif  License: Creative Commons Attribution-Sharealike 3.0  Contributors: GianniG46 Image:Military laser experiment.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Military_laser_experiment.jpg  License: Public Domain  Contributors: US Air Force Image:Nightvision.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Nightvision.jpg  License: Public Domain  Contributors: Benchill, Cirt, Diagraph01, KTo288, Sanandros, Tano4595, Zaccarias, 1 anonymous edits Image:Backlit mushroom.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Backlit_mushroom.jpg  License: GNU Free Documentation License  Contributors: Eric Meyer (= Eraticus) Image:Meiningen Glasfenster Katholische Kirche.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Meiningen_Glasfenster_Katholische_Kirche.jpg  License: GNU Free Documentation License  Contributors: User:Immanuel Giel Image:1D normal modes (280 kB).gif  Source: http://en.wikipedia.org/w/index.php?title=File:1D_normal_modes_(280_kB).gif  License: GNU Free Documentation License  Contributors: Badseed, Herbythyme, Pabouk, Pieter Kuiper, 2 anonymous edits Image:Optical-fibre.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Optical-fibre.svg  License: Public Domain  Contributors: Gringer (talk) Image:Laser in fibre.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Laser_in_fibre.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Timwether Image:Zblan transmit.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Zblan_transmit.jpg  License: Public Domain  Contributors: NASA. 
Original uploader was Materialscientist at en.wikipedia File:Animation30.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Animation30.gif  License: GNU Free Documentation License  Contributors: MG**** @@@-fr Accueil fr:Accueil 09:01, 31 March 2009 (UTC) Image:Binary transparency monochrome.png  Source: http://en.wikipedia.org/w/index.php?title=File:Binary_transparency_monochrome.png  License: Public Domain  Contributors: Original uploader was Shlomital at en.wikipedia Image:Binary transparency greyscale.png  Source: http://en.wikipedia.org/w/index.php?title=File:Binary_transparency_greyscale.png  License: Public Domain  Contributors: Original uploader was Shlomital at en.wikipedia Image:Alpha transparency image.png  Source: http://en.wikipedia.org/w/index.php?title=File:Alpha_transparency_image.png  License: Public Domain  Contributors: Original uploader was Shlomital at en.wikipedia Image:Transparency overlays.png  Source: http://en.wikipedia.org/w/index.php?title=File:Transparency_overlays.png  License: Public Domain  Contributors: Shlomital, Yarnalgo Image:Transparency example.PNG  Source: http://en.wikipedia.org/w/index.php?title=File:Transparency_example.PNG  License: Public Domain  Contributors: ...... Dendodge .. TalkContribs. Original uploader was Dendodge at en.wikipedia Image:Fénytörés.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Fénytörés.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Zátonyi Sándor (ifj.) Fizped (talk) Image:GGB reflection in raindrops.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:GGB_reflection_in_raindrops.jpg  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Brocken Inaglory Image:Snells law.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Snells_law.svg  License: Public Domain  Contributors: Original uploader was Cristan at en.wikipedia Later version(s) were uploaded by Dicklyon at en.wikipedia. 
File:Pencil in a bowl of water.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Pencil_in_a_bowl_of_water.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Pencil_in_a_bowl_of_water.png: User:Theresa_knott derivative work: Gregors (talk) 10:51, 23 February 2011 (UTC) Image:Refraction in a ripple tank.png  Source: http://en.wikipedia.org/w/index.php?title=File:Refraction_in_a_ripple_tank.png  License: GNU Free Documentation License  Contributors: Frazzydee, Pieter Kuiper, Saperaud, 2 anonymous edits File:Refraction.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Refraction.jpg  License: GNU Free Documentation License  Contributors: Fir0002 (talk) (Uploads) File:Refraction-with-soda-straw.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Refraction-with-soda-straw.jpg  License: GNU Free Documentation License  Contributors: User Bcrowell on en.wikipedia File:Refraction-ripple-tank.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Refraction-ripple-tank.JPG  License: unknown  Contributors: Original uploader was RenamedUser2 at en.wikipedia File:Angle of incidence Refraction example.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Angle_of_incidence_Refraction_example.jpg  License: Public Domain  Contributors: Delamaran File:Réflexion total.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Réflexion_total.svg  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Clément 421138 File:TIR in PMMA.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:TIR_in_PMMA.jpg  License: Public Domain  Contributors: Sai2020 File:Teljes fényvisszaverődés.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Teljes_fényvisszaverődés.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Zátonyi Sándor (ifj.) 
File:TIRDiagram2.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:TIRDiagram2.JPG  License: Public Domain  Contributors: File:RefractionReflextion.svg  Source: http://en.wikipedia.org/w/index.php?title=File:RefractionReflextion.svg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Josell7 File:Drinking glass fingerprint FTIR.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Drinking_glass_fingerprint_FTIR.jpg  License: Public Domain  Contributors: Olli Niemitalo File:Mirror like effect.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Mirror_like_effect.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Akshat.saxena21 File:Total internal reflection of Chelonia mydas .jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Total_internal_reflection_of_Chelonia_mydas_.jpg  License: Creative Commons Attribution-Share Alike  Contributors: Brocken Inaglory File:PD-icon.svg  Source: http://en.wikipedia.org/w/index.php?title=File:PD-icon.svg  License: Public Domain  Contributors: Alex.muller, Anomie, Anonymous Dissident, CBM, MBisanz, PBS, Quadell, Rocket000, Strangerer, Timotheus Canens, 1 anonymous edits Image:BSDF05 800.png  Source: http://en.wikipedia.org/w/index.php?title=File:BSDF05_800.png  License: GNU Free Documentation License  Contributors: User:Jurohi, User:Twisp Image:BSSDF01 400.svg  Source: http://en.wikipedia.org/w/index.php?title=File:BSSDF01_400.svg  License: GNU Free Documentation License  Contributors: Jurohi (original); Pbroks13 (redraw) Original uploader was Pbroks13 at en.wikipedia File:Line-Sphere Intersection Cropped.png  Source: http://en.wikipedia.org/w/index.php?title=File:Line-Sphere_Intersection_Cropped.png  License: GNU Free Documentation License  Contributors: Sfan00 IMG, Zephyris Image:Plane-line intersection.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Plane-line_intersection.svg  License: Public domain  Contributors: en:User:Ion.lesan (original); Pbroks13 (talk) (redraw) Image:Line plane.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Line_plane.svg  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Original uploader was Salix alba at en.wikipedia Image:Simple polygon.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Simple_polygon.svg  License: Public Domain  Contributors: Oleg Alexandrov Image:RecursiveEvenPolygon.svg  Source: http://en.wikipedia.org/w/index.php?title=File:RecursiveEvenPolygon.svg  License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: Melchoir

299

Image Sources, Licenses and Contributors File:Geodesic Grid (ISEA3H) illustrated.png  Source: http://en.wikipedia.org/w/index.php?title=File:Geodesic_Grid_(ISEA3H)_illustrated.png  License: GNU Free Documentation License  Contributors: Perry R. Peterson, 1 anonymous edits File:Octree2.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Octree2.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: WhiteTimberwolf, PNG version: Nü file:Local illumination.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Local_illumination.JPG  License: Public Domain  Contributors: Danhash, Gabriel VanHelsing, Gtanski, Jollyroger, Joolz, Kri, Mattes, Metoc, Paperquest, PierreSelim file:Global illumination.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Global_illumination.JPG  License: Public Domain  Contributors: user:Gtanski File:Rendering eq.png  Source: http://en.wikipedia.org/w/index.php?title=File:Rendering_eq.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Timrb file:Rayleigh-Taylor instability.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Rayleigh-Taylor_instability.jpg  License: Public Domain  Contributors: Lawrence Livermore National Laboratory File:Pi 30K.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Pi_30K.gif  License: Creative Commons Attribution 3.0  Contributors: CaitlinJo File:Monte-carlo2.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Monte-carlo2.gif  License: unknown  Contributors: Zureks File:Monte-Carlo method (errors).png  Source: http://en.wikipedia.org/w/index.php?title=File:Monte-Carlo_method_(errors).png  License: unknown  Contributors: Zureks Image:GermanCountryRoad by David Gudelius.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:GermanCountryRoad_by_David_Gudelius.jpg  License: Creative Commons Attribution 3.0  Contributors: David Gudelius, Also known as Zeitmeister. 
Image:Bidirectional scattering distribution function.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Bidirectional_scattering_distribution_function.svg  License: Public Domain  Contributors: Twisp Image:Radiosity - RRV, step 79.png  Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_-_RRV,_step_79.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: DaBler, Kri, McZusatz Image:Radiosity Comparison.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_Comparison.jpg  License: GNU Free Documentation License  Contributors: Hugo Elias (myself) Image:Radiosity Progress.png  Source: http://en.wikipedia.org/w/index.php?title=File:Radiosity_Progress.png  License: GNU Free Documentation License  Contributors: Hugo Elias (myself) File:Nusselt analog.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Nusselt_analog.svg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Jheald Image:Utah teapot simple 2.png  Source: http://en.wikipedia.org/w/index.php?title=File:Utah_teapot_simple_2.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Dhatfield File:Glas-1000-enery.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Glas-1000-enery.jpg  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Tobias R Metoc Image:aliased.png  Source: http://en.wikipedia.org/w/index.php?title=File:Aliased.png  License: GNU Free Documentation License  Contributors: Original uploader was Loisel at en.wikipedia Later versions were uploaded by Riumplus, AzaToth, Dicklyon at en.wikipedia. Image:antialiased.png  Source: http://en.wikipedia.org/w/index.php?title=File:Antialiased.png  License: GNU Free Documentation License  Contributors: User:Loisel Image:antialiased-lanczos.png  Source: http://en.wikipedia.org/w/index.php?title=File:Antialiased-lanczos.png  License: Public Domain  Contributors: Ancient Anomaly, Dicklyon, Sesse, Surena, Technion Image:antialiased-zoom.png  Source: http://en.wikipedia.org/w/index.php?title=File:Antialiased-zoom.png  License: GNU Free Documentation License  Contributors: Maksim, Mormegil Image:Anti-aliased-diamonds.png  Source: http://en.wikipedia.org/w/index.php?title=File:Anti-aliased-diamonds.png  License: GNU Free Documentation License  Contributors: Rainwarrior, Torsch, WikipediaMaster Image:Anti-aliased diamond enlarged.png  Source: http://en.wikipedia.org/w/index.php?title=File:Anti-aliased_diamond_enlarged.png  License: GNU Free Documentation License  Contributors: Admrboltz, Cflm001, Joonasl, Torsch, 1 anonymous edits Image:Sinc(x) x sinc(y) plot.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Sinc(x)_x_sinc(y)_plot.jpg  License: GNU Free Documentation License  Contributors: Maksim, Mormegil Image:Gaussian plus its own curvature.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Gaussian_plus_its_own_curvature.jpg  License: GNU Free Documentation License  Contributors: Arbitrarily0, Maksim, Mormegil Image:Mandelbrot_"Turbine"_desk_shape.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot_"Turbine"_desk_shape.jpg  License: GNU Free Documentation License  Contributors: Maksim, Mormegil, Pascal.Tesson Image:Mandelbrot_Turbine_big_all_samples.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot_Turbine_big_all_samples.jpg  License: GNU Free Documentation License  Contributors: Maksim, Mormegil, Pascal.Tesson, WikipediaMaster Image:Mandelbrot_Budding_turbines.jpg  Source: 
http://en.wikipedia.org/w/index.php?title=File:Mandelbrot_Budding_turbines.jpg  License: GNU Free Documentation License  Contributors: D-Kuru, Saperaud Image:Mandelbrot_Turbine_Chaff.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot_Turbine_Chaff.jpg  License: GNU Free Documentation License  Contributors: Maksim, Mormegil, Pascal.Tesson, WikipediaMaster Image:Mandelbrot Budding Turbines downsampled.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot_Budding_Turbines_downsampled.jpg  License: GNU Free Documentation License  Contributors: Maksim, Mormegil, Pascal.Tesson Image:Mandelbrot-spiral-original.png  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot-spiral-original.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Doomed Rasher (talk) Image:Mandelbrot-spiral-antialiased-4-samples.png  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot-spiral-antialiased-4-samples.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Doomed Rasher (talk) Image:Mandelbrot-spiral-antialiased-25-samples.png  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot-spiral-antialiased-25-samples.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Doomed Rasher (talk) Image:Mandelbrot-spiral-antialiased-400-samples.png  Source: http://en.wikipedia.org/w/index.php?title=File:Mandelbrot-spiral-antialiased-400-samples.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Doomed Rasher (talk) Image:Aocclude bentnormal.png  Source: http://en.wikipedia.org/w/index.php?title=File:Aocclude_bentnormal.png  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Original uploader was Mrtheplague at en.wikipedia File:Kaustik.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Kaustik.jpg  License: GNU Free Documentation License  Contributors: Heiner Otterstedt, OtterstedtOriginal uploader was Otterstedt at de.wikipedia File:Caustic00.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Caustic00.jpg  License: GNU Free Documentation License  Contributors: User:Paul venter File:Great Barracuda, corals, sea urchin and Caustic (optics) in Kona, Hawaii 2009.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Great_Barracuda,_corals,_sea_urchin_and_Caustic_(optics)_in_Kona,_Hawaii_2009.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Brocken Inaglory Image:ShellOpticalDescattering.png  Source: http://en.wikipedia.org/w/index.php?title=File:ShellOpticalDescattering.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Meekohi Image:Subsurface scattering.png  Source: http://en.wikipedia.org/w/index.php?title=File:Subsurface_scattering.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Piotrek Chwała Image:Sub-surface scattering depth map.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Sub-surface_scattering_depth_map.svg  License: Public Domain  Contributors: Tinctorius Image:London bus and telephone box on Haymarket.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:London_bus_and_telephone_box_on_Haymarket.jpg  License: Creative Commons Attribution-Sharealike 2.0  Contributors: E01 File:Dog Leaping.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Dog_Leaping.jpg  License: Public Domain  Contributors: Jahoe, Jarble, Rudolph Buch, SunOfErat, TejasDiscipulus2 File:Figure-Animation2.gif  Source: 
http://en.wikipedia.org/w/index.php?title=File:Figure-Animation2.gif  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Niabot File:Three women in a taxi in Manhattan who said hello.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Three_women_in_a_taxi_in_Manhattan_who_said_hello.jpg  License: Creative Commons Zero  Contributors: User:Tomwsulcer

300

Image Sources, Licenses and Contributors File:Image restoration (motion blur, Wiener filtering).png  Source: http://en.wikipedia.org/w/index.php?title=File:Image_restoration_(motion_blur,_Wiener_filtering).png  License: Public Domain  Contributors: DaBler File:Magnify-clip.png  Source: http://en.wikipedia.org/w/index.php?title=File:Magnify-clip.png  License: Public Domain  Contributors: User:Erasoft24 Image:Motorbike rider mono.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Motorbike_rider_mono.jpg  License: unknown  Contributors: Bdk, Fir0002, Julia W, Morio, Peprovira, Takabeg Image:Nightly Rotation above San Jose International Airport.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Nightly_Rotation_above_San_Jose_International_Airport.jpg  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Wing-Chi Poon Image:Tj pullingout.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Tj_pullingout.jpg  License: Creative Commons Attribution 2.5  Contributors: Sung Liff Image:BabyWavingArms.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:BabyWavingArms.JPG  License: Public Domain  Contributors: Richfife Image:Strickland_Falls_Shadows_Lifted.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Strickland_Falls_Shadows_Lifted.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: JJ Harrison ( [email protected]) Image:Moths attracted by floodlight.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Moths_attracted_by_floodlight.jpg  License: unknown  Contributors: Chrisahn, Dysmorodrepanis, Fir0002, Ingolfson, Leonardorejorge File:Mikrofone.png  Source: http://en.wikipedia.org/w/index.php?title=File:Mikrofone.png  License: Public Domain  Contributors: Macmanne File:Apples made with 3ds max.jpeg  Source: http://en.wikipedia.org/w/index.php?title=File:Apples_made_with_3ds_max.jpeg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Rrinsindika File:Audirender.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Audirender.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Simonjaap Image:ASAP Logo.gif  Source: http://en.wikipedia.org/w/index.php?title=File:ASAP_Logo.gif  License: GNU Free Documentation License  Contributors: Breault Research Organization File:Blender 2.66 Mac OS X.png  Source: http://en.wikipedia.org/w/index.php?title=File:Blender_2.66_Mac_OS_X.png  License: GNU General Public License  Contributors: Jaimemf File:BlenderDesktop-2-63.png  Source: http://en.wikipedia.org/w/index.php?title=File:BlenderDesktop-2-63.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Asav File:Blender3D 2.4.5-screen.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Blender3D_2.4.5-screen.jpg  License: GNU General Public License  Contributors: Bayo, Danhash, DingTo, Lasse Havelund, Mechamind90, Teebeutel, 1 anonymous edits File:Suzanne.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Suzanne.svg  License: Public Domain  Contributors: Inductiveload File:Steps of forensic facial reconstruction - Virtual Mummy - cogitas3d.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Steps_of_forensic_facial_reconstruction_-_Virtual_Mummy_-_cogitas3d.gif  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:Cicero Moraes Image:Working with Nodes Blender.PNG  Source: http://en.wikipedia.org/w/index.php?title=File:Working_with_Nodes_Blender.PNG  License: GNU Free Documentation License  Contributors: Original 
uploader was Charlesworth999 at en.wikipedia Image:Engine_movingparts.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Engine_movingparts.jpg  License: GNU Free Documentation License  Contributors: Original uploader was Wapcaplet at en.wikipedia Image:Lone House.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Lone_House.jpg  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Michael Otto (user Mayqel) File:Sintel Cover Durian Project.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Sintel_Cover_Durian_Project.jpg  License: Creative Commons Attribution 3.0  Contributors: Lee Salvemini, Pablo Vazquez, David Revoy, Durian Team, Blender Foundation File:Blender Version 2.570.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Blender_Version_2.570.jpg  License: unknown  Contributors: Myself File:Stucco-blendergame.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Stucco-blendergame.jpg  License: GNU General Public License  Contributors: Agaly (talk). Original uploader was Agaly at en.wikipedia file:Big buck bunny poster big.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Big_buck_bunny_poster_big.jpg  License: unknown  Contributors: (c) copyright Blender Foundation UNIQ-nowiki-0-fbb8c687dc69ceeb-QINU peach.blender.org file:Sintel poster.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Sintel_poster.jpg  License: Creative Commons Attribution 3.0  Contributors: (c) copyright Blender Foundation UNIQ-nowiki-0-fbb8c687dc69ceeb-QINU durian.blender.org file:Tos-poster.png  Source: http://en.wikipedia.org/w/index.php?title=File:Tos-poster.png  License: Creative Commons Attribution 3.0  Contributors: (CC) Blender Foundation UNIQ-nowiki-0-fbb8c687dc69ceeb-QINU Project Mango File:Tears of Steel frame 08 4a.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Tears_of_Steel_frame_08_4a.jpg  License: Creative Commons Attribution 3.0  Contributors: Asav, Julian Herzog, Morn File:BRL-CAD logo.png  Source: http://en.wikipedia.org/w/index.php?title=File:BRL-CAD_logo.png  License: Public Domain  Contributors: Brlcad File:brlcad diagram.png  Source: http://en.wikipedia.org/w/index.php?title=File:Brlcad_diagram.png  License: Public Domain  Contributors: Brlcad, Liftarn File:Pdp11,70 640x507.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Pdp11,70_640x507.jpg  License: Public Domain  Contributors: Original uploader was Brlcad at en.wikipedia File:GermanCountryRoad by David Gudelius.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:GermanCountryRoad_by_David_Gudelius.jpg  License: Creative Commons Attribution 3.0  Contributors: David Gudelius, Also known as Zeitmeister. 
File:Prism.png  Source: http://en.wikipedia.org/w/index.php?title=File:Prism.png  License: Creative Commons Attribution 3.0  Contributors: MarcellusWallace Image:CrystalKT.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:CrystalKT.jpg  License: Creative Commons Attribution-Sharealike 2.0  Contributors: Qin Jue Hang File:Lightwave Icon.png  Source: http://en.wikipedia.org/w/index.php?title=File:Lightwave_Icon.png  License: Creative Commons Attribution 3.0  Contributors: Sevar File:Luxrender logo 128px.png  Source: http://en.wikipedia.org/w/index.php?title=File:Luxrender_logo_128px.png  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Andrzej Ambroż (jendrzych) File:Luxrender 0.7 Screenshot.png  Source: http://en.wikipedia.org/w/index.php?title=File:Luxrender_0.7_Screenshot.png  License: GNU Free Documentation License  Contributors: Christophe Leyder Image:LuxRender-Schulraum.png  Source: http://en.wikipedia.org/w/index.php?title=File:LuxRender-Schulraum.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Niabot File:Logo mw.gif  Source: http://en.wikipedia.org/w/index.php?title=File:Logo_mw.gif  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Maxwell Render File:Benjaminbrosdaux.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Benjaminbrosdaux.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Benjamin Brosdaux File:Tom_rusteberg(maxwellrender).jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Tom_rusteberg(maxwellrender).jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Tom Rusteberg Image:Dof_bloom_mental_ray.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Dof_bloom_mental_ray.jpg  License: Creative Commons Attribution 3.0  Contributors: Zzubnik Image:CATIA_Diamond_rendering.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:CATIA_Diamond_rendering.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Abhishekkaushal File:Luxology Modo Icon.png  Source: http://en.wikipedia.org/w/index.php?title=File:Luxology_Modo_Icon.png  License: Creative Commons Attribution 3.0  Contributors: Sevar Image:picogen logo.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Picogen_logo.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Phresnel Image:Picogen-sample-skypanel.png  Source: http://en.wikipedia.org/w/index.php?title=File:Picogen-sample-skypanel.png  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Phresnel Image:Picogen-cascades.png  Source: http://en.wikipedia.org/w/index.php?title=File:Picogen-cascades.png  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Phresnel Image:Picogen-alpine.png  Source: http://en.wikipedia.org/w/index.php?title=File:Picogen-alpine.png  License: GNU Free Documentation License  Contributors: Phresnel, Sven Manguard Image:Picogen-sample-heightmap.png  Source: http://en.wikipedia.org/w/index.php?title=File:Picogen-sample-heightmap.png  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Phresnel

301

Image Sources, Licenses and Contributors File:Povray logo sphere.png  Source: http://en.wikipedia.org/w/index.php?title=File:Povray_logo_sphere.png  License: Public Domain  Contributors: SharkD File:Venn 0000 0001 0001 0110.png  Source: http://en.wikipedia.org/w/index.php?title=File:Venn_0000_0001_0001_0110.png  License: Creative Commons Attribution 3.0  Contributors: Mate2code Image:PNG transparency demonstration 1.png  Source: http://en.wikipedia.org/w/index.php?title=File:PNG_transparency_demonstration_1.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: User:ed g2s/Dice.pov, user:ed_g2s Image:I example povray scene rendering.png  Source: http://en.wikipedia.org/w/index.php?title=File:I_example_povray_scene_rendering.png  License: Public Domain  Contributors: Original uploader was IMBJR at en.wikipedia Image:I example povray scene rendering2.png  Source: http://en.wikipedia.org/w/index.php?title=File:I_example_povray_scene_rendering2.png  License: Public Domain  Contributors: Original uploader was IMBJR at en.wikipedia File:Sunflow logo.png  Source: http://en.wikipedia.org/w/index.php?title=File:Sunflow_logo.png  License: unknown  Contributors: Magog the Ogre, Tuankiet65 File:Glass ochem dof2.png  Source: http://en.wikipedia.org/w/index.php?title=File:Glass_ochem_dof2.png  License: Creative Commons Attribution-Sharealike 3.0  Contributors: Purpy Pupple File:Folder paper 2.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Folder_paper_2.jpg  License: Creative Commons Attribution-Sharealike 3.0  Contributors: ProfDEH Image:Engine movingparts.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Engine_movingparts.jpg  License: GNU Free Documentation License  Contributors: Original uploader was Wapcaplet at en.wikipedia

302

License

Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/

