Minimum System Requirements

- Windows 2000/XP
- 800 MHz Pentium III processor or equivalent
- 128 MB RAM
- 64 MB graphics accelerator
- 5 MB hard drive space
- Java Runtime Environment

OR

- Solaris 10
- 2.6 GHz AMD processor or equivalent
- 128 MB RAM
- 64 MB graphics accelerator
- 5 MB hard drive space
- Java Runtime Environment

OR

- Mac OS X Tiger
- 1.33 GHz G4 processor or equivalent
- 128 MB RAM
- 32 MB graphics accelerator
- 5 MB hard drive space
- Java Runtime Environment
Texture created by Michelle Stalvey
Model & Texture created by Tank

3D Gallery Generator
Did you know?
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. It can be compared to taking a photo or filming a scene in real life once the setup is finished.
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. Animations for non-interactive media, such as video and film, are rendered much more slowly. Non-real-time rendering allows limited processing power to be leveraged for higher image quality.
Rendering times for individual frames may vary from a few seconds to an hour or more for complex scenes. Rendered frames are stored on a hard disk, then possibly transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.
Photo-realistic image quality is often the desired outcome, and to this end several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing, and radiosity.
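To make the ray-tracing idea a little more concrete, here is a minimal sketch in Java (a hypothetical example written for this text, not code from the 3D Gallery Generator). The core operation of a ray tracer is testing a ray against scene geometry; the sketch below does this for a single sphere by solving the resulting quadratic equation.

    // Minimal ray-sphere intersection: the basic visibility test a ray tracer
    // performs for every ray it casts. Hypothetical example, not application code.
    public class RaySphere {

        // Returns the distance along the ray to the nearest hit, or -1 if the ray misses.
        // Ray: origin o + t * direction d (d assumed normalized). Sphere: center c, radius r.
        static double intersect(double[] o, double[] d, double[] c, double r) {
            double ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
            double b = 2 * (ox * d[0] + oy * d[1] + oz * d[2]);
            double cc = ox * ox + oy * oy + oz * oz - r * r;
            double disc = b * b - 4 * cc;           // a = 1 because d is normalized
            if (disc < 0) return -1;                // ray misses the sphere
            double t = (-b - Math.sqrt(disc)) / 2;  // nearer of the two roots
            return t >= 0 ? t : -1;                 // negative t means the hit is behind the ray
        }

        public static void main(String[] args) {
            // A ray shot down the z-axis toward a unit sphere centered 5 units away.
            double t = intersect(new double[]{0, 0, 0}, new double[]{0, 0, 1},
                                 new double[]{0, 0, 5}, 1.0);
            System.out.println("hit distance: " + t);  // prints 4.0
        }
    }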
Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera.
Techniques have been developed to simulate other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust, and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).
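As a rough illustration of the particle systems mentioned above, the following hypothetical Java sketch (not taken from the application) treats each particle as a position, a velocity, and a remaining lifetime, and advances all of them once per frame; a renderer would then draw a small point or sprite at each live particle's position.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Minimal particle system of the kind used to fake smoke or sparks.
    // Hypothetical sketch: real systems add color, size, blending, and emitters.
    public class ParticleSystem {
        static class Particle {
            double x, y, z;       // position
            double vx, vy, vz;    // velocity
            double life;          // seconds until the particle dies
        }

        final List<Particle> particles = new ArrayList<>();
        final Random rng = new Random();

        // Spawn particles at the origin with slightly randomized upward velocities.
        void emit(int count) {
            for (int i = 0; i < count; i++) {
                Particle p = new Particle();
                p.vx = rng.nextGaussian() * 0.2;
                p.vy = 2.0 + rng.nextGaussian() * 0.2;
                p.vz = rng.nextGaussian() * 0.2;
                p.life = 1.5;
                particles.add(p);
            }
        }

        // Advance every particle by one time step and drop the ones that expired.
        void update(double dt) {
            for (Particle p : particles) {
                p.life -= dt;
                p.vy -= 9.8 * dt;     // simple gravity
                p.x += p.vx * dt;
                p.y += p.vy * dt;
                p.z += p.vz * dt;
            }
            particles.removeIf(p -> p.life <= 0);  // remove expired particles
        }

        public static void main(String[] args) {
            ParticleSystem ps = new ParticleSystem();
            ps.emit(100);
            for (int frame = 0; frame < 60; frame++) ps.update(1.0 / 60.0);
            System.out.println("particles still alive after 1s: " + ps.particles.size());
        }
    }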
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system.
The 3D modeling stage could be described as shaping the individual objects that are later used in the scene. A number of 3D modeling techniques exist, including, but not limited to, the following:
- constructive solid geometry
- NURBS modeling
- polygonal modeling (a minimal mesh sketch follows this list)
- subdivision surfaces
- implicit surfaces
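To make the polygonal modeling entry concrete, the following hypothetical Java sketch (the Mesh class and its method names are illustrative, not part of the 3D Gallery Generator) shows the usual minimal representation of a polygon model: a shared list of vertices plus a list of triangles that index into it.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal indexed triangle mesh: the data structure behind polygonal modeling.
    // Hypothetical sketch; real modelers also store normals, UVs, and materials.
    public class Mesh {
        final List<float[]> vertices = new ArrayList<>(); // each entry is {x, y, z}
        final List<int[]> triangles = new ArrayList<>();  // each entry is 3 vertex indices

        int addVertex(float x, float y, float z) {
            vertices.add(new float[]{x, y, z});
            return vertices.size() - 1;                   // index used by triangles
        }

        void addTriangle(int a, int b, int c) {
            triangles.add(new int[]{a, b, c});
        }

        public static void main(String[] args) {
            // A unit square in the XY plane built from two triangles sharing an edge.
            Mesh quad = new Mesh();
            int v0 = quad.addVertex(0, 0, 0);
            int v1 = quad.addVertex(1, 0, 0);
            int v2 = quad.addVertex(1, 1, 0);
            int v3 = quad.addVertex(0, 1, 0);
            quad.addTriangle(v0, v1, v2);
            quad.addTriangle(v0, v2, v3);
            System.out.println(quad.vertices.size() + " vertices, "
                    + quad.triangles.size() + " triangles");
        }
    }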
3D modeling processes may also include editing object surface or material properties such as color, luminosity, diffuse and specular shading components (more commonly called roughness and shininess), reflection characteristics, transparency or opacity, and index of refraction, as well as adding textures, bump maps, and other features.
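The diffuse and specular components mentioned above are combined in classic local shading models. The following hypothetical Java sketch uses the well-known Phong model (one common choice, not necessarily what any particular renderer applies) to compute a single intensity value from those material properties, a light direction, and a view direction.

    // Classic Phong shading: ambient + diffuse + specular terms driven by the
    // material properties described above. Hypothetical sketch; vectors are
    // assumed normalized and only a single white light is considered.
    public class PhongShading {

        static double dot(double[] a, double[] b) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }

        // n: surface normal, l: direction to light, v: direction to viewer.
        // ka: ambient weight, kd: diffuse weight, ks: specular weight,
        // shininess: specular exponent controlling highlight tightness.
        static double shade(double[] n, double[] l, double[] v,
                            double ka, double kd, double ks, double shininess) {
            double diffuse = Math.max(0, dot(n, l));
            // Reflect l about n: r = 2(n.l)n - l
            double nl = dot(n, l);
            double[] r = {2 * nl * n[0] - l[0], 2 * nl * n[1] - l[1], 2 * nl * n[2] - l[2]};
            double specular = Math.pow(Math.max(0, dot(r, v)), shininess);
            return ka + kd * diffuse + ks * specular;   // scalar intensity for one channel
        }

        public static void main(String[] args) {
            double[] n = {0, 0, 1};                      // surface facing the viewer
            double[] l = {0, 0, 1};                      // light straight ahead
            double[] v = {0, 0, 1};                      // viewer straight ahead
            System.out.println(shade(n, l, v, 0.25, 0.5, 0.25, 32));  // prints 1.0
        }
    }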
3D modeling may also include various activities related to preparing a 3D model for animation (although in a complex character model this becomes a stage of its own, known as rigging). Objects may be fitted with a skeleton, a central framework with the capability of affecting the shape or movements of the object. This aids in the process of animation, in that the movement of the skeleton automatically affects the corresponding portions of the model. See also forward kinematic animation and inverse kinematic animation. At the rigging stage, the model can also be given specific controls to make animation easier and more intuitive, such as facial expression controls and mouth shapes (phonemes) for lip syncing.
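As a rough illustration of forward kinematics, the following hypothetical Java sketch (simplified to a two-bone chain in 2D; real rigs use full 3D transforms and skinning weights) shows how rotating a parent joint automatically carries every child bone with it.

    // Forward kinematics for a two-bone chain (e.g. shoulder -> elbow -> hand) in 2D.
    // Each bone stores its length and the rotation applied at its parent joint;
    // world positions are found by accumulating angles and offsets down the chain.
    // Hypothetical sketch; a production rig works with 3D matrices or quaternions.
    public class ForwardKinematics {

        // boneLengths[i] is the length of bone i, jointAngles[i] the rotation (radians)
        // applied at joint i relative to its parent. Returns the end-effector position.
        static double[] endEffector(double[] boneLengths, double[] jointAngles) {
            double x = 0, y = 0, angle = 0;
            for (int i = 0; i < boneLengths.length; i++) {
                angle += jointAngles[i];               // child inherits parent rotation
                x += boneLengths[i] * Math.cos(angle); // walk along the rotated bone
                y += boneLengths[i] * Math.sin(angle);
            }
            return new double[]{x, y};
        }

        public static void main(String[] args) {
            double[] lengths = {2.0, 1.5};                          // upper arm, forearm
            double[] angles = {Math.toRadians(90), Math.toRadians(-90)};
            double[] hand = endEffector(lengths, angles);
            // Upper arm points straight up, elbow bends back to horizontal:
            // the hand ends up at roughly (1.5, 2.0).
            System.out.printf("hand at (%.2f, %.2f)%n", hand[0], hand[1]);
        }
    }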
Tessellation and meshes
The process of transforming representations of objects, such as the center coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as spheres, cones, etc., into so-called meshes, which are nets of interconnected triangles.
Meshes of triangles (rather than, for example, squares) are popular because they have proven to be easy to render using scanline rendering.
Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
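The following hypothetical Java sketch (illustrative only, not the generator's own code) shows the kind of tessellation described above: a sphere defined only by its center and radius is sampled at regular latitude and longitude steps, and neighbouring samples are connected into a mesh of triangles.

    import java.util.ArrayList;
    import java.util.List;

    // Tessellates a sphere (an abstract "primitive" defined only by a radius here)
    // into a triangle mesh by sampling it at regular latitude/longitude intervals.
    // Hypothetical sketch; real tessellators also emit normals and texture coordinates.
    public class SphereTessellation {

        public static void main(String[] args) {
            int stacks = 8, slices = 16;          // latitude and longitude subdivisions
            double radius = 1.0;

            List<double[]> vertices = new ArrayList<>();
            List<int[]> triangles = new ArrayList<>();

            // Sample the sphere surface on a (stacks+1) x (slices+1) grid of angles.
            for (int i = 0; i <= stacks; i++) {
                double phi = Math.PI * i / stacks;               // 0 (north pole) .. PI (south pole)
                for (int j = 0; j <= slices; j++) {
                    double theta = 2 * Math.PI * j / slices;     // around the equator
                    vertices.add(new double[]{
                            radius * Math.sin(phi) * Math.cos(theta),
                            radius * Math.cos(phi),
                            radius * Math.sin(phi) * Math.sin(theta)});
                }
            }

            // Connect each grid cell with two triangles (indices into the vertex list).
            for (int i = 0; i < stacks; i++) {
                for (int j = 0; j < slices; j++) {
                    int a = i * (slices + 1) + j;       // current sample
                    int b = a + slices + 1;             // sample one stack below
                    triangles.add(new int[]{a, b, a + 1});
                    triangles.add(new int[]{a + 1, b, b + 1});
                }
            }

            System.out.println(vertices.size() + " vertices, " + triangles.size() + " triangles");
        }
    }

Increasing the number of stacks and slices yields a smoother-looking sphere at the cost of more triangles for the renderer to process.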