Copyright © 2007 by Orad Hi-Tec Systems Ltd. All rights reserved worldwide. No part of this publication may be reproduced, modified, transmitted, transcribed, stored in retrieval system, or translated into any human or computer language, in any form or by any means, electronic, mechanical, magnetic, chemical, manual, or otherwise, without the express written permission of Orad Hi-Tec Systems, 15 Atir Yeda st., POB 2177, Kfar Saba, 44425, Israel. Orad provides this documentation without warranty in any form, either expressed or implied. Orad may revise this document at any time without notice. This document may contain proprietary information and shall be respected as a proprietary document with permission for review and usage given only to the rightful owner of the equipment to which this document is associated. This document was written, designed, produced and published by Orad Hi-Tec Systems. Trademark Notice 3Designer, 3Designer Advanced, Maestro, Maestro Controller, Maestro PageEditor, JStation, JServer, ProSet, 3DPlay, DVP-500 are trademarks of Orad Hi-Tec Systems Ltd. All other brand and product names may be trademarks of their respective companies. If you require technical support services, contact Orad Hi-Tec Systems Ltd. at
[email protected]. December 19, 2007
Contents

1. Introduction ..... 5
   Related Documents ..... 5
   The ProSet GUI ..... 6
   Configuring the Way ProSet Opens ..... 7

2. System Configuration ..... 9
   Understanding TrackingSet ..... 10
   Editing the TrackingSet Configuration File ..... 14
   File Structure ..... 15
   Configuring External Switchers ..... 15
   Using the TrackingSet GUI ..... 16
   The Tracker Section ..... 17
   Tracker Data Section ..... 19
   TrackingSet GUI Tabs ..... 20
      Axes Offsets ..... 21
      Panel Filters ..... 22
      Lens Parameters ..... 24
      Mounting Shifts ..... 27
      DVP Settings ..... 28
      Delays ..... 30
      Tracker Origin ..... 31
      Tracking Mode ..... 33
      DVP Loading ..... 35
      Blocking ..... 39
      Observer ..... 43
      Ranges ..... 43
   Using the TrackingSet Camera GUI ..... 45
   Depth of Field Control ..... 48
   Walkthrough ..... 50

3. Set Creation ..... 53
   Set Design ..... 54
   Scenography ..... 54
   Hardware Limitations ..... 54
   Polygon Count ..... 55
   Blue Box Dimensions ..... 55
   Creating a Virtual Set ..... 56
   Creating the Virtual Set Geometry ..... 56
   Hidden Polygons ..... 56
   Mapping the Textures onto the Geometry ..... 57
   Rendering the Set with Desired Lighting ..... 57
   Single vs. Parallel Set Geometry ..... 58
   Single Set Geometry (Simple Geometry) ..... 58
   Parallel Set Geometry (Complex Geometry) ..... 59
   Simplifying the Set ..... 59
   Creating a Duplicate Set ..... 60
   Design Approach ..... 61
   Using Textures to Provide Realism ..... 62
   Lighting Environment ..... 62
   Environment Mapping ..... 63
   Creating an Infinite Blue Box ..... 64
   The Blue Box Model in ProSet ..... 65

4. Set Conversion ..... 67
   Preparation ..... 68
   Reducing Geometry ..... 70
   Materials ..... 71
   Attaching Objects and Welding Vertices ..... 71
   Grouping Objects ..... 72
   Rendering the Scene ..... 72
   The ID Channel ..... 73
   Exporting in VRML format ..... 73
   Importing to 3Designer ..... 74
   Expand and Shrink Folders in 3Designer ..... 77
   Changing the Material and Colors in the Scene ..... 78
   Updating Textures ..... 80
   Applying Transparency and Reflection Effects to Objects ..... 80
   Creating an Animated Image (Flipbook) ..... 81
   Adding a BlueBox ..... 82
   Finishing the Virtual Set ..... 84

5. Chroma Key ..... 85
   Using Ultimatte ..... 86
   General Lighting Tips ..... 86
   HDVG Internal Chroma Keyer ..... 87
   Opening the Chroma Key Window ..... 88
   Menu Options ..... 89
   Compositing Control ..... 89
   Chroma Key Parameters ..... 90
   Additional Parameters ..... 91
   Saving/Restoring Parameter Sets ..... 91
   Display Modes (HDVG) ..... 92
   Routing ..... 93
   Outputs ..... 93
   Video Config ..... 94
   Genlock Phase ..... 94
Terms Used in this Manual
CCD – Charge-Coupled Device image sensor.
CCU – Camera Control Unit.
CoC – Circle of Confusion (defocus).
FOV – Field of View.
GUI – Graphic User Interface.
DVP – Digital Video Processor. The unit that analyzes the video signal from the camera and determines the camera position in the studio in real time. The position is then sent to the HDVG, and RenderEngine positions the Set according to this information.
Orad Grid Panel – the grid with the unique pattern that is used as a backdrop in the physical blue box studio (provided by Orad).
PR – Pattern Recognition.
VDI – Video Data Inserter.
1. Introduction

What is ProSet?
ProSet is a tracking application intended for Virtual Set production. A virtual Set is a computer-generated graphic environment that simulates a real studio environment. Virtual sets allow you to create sets composed of elements that would not be possible in a real studio. Blending live images with virtual objects provides a virtual Set that is realistic in all aspects of texture, depth, and perspective. Virtual sets can be changed on the fly, and scenes can be re-shot in a fraction of the time needed for physical sets. ProSet mediates between the graphic virtual Set and the live image, to make all movement appear realistic.
Related Documents
Documents mentioned in this manual, available from Orad:
• DVP-500 Setup Guide
• 3Designer User Guide
• Orad tutorial on Modeling, Rendering, Exporting, and Importing Virtual Sets from 3ds Max into 3Designer
The ProSet GUI

► To start ProSet:
• Go to Start > All Programs > Orad > ProSet. The ProSet Panel icon opens in the system tray.

All ProSet settings are made by right-clicking the ProSet Panel icon in the system tray and selecting the required option. The ProSet Panel menu contains the following options:

DoF panel – Opens the Depth of Field panel to allow you to set background focus. See Depth of Field Control, p.48.

Tracking – Contains the following items:
• TrackingSet GUI – opens the TrackingSet GUI. See Using the TrackingSet GUI, p.16.
• Camera Studio – opens the TSCamera GUI. See Using the TrackingSet Camera GUI, p.45.
• Camera Switcher – opens the TSCameraSwitcher window, which allows you to choose which tracking camera's data to display.
• Machine Example – modify ProSet xml – this entry is renamed to the tracking host, as described in Configuring the Way ProSet Opens, p.7. Its submenu contains:
  - Start Tracking – starts tracking for the defined tracking host.
  - Stop Tracking – stops tracking for the defined tracking host.
  - Config – opens the TrackingSet.cfg file. See Editing the TrackingSet Configuration File, p.14.

HDVG Video Controller – Opens the DVG Video Controller window. See Opening the Chroma Key Window, p.88.

Walkthrough – Opens the TSWalkthrough GUI, which allows you to switch off the tracking data and then move around in the virtual space from the virtual camera's perspective. See Walkthrough, p.50.

About – Displays the version information window.

Exit – Closes the ProSet application.
Configuring the Way ProSet Opens
Before you begin working with ProSet, you must configure the ProSetTray.xml file to set the options that will be available in ProSet. Open the file C:\Orad\ProSet\ProSetTray.xml with an editor, such as Vim (available from www.vim.org), or with Microsoft Notepad.
IMPORTANT: ProSet must be closed when editing the ProSetTray.xml file.
Here you can determine what items are available in the ProSet menu, and what their target files are.
► To modify the ProSet menu:
1. Replace the submenu name "Machine example - modify ProSet xml" with a name describing the HDVG that ProSet is running on.
2. Rename the HDVG in the ARGS attribute as the target for any command given from this menu item.
3. If ProSet is to be used with multiple tracking hosts, copy this submenu section and rename each instance appropriately.
4. Save the file before closing.
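For orientation only, the fragment below sketches the kind of structure this procedure edits. Apart from the ARGS attribute and the "Machine example - modify ProSet xml" name quoted above, the element and attribute names are assumptions; always start from the ProSetTray.xml installed with ProSet rather than from this sketch.

    <!-- Hypothetical sketch only - not the real ProSetTray.xml schema -->
    <SubMenu Name="HDVG-Studio1">          <!-- was "Machine example - modify ProSet xml" -->
      <Item Name="Start Tracking" ARGS="HDVG-Studio1"/>
      <Item Name="Stop Tracking"  ARGS="HDVG-Studio1"/>
      <Item Name="Config"         ARGS="HDVG-Studio1"/>
    </SubMenu>
    <!-- For multiple tracking hosts, copy the whole SubMenu block and rename each copy. -->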
NOTE: Previous releases of ProSet were integrated with Orad controller software. This release (1.0) is a standalone version that does not include controller software.
2. System Configuration

About this Chapter
This chapter describes ProSet system configuration. ProSet includes several components that must be set before using the application, in order to connect the virtual world of ProSet to the hardware in use, tracking devices, and studio settings. This chapter describes how to configure the different parts of ProSet.
Understanding TrackingSet
ProSet uses tracking systems to get camera data (location – X, Y, and Z; orientation – Pan, Tilt, and Roll; and field of view/focus) from the real studio camera, for use by the rendering computer. TrackingSet is a ProSet process that gathers data from tracking systems and exports this data to the real-time rendering application (RenderEngine). TrackingSet receives information from the various tracking systems used in the studio for video settings, talent tracking, and more.
NOTE: ProSet uses a logical camera to identify the data stream from a camera, and to connect that data stream to the camera location data coming from the trackers. Inside ProSet, there are logical cameras, and every logical camera is connected with a studio camera, a real camera with tracking data, or a virtual camera that is used only for navigating inside the rendered Set.
There are several types of tracking systems:
• Pattern recognition.
• Infra Red.
• Mechanical Sensors.
TrackingSet gathers data from any system type, for use in ProSet.
TrackingSet is configured in two ways:
• Editing the TrackingSet.cfg file.
• Using the TrackingSet GUI.
Some configurable parameters can be modified from both the TrackingSet.cfg file and the TrackingSet window. Others can only be modified using one of these interfaces. The TrackingSet window reads and writes configuration information to and from the .cfg file.
The TrackingSet process receives camera location and orientation information from one camera, analyzes that information, and provides RenderEngine with the information needed to render the Set. In the example in Figure 1, there are three studio cameras. All three cameras feed one of the following:
• The VDI, which displays a unique ID for each camera in its output (outside the safe area) to allow the DVP to detect the currently selected camera. For more information, see the manual provided with your VDI.
• An external switcher or router directly. The switcher cuts one of these data streams to the DVP as foreground. For more information, see the manual provided with your DVP.
The HDVG rendering computer receives the data stream from the DVP via an Ethernet connection. The HDVG also gets information from the other tracking systems via Ethernet or serial connections. In ProSet, the HDVG sends out two signals: background and alpha (key). In a standard ProSet configuration, chroma keying may be done inside the HDVG itself; this means that the HDVG produces the PGM signal directly, and there is no requirement for an external chroma keyer.
The TrackingSet process runs on the HDVG. This process collects tracking information directly from the trackers. While the IR system or the sensors send continuous data regarding all cameras, the DVP sends the data of the on-air camera only (the camera that is sent to the DVP from the switcher), and the camera ID is taken from the VDI.
TrackingSet is physically controlled from the external switcher. The software switcher within ProSet also sends a selected camera to TrackingSet. This switcher gets information from CameraSet, which draws camera information from the TrackingSet.cfg file. All tracking information is sent from TrackingSet to the RenderEngine process, which renders the Set.
Figure 1: Information flow in a typical studio
Editing the TrackingSet Configuration File
The TrackingSet configuration file (G:\config\TrackingSet.cfg) is a text file containing information about the type and number of trackers and switchers. A tracker can be a tracking element, such as an infrared tracker, or a studio camera (an infrared tracker is not itself a camera). A configured TrackingSet.cfg file is installed with ProSet. This file can be edited in any text editor.
► To open the TrackingSet configuration file:
1. Right-click the ProSet Panel icon in the system tray.
2. Select Tracking > <tracking host> > Config.
The following image shows a sample configuration file opened in Microsoft Notepad.
File Structure
The statements in the TrackingSet.cfg file describe the tracking systems in use. The type, location within the Set in relation to the origin of the Set coordinate system, and IP address must be specified for each tracking element. The heads entry indicates the number of trackers used in the studio. In the example above, there are two heads. Each head represents a tracker (e.g., IR, sensors, or PR). The last tracker on the list is the Orad pattern (PR) tracker, which ProSet refers to as the real studio camera in TrackingSet and the TrackingSet window. Other trackers should be defined above this one in the configuration file. Although CameraSet treats them as studio cameras and shows them on the Camera list, they are not independent trackers and cannot be used as studio cameras. (When you switch to an IR camera, the displayed tracking data is actually the PR camera tracking data.)
NOTE: Each head/tracker has an assigned studio camera number (CCU number), but this number is not always used as the logical camera number.
Configuring External Switchers
A switcher (a.k.a. vision mixer, video switcher, video mixer, or production switcher) is a device used to choose between several different video sources and, in some cases, to composite (mix) video sources together and add special effects. The example TrackingSet.cfg file, shown above, shows how external switchers are configured. Near the bottom of the file is a Switchers statement that indicates the number of external switchers in use (the number that appears immediately after the statement). The next line provides TrackingSet with the properties of the one switcher in the studio.
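To make the structure described above easier to picture, here is a purely hypothetical fragment in the spirit of TrackingSet.cfg. The real keywords and syntax are defined by the sample file installed with ProSet and are not reproduced here; only the idea of a heads count, one entry per tracker, and a switchers statement is taken from the text above.

    # Hypothetical sketch only - not the real TrackingSet.cfg syntax
    heads 2
    # each head: tracker type, position relative to the Set origin (meters), data source
    head 1   type sensors   position 2.0 1.0 1.6   port /dev/ttyS0
    head 2   type PR        position 0.0 0.0 0.0   ip 225.0.0.2    # Orad grid panel camera, listed last
    switchers 1
    switcher 1   type external   ip 192.168.0.10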
If there is more than one camera connected to the system, the system must know which camera is on air at any given moment. There are two common configurations:
• Switching is done externally and ProSet knows which camera is current and switches internally as a slave. This is the most common configuration, based on PR, using the Orad grid panel and a VDI unit.
• The cameras are not connected to a DVP computer, and the Orad grid is not used. In this case, an independent tracking system is used (such as Xync), and TrackingSet is connected directly to the external switcher.
Using the TrackingSet GUI
The TrackingSet GUI displays information about tracking devices and provides an interface for configuring TrackingSet parameters. The Tracker and Tracker data sections display real-time information about the current tracker. The tabs contain configuration parameters that affect the behavior of TrackingSet. All parameter settings in the tabs apply only to the selected channel (i.e., render host) and the current tracker assigned at the top of the window. The upper section contains general information about the trackers and the rendering host. The lower section consists of a number of tabs containing various parameters. There are two sections in the upper part of the TrackingSet GUI:
• Tracker
• Tracker data
► To open the TrackingSet GUI:
• Right-click the ProSet Panel icon in the system tray, and select Tracking > TrackingSet GUI.
The Tracker Section
The parameters in the Tracker section of the TrackingSet GUI describe the tracking system for the selected channel and tracker. From this section, select the channel (rendering host) and the tracker (either by selecting the Show Current check box or by setting the tracker number). The rest of the parameters in this section are taken directly from the TrackingSet configuration file and from CameraSet, and cannot be changed by the operator.
The following table describes the parameters in the Tracker section: Parameter
Description
Channel
Select a channel to configure. A channel corresponds to a render host or group of hosts in chain configurations.
Show current
Select this check box to show configuration settings for the current camera selected in the Camera Switcher (usually the camera assigned to the DVP).
Number
Select a tracker from the drop-down list. When Show current is selected, this automatically displays the on-air camera.
Name
The name of the tracker as defined in the configuration file.
Type
The tracker type as defined in the configuration file.
Port
The Ethernet port that tracking data is received on or the serial connection device name. Tracking data is relayed from the tracking system to the rendering host via an Ethernet or serial connection.
IP
The multicast IP used for transmission. For unicast transmissions or for serial connections this field is empty.
Calibration File
The file containing calibration information about the camera lens (for systems with sensor-based zoom and focus tracking like sensor cameras). The calibration file is defined as part of the tracker configuration in the TrackingSet.cfg file. The tracker will locate the file according to the location designated.
CCU Number
The ID given to each camera or tracker by CameraSet, as used inside the TrackingSet parameters. It also determines the order of cameras in the Camera Switcher.
Logical camera number
The logical camera number assigned to the displayed tracker.
Tracker Data Section
The parameters in the Tracker data section of the TrackingSet GUI describe the tracker and the data it reports. All the values are reported from the tracker and cannot be changed or entered manually.
The following table describes the parameters in the Tracker data section: Parameter
Description
Rate
Fields per second.
View angle
Vertical field of view of the selected camera.
Aspect
Picture ratio of the camera (PAL or NTSC).
Zoom Range
Displays the zoom ranges for sensor lenses only.
Focus range
Displays the focus range for sensor lenses only (updated when you click Reset Ranges).
CRC (cyclic redundancy check)
The number of received data blocks with an incorrect checksum. It is reset to zero when you click Reset Communication. A rising number indicates communication problems.
Bad Bytes
The number of bad bytes received from the tracking system. Bad bytes indicate that the rate is not correct or that a sensor is not connected correctly.
Focus plane
The distance in meters between the tracked camera and the sharp picture plane. This distance is reported correctly only if the lens is properly calibrated (or the value is transmitted from the tracking system). This field does not apply to DVP tracking.
Focal length
The focal length of the tracking camera.
CCD center
The center of the image displayed on the monitor. This value is manually entered in the Lens Parameters tab or transmitted by the tracking system (X-pecto for example).
Position and Orientation
The tracker position (X,Y,Z) and orientation (Pan, Tilt, Roll) reported from the tracker.
At the bottom of the TrackingSet GUI are several buttons:
• Reset Ranges – resets the ranges of the zoom and focus with sensor cameras so that TrackingSet will use the full range of movement.
• Reset Communication – resets communication between TrackingSet and the tracker that is currently displayed, and resets the CRC and BadBytes counters.
• DVP Control – opens the DVP control window to allow control of one or more DVPs via Ethernet. It displays DVP status and lets you change parameters in order to optimize functionality. For more information on the DVP Control, see the DVP-500 Setup Guide.
• Save Settings – saves any changes made to the tracker configuration in the tabs to the TrackingSet.cfg file.
TrackingSet GUI Tabs
The TrackingSet GUI contains the following tabs:
Tab
Description
Axes Offsets
Add given offsets to values reported by the camera tracking system. Used to change the zero (origin) position of the axis. Mainly used with sensor heads to store pan and tilt offsets.
Panel Filters
Select and configure the filters that compute camera location when using pattern recognition tracking.
Lens Parameters
Configure parameters specific to camera lenses.
Mounting Shifts
Set distances from the center of the pan and tilt axes of the camera head, or from the LED connected to the camera (in IR tracking systems), to the mounting ring of the lens.
DVP Settings
Set the chroma/luminance color range and threshold used by the DVP for pattern recognition.
Delays
Set video delay and processing delay, and an additional delay to adjust differences between all used trackers.
Tracker Origin
Adjust the origin point and origin orientation in the studio for all trackers.
Tracking Mode
Select the mode in which the camera is tracked (only DVP tracked cameras).
DVP Loading
Initialize and test the DVP and the pattern recognition quality.
Blocking
Not used for ProSet in the current version, used only for CyberSport.
Observer
Not used in the current version.
Ranges
Set the ranges of the zoom and focus encoders on a sensor head.
Each tab is explained in detail in the sections below.
Axes Offsets
In the Axes Offsets tab, you can configure an offset from the studio coordinate system to the actual orientation and position of the camera. The offset is applied to all frames rendered from the selected camera. When axes offsets are correctly configured, the Orientation values in the Tracker data are zero.
The Axes Offsets tab is often used to correct pan/tilt values after power to the sensors is cycled off and on, or after the sensors are reset. The camera will then report all zeros, indicating that the camera is at right angles in relation to the studio coordinate system. Since the camera will actually have some pan, tilt, or roll, this shift between the actual orientation of the camera and the reported, origin-based orientation must be explicitly added to the reported tracking information. The values on the Axes Offsets tab are usually all zeros. The following table describes the parameters on the Axes Offsets tab: Parameter
Description
Pan
The angle added to the reported pan of the camera.
Tilt
The angle added to the reported tilt of the camera.
Roll
The angle added to the reported roll of the camera.
X
The distance (in meters) added to the reported X value of the camera.
Y
The distance (in meters) added to the reported Y value of the camera.
Z
The distance (in meters) added to the reported Z value of the camera.
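The arithmetic behind this tab is simple addition. The following minimal Python sketch (not ProSet code; names are illustrative) shows the configured offsets being added to the values reported by a tracker:

    # Illustrative only - TrackingSet performs this internally.
    def apply_axes_offsets(reported, offsets):
        """Add the configured Axes Offsets to the values reported by the tracker.
        Angles (pan, tilt, roll) are in degrees; distances (x, y, z) in meters."""
        return {axis: reported[axis] + offsets.get(axis, 0.0)
                for axis in ("pan", "tilt", "roll", "x", "y", "z")}

    # Example: correct a sensor head whose reported pan is off by -1.5 degrees
    corrected = apply_axes_offsets(
        {"pan": 0.0, "tilt": 2.0, "roll": 0.0, "x": 1.2, "y": 3.4, "z": 1.8},
        {"pan": 1.5})
    print(corrected["pan"])   # 1.5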
Panel Filters
A camera always has a small degree of movement. This movement shows in the rendered frames. TrackingSet can be configured to ignore a certain amount of movement by computing camera position in different ways. Three filters are available and they are used for pattern recognition tracking only. Parameter
Description
Position
Uses the difference in the camera position in relation to the Orad grid panel. Position is computed from the average position in each of the frames included within the Previous-Future range. Previous - Select a previous frame. This frame is in relation to the current frame. Future - Select a future frame. This frame is in relation to the current frame. The future filter also adds to the total rendering delay of the system. Typical values are 6 previous frames and 0 future frames.
Matrix
Uses a matrix of the Orad grid panel created by the DVP computer that includes camera position information to locate the camera position between frames. Previous - Select a previous frame. This frame is in relation to the current frame. Future - Select a future frame. This frame is in relation to the current frame. The future filter also adds to the total rendering delay of the system. Typical values are 2-6 previous frames and 0 future frames.
Sticky
Defines the offset in number of fields, typically 8 to 10 fields. The system tries to compute the difference in camera position. If the difference is less than the threshold value, the previous camera position is used for rendering. Offset: the number of frames used to calculate the camera position. The difference is computed between each frame. Threshold: 0.1 – 1.0. The larger the value, the greater the camera movement that is allowed before the camera is considered to have moved.
If the Orad Grid Panel tracking is not used, this option is disabled.
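The filter behavior described above can be pictured with the short Python sketch below. It is only an illustration of the averaging and sticky logic, under the assumption that positions are simple (x, y, z) tuples; it is not ProSet code and it ignores the Matrix filter, which works on the DVP's internal grid matrix.

    from collections import deque

    class PositionFilter:
        """Average the reported camera position over a window of previous
        (and optionally future) fields; future fields add rendering delay."""
        def __init__(self, previous=6, future=0):
            self.window = deque(maxlen=previous + future + 1)

        def update(self, position):               # position = (x, y, z) in meters
            self.window.append(position)
            n = len(self.window)
            return tuple(sum(axis) / n for axis in zip(*self.window))

    class StickyFilter:
        """Keep the previous camera position while the measured movement stays
        below the threshold (0.1 - 1.0 in the GUI)."""
        def __init__(self, threshold=0.3):
            self.threshold = threshold
            self.last = None

        def update(self, position):
            if self.last is None or \
               max(abs(a - b) for a, b in zip(position, self.last)) >= self.threshold:
                self.last = position              # the camera really moved
            return self.last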
Each filter can be turned on and off separately, and has its own parameters. If used in the correct circumstances, they can improve tracking stability in PR mode. They are generally set once by the engineer installing the system.
CAUTION: Use extreme care when adjusting the panel filters. It is recommended to use the settings made during system installation.
Lens Parameters
In the Lens Parameters tab, you specify the lens parameters for the current camera. There is often a slight discrepancy between the reported view angle and the actual angle. The Focal Scale parameter can be used to correct this discrepancy. If calibration has been done by TrackingSet, or the data in the calibration file for the lens meets the requirements, these values are provided automatically. The parameters in this tab are used when the lens calibration file is for this type of lens, but the lens optics are slightly different. VScale and VShift scale incoming data only. It is recommended to adjust Focal Scale rather than VScale or VShift.
System Configuration
NOTE: In ProSet, the view angle of a lens is the vertical view angle. Instead of using the horizontal view angle, the aspect ratio is indicated. The aspect ratio is the ratio between the horizontal width and the vertical height of the viewing area of the camera. Typical aspect ratios are predefined for PAL and NTSC. Custom values are also possible.
The following table describes the parameters on the Lens Parameters tab: Parameter
Description
Default View
Defines the view angle of the camera lens, in degrees. This parameter is disabled if tracking provides view angle information. To enter a value manually, set the lens at full zoom out and enter the view angle value from the lens documentation.
Default Nodal
Defines the distance between the beginning of the lens and the nodal point. The nodal point is always on the optical axis of the lens. The beginning point of the view angle (the vertex of the view angle) is assumed to be at the beginning of the lens (the mounting ring). This parameter is disabled if tracking provides nodal shift information. If there is no way to determine the position of the nodal point, enter a zero. This sets the nodal point to coincide with the beginning of the lens.
Aspect
Defines the aspect ratio of the picture, calculated by dividing the width by the height of the picture. System – NTSC or PAL. Ratio number – Enter an aspect ratio for custom values. Must be defined for Orad grid panel tracking. Other tracking systems provide this information automatically, and this parameter is disabled.
Vscale
Scales the view angle by the specified value. This parameter can be used for adjusting calibration, however it is recommended to use the Focal scale parameter instead.
VShift
Shifts the view angle by the specified value. This parameter can be used for adjusting calibration, however it is recommended to use the Focal scale parameter instead.
Focal Scale
Defines the scaling factor for the focal or image distance of the lens.
CCD Centering
Enter the shift value to add to the reported CCD center so that the calculated CCD center matches the optical axis. The optical axis of the lens is not always exactly in the middle of the CCD. CCD Centering is the shift that must be added to the received value from tracking. X, Y: The coordinates on the CCD plane, in pixels. Partial pixel settings are allowed.
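The relations referred to above can be sketched as follows. This is a hedged illustration only: the pinhole formula linking vertical and horizontal view angles is standard optics, but the exact way ProSet combines VScale, VShift, and Focal Scale internally is an assumption here.

    import math

    def horizontal_view_angle(vertical_deg, aspect):
        """Standard pinhole relation: tan(hfov/2) = aspect * tan(vfov/2),
        where aspect = width / height of the viewing area."""
        v = math.radians(vertical_deg)
        return math.degrees(2.0 * math.atan(aspect * math.tan(v / 2.0)))

    def adjusted_view_angle(reported_deg, vscale=1.0, vshift=0.0):
        """Assumed reading of VScale/VShift: scale the reported vertical view
        angle, then shift it. The manual recommends adjusting Focal Scale
        instead of these two values."""
        return reported_deg * vscale + vshift

    print(horizontal_view_angle(30.0, 4.0 / 3.0))   # ~39.3 degrees for a 4:3 picture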
Mounting Shifts
Camera mounting shift values are the distances from the center of the pan and tilt axes of the camera head, or from the LED connected to the camera (in IR tracking system), to the mounting ring of the lens. A shift value is determined for each axis. All shift values are measured in meters. The following table describes the parameters on the Mounting Shifts tab: Parameter
Description
Pan/Tilt head
Offset between pan and tilt rotation and the beginning of the lens. X shift: the distance from the pan movement axis on the head to the optical axis of lens, along the X-axis. Value is positive when the camera is standing to the right of the head from the camera operator position (usually equal to zero). Y shift: the distance from pan movement axis on the head to the beginning of the lens along the Y-axis. Value is positive when the lens is farther than the center of the head from the camera operator position. Z shift: the distance from the tilt movement axis on the head to the optical axis of lens along the Z-axis. Value is positive when the camera is above the head. P/T Cross - The distance, if any, between the pan and tilt axes on the optical axis of the lens. This parameter is usually zero.
This parameter is positive when the tilt axis is farther away from the camera operator than the pan axis. Head Balance - Not supported in the current version. The value is always zero.
LED
Enabled only when an IR tracking system is used. X, Y, Z - Offset between the LED and the beginning of the lens. X defines the distance between the LED and the nodal point of the lens in the left-right direction. Value is positive if the LED is left of the camera from the camera operator's position (back of the camera). Usually equal to zero. Y defines the distance between the LED and the nodal point of the lens in the front-back direction. Value is positive if the nodal point is farther from the camera operator than the LED. Z defines the distance between the LED and the nodal point of the lens in the up-down direction. Value is negative if the LED is higher than the lens. Usually a negative number.
DVP Settings
The DVP uses pattern recognition to analyze the data from the grid and translate it to camera information (position, orientation, and FOV). In order to have the DVP process only the relevant part of the studio (e.g., the part where the grid is visible in the view of the camera), the chroma values (U and V) should be set to the narrowest range possible at which the DVP is still able to identify both blues (not two colors, but two types of blue). The range should be big enough to include both blues and small enough to exclude any other colors from processing. The threshold sets the luminance so that the DVP can separate the blue lines from the blue background. In order to see the changes, one of the DVP tests (such as the edge test or chroma test) should be activated (see the DVP Loading tab). The following table describes the parameters on the DVP Settings tab: Parameter
Description
U and V
Enter a value or use the sliders to adjust the color range. These values should be checked if cameras are changed or Set lighting is changed. Lighting changes can change the color of the grid and pixels might not be recognized as part of the grid.
Gradient threshold
Enter a value or use the slider to set the luminance value by which the DVP will identify the edges between blue lines and the background on the panel grid. 0 is black (no change); 255 is white (largest change). The default range should be 12-20.
Apply
Applies the settings to the DVP.
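Conceptually, the U/V range and the gradient threshold act as the two acceptance tests sketched below (illustrative Python, not DVP code; the actual DVP processing is far more involved).

    def in_chroma_window(u, v, u_range, v_range):
        """A pixel is treated as part of the blue panel only when its chroma
        falls inside the configured U and V windows."""
        return u_range[0] <= u <= u_range[1] and v_range[0] <= v <= v_range[1]

    def is_grid_edge(luma_a, luma_b, gradient_threshold=16):
        """An edge between the two shades of panel blue is detected where the
        luminance step exceeds the Gradient threshold (default range 12-20)."""
        return abs(luma_a - luma_b) >= gradient_threshold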
Delays
When using graphics for the background, the foreground signal must be delayed to compensate for the time the system needs to calculate the camera parameters and render the graphics. The system delays the video signal from input to output according to the amount of time it takes for the background to be created. The amount of delay may vary from one type of tracker to another. Enter the delays (in fields) to make changes and click Apply to set them.
NOTE: These units are in fields, not frames.
The following table describes the parameters on the Delays tab: Parameter
Description
Video delay
Total Foreground delay. The time (in fields) that the video is delayed. Units are always even (2, 4, etc) since the FG delay is always in frames. This is set for the DVP and not for each camera.
Process delay
Delay in fields for the DVP to process the pattern recognition. Default value is set to 4 and normally this value should not be changed.
Delay reports
Additional delay in the tracking reports (in fields). Used when the total system delay has an odd field number (because the video delay must be full frames, one additional field must be added to all trackers), or when you use tracking systems with different internal processing delays, for example sensors and the DVP (in such a situation a faster system experiences less delay).
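The bookkeeping described above can be illustrated with a small sketch. Assumptions: delays are expressed in fields, and the goal is simply to make every tracker line up with a foreground delay that is a whole number of frames; ProSet applies the real values through this tab.

    def align_delays(tracker_delays_fields):
        """Choose a foreground video delay (an even number of fields, i.e. full
        frames) that covers the slowest tracker, and give each faster tracker an
        extra report delay so that all trackers match the delayed video."""
        slowest = max(tracker_delays_fields.values())
        video_delay = slowest + (slowest % 2)        # round up to a full frame
        report_delay = {name: video_delay - d
                        for name, d in tracker_delays_fields.items()}
        return video_delay, report_delay

    # Example: a sensor head that reports one field faster than the DVP
    print(align_delays({"dvp": 4, "sensor_head": 3}))
    # -> (4, {'dvp': 0, 'sensor_head': 1})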
Tracker Origin
In order for the TrackingSet process to calculate the position of the trackers and the cameras, it must have the origin of the coordinate system used by the tracker. The coordinate system of the tracker is often different from the studio coordinate system, and ProSet must know the difference. There must be one coordinate system common to all tracking systems. The x, y, z values on the Tracker Origin tab represent the difference between the studio coordinate system and the coordinate system used by the tracker. If the x, y, z values are (0,0,0), then the origin of the studio and tracker coordinate systems coincide. If they are not identical, you must define the difference between them.
If the panel is perpendicular to the floor, the shift can be entered under Panel Position. If the panel is not perpendicular to the floor, the pan/tilt/roll values under Transformation are used. Transformation and Panel Position contain the same information, but it is easier to measure the position in three points and use the Panel Position values. NOTE: The angles of tilt or roll of the panel must be known precisely. All angle measurements are in degrees.
In the 3ds Max coordinate system, the XY plane is the floor. This coordinate system is also used in the TrackingSet window. The TrackingSet.cfg file uses the GL coordinate system. In GL coordinate system, the XY plane is the back of the blue box and the Z direction is perpendicular to that plane. When settings in the Tracking Set window are saved to the TrackingSet.cfg file, coordinates are transformed from the 3ds Max coordinate system to the GL coordinate system automatically. The following table describes the parameters on the Tracker Origin tab: Parameter
Description
Specify by
There are two options: Transformation – shows the offset (delta) between the tracker and the origin; e.g., if the panel is used as the origin (normally), then the point on the floor underneath the lower left corner of the panel is the studio origin. When the panel is above the floor, the distance between the grid and the floor should be calculated; in that case, the Z value will show the distance between the grid and the floor. Panel position – the actual position of the panel according to the studio origin (see above); for example, if the grid is 20 cm above the floor, then the Z value will be 0.2. The result of using either is identical; only the calculation method is different (one is the actual panel position and the other is the offset).
Pan, Tilt, Roll
The rotation of the tracker coordinate system. It is not the same as offsets, because the tilt rotation, for example, also changes the reported XYZ values (according to the new studio coordinate system).
X, Y, Z
XYZ values represent the position of the tracker coordinate system in the studio coordinate system.
Left Lower
Position of the left lower corner of the grid according to the studio coordinate system, including orientation (the point on the floor underneath the left lower corner of the grid is the studio origin).
Left Upper
Position of the left upper corner of the grid according to studio coordinate system, including orientation.
Right Lower
Position of the right lower corner of the grid according to studio coordinate system, including orientation.
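As a rough illustration of the 3ds Max/GL distinction mentioned above, the sketch below swaps the "up" and "depth" axes. The exact axis directions and signs used by TrackingSet when it writes the .cfg file are an assumption here, so verify against the values the GUI actually saves before relying on anything like this.

    def max_to_gl(x, y, z):
        """Illustrative axis swap between a 3ds Max-style frame (XY plane = floor,
        Z up) and a GL-style frame (XY plane = back wall, Z perpendicular to it).
        Signs/handedness may differ in the real conversion."""
        return (x, z, y)   # height becomes GL Y; distance from the back wall becomes GL Z

    print(max_to_gl(1.0, 2.0, 0.2))   # (1.0, 0.2, 2.0)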
Tracking Mode
The Tracking Mode tab displays and controls the type of tracking used by the chosen camera (e.g. Sensor, PR Free, LEDs).
NOTE: Used only with a DVP. Other types of trackers do not use this tab at all.
By selecting the desired type in the check boxes, the type used by the selected camera changes automatically. For some trackers, like sensors or fixed position, the system must recalculate the camera position (using the grid and, optionally, the targets) before shooting. In that case, click Recalculate position after selecting the tracker type. The following table describes the parameters on the Tracking Mode tab: Parameter
Description
Tracking Mode
List of all tracking methods. Click to select the current type. This tab is active only when the Orad grid panel camera is selected in the tracker list at the top of the TrackingSet window (see above). Free – Pattern recognition only (Orad grid panel). All camera parameters (X, Y, Z, Pan, Tilt, Roll, and FOV) are calculated from the grid. This type is not recommended for production, only for testing. Fix pos (Fixed Position) – Pattern recognition only (Orad grid panel). All parameters are calculated from the grid. The position parameters are fixed (after recalculating the camera position). Fix zoom – Pattern recognition only (Orad grid panel). All parameters except the lens parameters (Field of View) are calculated from the grid. The FOV parameters are fixed (after recalculating the camera position or inserting the values manually). Ext pos (External position) – The same as the Free type, with an IR tracking system used to calculate the position of the camera (LEDs). Ext zoom (External zoom) – Zoom information comes from another tracking system, usually a slave head. The external source must be defined in the TrackingSet.cfg file. Sensors – Only available if slave sensors are defined in TrackingSet.cfg. The camera can go outside the panel and zoom beyond the limits of the panel or the zoom ranges of other trackers. Sensor mode is similar to Fix Pos mode – the positional data is not passed from the tracker, and should be set by recalculating the camera position.
Recalculate position
Click to recalculate the position of the camera. Used in the fix camera types and Sensors.
DVP Loading
The DVP computer must be initialized before it can process Orad grid panel data. Orad provides several initialization scripts, which are installed with ProSet. These scripts are located in the DVP directory. Scripts added to this directory are listed on the DVP Loading tab after the TrackingSet process is restarted. The script named run is for the DVP 100. The script named run2 is for the DVP 200. (However, the DVP-500 is controlled from the DVP Control window that opens when you click DVP Control.) The following image shows the DVP Loading tab.
The following table describes the parameters on the DVP Loading tab: Parameter
Description
DVP Loading
Select a script from the list.
Execute
Click to execute the selected script.
DVP Tests
DVP chroma and luminance values are dependent on the studio lights, camera settings, and other factors. Before working with the DVP, the correct values should be set (see the DVP Settings tab). The following tests assist in making these settings, and at least some of them should be performed.
► To select a test:
1. Click the DVP Loading drop-down list to see the DVP tests.
2. Select a test.
3. Click Execute. The results of each test are shown on the preview monitor. If satisfactory results cannot be obtained from one or more of the tests, open the DVP Settings tab and change the U, V, and Threshold values.
4. After you obtain optimum results from the tests, click rerun2 (or rerun for a DVP 100). The DVP is reinitialized and ready to work.
The following table describes the tests: Test
Description
All Lines
Based on the detected edges (see Edges test below), the DVP shows vertical and horizontal white lines spanning the full height and width of the studio. It also shows any “false lines” – lines detected by something in the studio other than the panel. These appear as flashing white lines, obviously not connected to the panel lines. Adjustment: adjust the U/V settings in the DVP Control, or find and remove the object in the studio causing the false lines (for recurring false lines).
All True Lines
Based on the detected edges (see Edges test below), the DVP shows both vertical and horizontal white lines spanning the full height and width of the studio. It does not show any “false lines” – lines detected by something in the studio other than the panel.
Chroma
Checks whether the DVP is actually seeing all the chroma blue on the Set as blue. On the monitor, check whether the grid pattern, walls, and any blue objects appear as black, while all other colors appear as green. Adjustment: If there is any deviation from these colors, change the U/V settings in the DVP Control. Perform the chroma test again to see if the results are satisfactory.
Edges
A test to make sure that the DVP is correctly detecting the transitions from one shade of blue to another in the grid. The edges of the blue lines are outlined in white. All the grid edges should be detected, while no other edges should appear in the studio. Adjustment: Redundant edges can usually be corrected by one or more of the following:
• Smoothing the studio walls/floor
• Rounding the corners
• Improving the lighting
• Adjusting the cameras
• Adjusting the UV values
Remember to repeat the chroma test each time after changing the UV values. Repeat the edge detection test after any change is made to the studio lighting or construction.
False Lines
This test is performed like the above All Lines and True Lines tests, but shows only the false lines. Adjustment: adjust the U/V settings in the DVP Control, or find and remove the object in the studio causing the false lines (for recurring false lines).
Full Line Match
This test shows the number of horizontal and vertical lines identified by the DVP. Every pair of lines represents one DVP stripe. The long lines are the identified edges, and the short ones the non-identified or non-visible lines. The number of long lines should match the number of visible lines.
Vertical Lines
Based on the detected edges (see Edges test above), the DVP shows vertical white lines spanning the full height of the studio. It also shows any “false lines” – lines detected by something in the studio other than the panel. These appear as flashing white lines, obviously not connected to the panel lines. Adjustment: adjust the U/V settings in the DVP Control, or find and remove the object in the studio causing the false lines (for recurring false lines).
Horizontal Lines
Based on the detected edges (see Edges test above), the DVP shows horizontal white lines spanning the full width of the studio. It also shows any “false lines” – lines detected by something in the studio other than the panel. These appear as flashing white lines, obviously not connected to the panel lines. Adjustment: adjust the U/V settings in the DVP Control, or find and remove the object in the studio causing the false lines (for recurring false lines).
Rerun and Rerun2
After running the tests, the DVP should reset to the working condition with no test loaded. The number (2) indicates DVP200 and up. If there is no number, the reference is to DVP100.
Run and Run2
After power off or manual reset, the DVP software needs to be loaded in order for the DVP to perform pattern recognition. The run or run2 commands initialize the DVP with the tracking parameters and values. This command must run before all the others. Run2 is for DVP200 and Run is for DVP100.
True Vertical Lines
Based on the detected edges (see Edges test above), the DVP shows vertical white lines spanning the full height of the studio. It does not show any “false lines” – lines detected by something in the studio other than the panel.
True Horizontal Lines
Based on the detected edges (see Edges test above), the DVP shows horizontal white lines spanning the full height of the studio. It does not show any “false lines” – lines detected by something in the studio other than the panel.
Virtual View
The final and complete match between the real grid and the virtual grid. The 2 grids should be similar in design and placed one on top of the other. This test is for verifying the matching between the parameters loaded to the DVP and the real world.
Blocking
The Blocking tab is used mostly for sports broadcasts for calibration of a mechanical sensor head. In studio broadcasts, reference points that correspond with real markers in the studio space are entered here, and then used to calculate the position of the studio camera.
The following table describes the parameters on the Blocking tab: Parameter
Description
Reference Points
Measured positions that are used to calculate camera position.
Projection Method
Projection method to be used for calibration (see below).
Get plane
Chooses the plane best suited to the blocking points and calculates the offset. Used for sports broadcasts.
Offset
The offset of the projection plane from the 0 point in meters. Used for sports broadcasts.
Calculate
Calculate the landmark values entered.
Add
Add a landmark.
Clear all
Clears the landmark values entered.
Load
Load a saved model.
Edit
Edit an existing landmark.
Reset all
Resets all entered landmark values.
Save
Saves the model.
Calculating Camera Position and Orientation
► To acquire data:
1. Aim at a landmark with the camera, and completely zoom in using the center cross (viewfinder marker) as a reference point.
2. Lock the pan-tilt brakes if necessary.
3. Click Set for the corresponding landmark in the TrackingSet GUI.
4. Repeat this process for each landmark.
• Four measurements with significantly different pan and tilt values are mandatory. Seven to ten measurements are recommended for a flat field (Plane method).
• The order of the aimed landmarks is unimportant. Click Set for the corresponding landmark in the TrackingSet window after each measurement.
• Aim carefully at the landmarks. Every inaccuracy decreases the overall measurement accuracy.
• If the Set option was clicked by mistake, the Reset button deletes the current pan and tilt measurement. The Set option can then be used again for a new measurement.
• If further points are to be added and their coordinates are known, click the Add button and enter the coordinates manually.
• The Edit option enables you to edit existing points and labels.
• The Reset all option resets the current measurements for all landmarks.
• The Clear all option deletes all entries.
• The check box at the right-hand side of the landmark coordinates determines whether the point is used for the pose determination. By running the final calculation without a specific landmark, you can test whether that landmark and/or its measurement were properly taken. A re-projected point is calculated for this landmark.
Processing
After all measurements have been completed, click Calculate to calculate the results for the pose determination and the coordinates of the re-projected points (see the following figure).
The results for the pose determination are immediately displayed in the TSPositioning application. However, the results for the re-projection are not displayed.
► To use the pose determination results, perform the following:
1. Click Save to display the Save dialog box.
2. Type a proper file name with the extension .blk (= blocking). For multiple measurements, it is recommended to add further information to the file name, such as the date and place.
3. Select the with measurements check box to store the complete data.
4. Click Save to save all camera information to the TrackingSet.cfg file.
Observer
The Observer tab is not currently used by ProSet.
Ranges
The Ranges tab is used to define the range of the zoom and focus encoders that come as part of the mechanical sensor head kits. During sensor setup, the Update ranges check box is enabled, and by moving through the full range of the zoom and focus on the lens, the range of the encoders is recorded. This data is saved to a file called TrackingRanges.cfg and is automatically retrieved when TrackingSet is run.
The following table describes the parameters on the Ranges tab: Parameter
Description
Zoom range
Shows the range of the zoom encoders attached to the lens. Is updated when the Update ranges check box is selected.
Focus Range
Shows the range of the focus encoders attached to the lens. Is updated when the Update ranges check box is selected.
Update ranges
Should be cleared during normal studio operation.
Reset ranges
Resets the currently stored range value to 0; should be done prior to updating the range of the encoders.
Load
Loads a previously saved range file.
Save
Saves current range values to G:\config\TrackingRanges.cfg.
When the Update ranges check box is selected, the zoom and focus encoder ranges are updated as you move through the full zoom/focus range on the lens.
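Recording the full encoder range is what lets the rest of the system map raw zoom/focus readings onto a usable scale. A minimal sketch of that idea follows (illustrative Python only; ProSet's internal mapping is not documented here).

    def normalize_encoder(raw, range_min, range_max):
        """Map a raw zoom/focus encoder reading into 0.0-1.0 using the range
        recorded while Update ranges was selected."""
        if range_max == range_min:
            return 0.0                      # range not yet recorded
        t = (raw - range_min) / float(range_max - range_min)
        return min(1.0, max(0.0, t))

    print(normalize_encoder(512, 0, 1023))   # ~0.5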
► To start/stop tracking:
1. Right-click the ProSet Panel icon in the system tray.
2. Select Tracking > <tracking host> > Start Tracking or Stop Tracking.
Using the TrackingSet Camera GUI
The TrackingSet Camera GUI displays the list of studio cameras and provides a means to create new studio cameras, and to change, delete, and save them to a configuration file.
► To open the TrackingSet Camera GUI:
• Right-click the ProSet Panel icon in the system tray, and select Tracking > Camera Studio.
Each studio camera is related to one of the cameras in the studio, and can be defined as:
• A Static camera – a real studio camera, with fixed parameters (position, orientation, viewing angle) that do not change during the production.
• A Tracked camera – bound to a real studio camera with a tracking device assigned, whose parameters (position, orientation, viewing angle) can change and are available at all times.
The left side of the TSCamera GUI contains the host selection and the list of studio cameras already defined. If the file %CYBER_ETC%\TrackingSet.xml exists (see the Save option description below), the list of cameras is loaded from that file. Otherwise, one tracked camera is created automatically for each of the trackers defined in TrackingSet.cfg. This list can be modified by the user. The Tracker column displays the tracking device (or "Static" in the case of a Static Studio Camera) associated with the studio camera currently selected in the list. The right side of the TSCamera GUI contains the parameters of the camera currently selected in the list. These parameters can be modified if necessary.
The following list describes the parameters in the TSCamera GUI:
• Add: Creates a new studio camera and adds it to the list. By default, the created camera is static.
• Delete: Deletes the currently selected camera.
• Load: Loads a list of studio cameras from the %CYBER_ETC%\TrackingSet.xml file.
• Save: Saves the list of studio cameras to the %CYBER_ETC%\TrackingSet.xml file. The list can be retrieved using Load.
• Video In: Sets the number of the video input on the production switcher associated with the currently selected studio camera.
• Name: The name of the currently selected studio camera. Can be modified here.
• Camera On: Select this check box to enable the currently selected studio camera.
• Tracking Enabled: Select this check box to enable tracking of the currently selected camera (not available for static cameras). This function allows the "freezing" of a tracked camera.
• X, Y, Z: Position of the currently selected studio camera. Applicable for static cameras only.
• Pan, Tilt, Roll: Orientation of the currently selected studio camera. Applicable for static cameras only.
• View Angle: Viewing angle of the currently selected studio camera. Applicable for static cameras only.
• CCD X (Y) Offset: CCD offsets in the horizontal (X) and vertical (Y) axes of the currently selected studio camera. Applicable for static cameras only.
• Distortion: Distortion values of the currently selected studio camera. Applicable for static cameras only.
• Aspect: Aspect ratio of the currently selected studio camera. Applicable for static cameras only.
• Get Current Position: Retrieves the viewing parameters of the currently selected studio camera and inserts them into the position, orientation, viewing angle, distortion, and aspect ratio fields.
• Close: Closes the TSCamera GUI.
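As a rough illustration of what the Save and Load options persist, the sketch below parses a camera list from XML. The element and attribute names are invented for the example; the actual schema of %CYBER_ETC%\TrackingSet.xml is not documented here.

```python
# Hypothetical sketch only: the real TrackingSet.xml schema is not documented
# here, so the element and attribute names below are invented purely to show
# how a studio-camera list could be stored and reloaded.
import xml.etree.ElementTree as ET

SAMPLE = """
<StudioCameras>
  <Camera name="Cam1" tracker="Tracker1" videoIn="1" enabled="true"/>
  <Camera name="WideStatic" tracker="Static" videoIn="2" enabled="true"
          x="4.0" y="1.6" z="-3.0" pan="12.0" tilt="-2.0" roll="0.0" viewAngle="28.0"/>
</StudioCameras>
"""

def load_cameras(xml_text):
    cameras = []
    for node in ET.fromstring(xml_text).findall("Camera"):
        cameras.append({
            "name": node.get("name"),
            "static": node.get("tracker") == "Static",   # "Static" marks a static camera
            "video_in": int(node.get("videoIn")),
            "enabled": node.get("enabled") == "true",
        })
    return cameras

print(load_cameras(SAMPLE))
```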
Depth of Field Control
With the latest version of RenderEngine, you can choose which scenes belong to the background and are out of focus, and which foreground scenes will stay focused. You can also determine the level of focus at any zoom. The defocus is controlled by the zoom and the focus of the camera lens. Those values are tracked and then applied in real time. Starting zoom and focus values must be set for the defocus starting point (in focus) and for the defocus end point (out of focus).
NOTE: The depth of field control requires a separate license for RenderEngine.
To calibrate for DoF:
1. Right-click the ProSet Panel icon and select DoF panel. The DoF panel opens.
2. Enter the host name, and click Connect.
NOTE: When working with multiple hosts, you must set up each one using Connect/Disconnect and the following steps.
3. Select the Depth of Field check box. The zoom parameters are enabled.
4. Click Calibrate.
5. Set the minimal and maximal zoom and focus ranges using the slider and the appropriate buttons.
6. To set focus parameters, select the Focus check box.
7. Set the minimal and maximal zoom and focus ranges using the slider and the appropriate buttons.
8. For rundowns/playlists with defined VSlots, set the VSlot threshold. When defocus is applied, scenes in VSlots from 0 up to the last slot defined here will be defocused. For example, if you set the VSlot threshold to 3, then slots 0 to 3 will all be out of focus when the effect is applied. Tickers, lower thirds, station logos, etc., should be placed in a higher slot so that they remain in focus.
9. Use the Presets (or create a preset) to remember the settings from one session to another.
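The calibration above effectively records an in-focus and an out-of-focus reference between which the defocus strength is interpolated at run time. The Python sketch below is a conceptual model only, not the RenderEngine implementation; all variable names are illustrative.

```python
# Conceptual model only: defocus amount interpolated between the calibrated
# "in focus" and "out of focus" zoom values, plus the VSlot threshold rule.

def defocus_amount(zoom, zoom_in_focus, zoom_out_of_focus):
    """Return 0.0 at the calibrated in-focus point and 1.0 at the
    out-of-focus point, clamped in between."""
    span = zoom_out_of_focus - zoom_in_focus
    if span == 0:
        return 0.0
    t = (zoom - zoom_in_focus) / span
    return max(0.0, min(1.0, t))

def vslot_is_defocused(vslot, threshold):
    # With a VSlot threshold of 3, slots 0..3 are defocused; higher slots
    # (tickers, lower thirds, logos) stay in focus.
    return vslot <= threshold

print(defocus_amount(zoom=0.75, zoom_in_focus=0.2, zoom_out_of_focus=1.0))  # 0.6875
print(vslot_is_defocused(vslot=5, threshold=3))  # False
```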
Walkthrough
The TSWalkthrough window allows you to change the position of the virtual camera within the Set for viewing as output, without changing the tracked camera in any way.
To use walkthrough:
1. Right-click the ProSet Panel icon and select Walkthrough. The TSWalkthrough window opens.
2. Select your HDVG from the Host drop-down list.
3. Clear the Bypass check box.
4. Set the available parameters as follows:
• Pan: The pan angle of the selected camera. Use the dial to adjust this angle.
• Tilt: The tilt angle of the selected camera. Use the slider to adjust this angle.
• Roll: The roll angle of the selected camera. Use the dial to adjust this angle.
• View Angle: Enter a value to change the camera view angle.
• Track: The horizontal position of the virtual camera (X).
• Dolly: The vertical position of the virtual camera (Y).
• Height: The elevation of the virtual camera (Z).
• Bypass: Selected by default. When this check box is selected, the actual camera position, taken from the tracking system, is used. Clear this check box to allow adjustment of the virtual camera.
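Functionally, the Bypass check box selects between the tracked camera pose and the manually set walkthrough pose. The following minimal sketch assumes exactly that behaviour; the field names are illustrative only.

```python
# Minimal sketch of the Bypass logic, assuming the renderer simply chooses
# between the tracked camera pose and the walkthrough pose. Names are invented.

def effective_camera(tracked_pose, walkthrough_pose, bypass):
    # Bypass selected (default): use the real camera position from tracking.
    # Bypass cleared: the virtual camera follows the walkthrough controls.
    return tracked_pose if bypass else walkthrough_pose

tracked = {"pan": 10.0, "tilt": -3.0, "roll": 0.0, "x": 2.0, "y": 1.5, "z": 4.0}
manual  = {"pan": 45.0, "tilt":  0.0, "roll": 0.0, "x": 0.0, "y": 1.7, "z": 0.0}
print(effective_camera(tracked, manual, bypass=False)["pan"])  # 45.0
```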
3. Set Creation
About this Chapter
This chapter explains the philosophy behind Set creation. It includes:
• Set Design
• Scenography
• Creating a Virtual Set
• Single vs. Parallel Set Geometry
• Using Textures to Provide Realism
• Measurements
Set Design
Virtual Set creation is an integral part of virtual studio production. ProSet follows an open environment philosophy. Except where noted, Set design for ProSet follows the normal 3D modeling procedure in 3ds Max. Textures used in a Set can be:
• Imported texture libraries.
• Scanned-in photographs.
• Rendered from various rendering packages.
• Hand painted via a paint package, such as Adobe Photoshop, Corel Draw, or VDS Twister.
Proprietary software tools within ProSet combine all the above features, producing a Set that can be rendered from any camera view in real time.
Scenography
Two major practical considerations influence the software aspects of Set design:
• Hardware Limitations.
• Blue Box Dimensions.
Hardware Limitations
The complexity of a Set is limited by the graphics card being used. There are always limits imposed by:
• The number of polygons that can be processed in real time.
• The size of the texture memory.
• The display speed of the system.
Polygon Count
The following factors determine the maximum polygon count that can be processed in real time:
• Depth complexity.
• Geometrical complexity.
• Polygon size.
• Whether or not polygons are lit.
• Number of edges.
• Sizes and number of textures.
• Number of transparent objects.
A reasonable working limit is 60 Set redraws per second in NTSC, 50 in PAL.
Blue Box Dimensions
The dimensions of the blue box determine the region where the talent will perform. This area must be defined in advance, since it may dictate the location of virtual objects.
Creating a Virtual Set
Creating a virtual Set involves the following steps:
• Creating the virtual Set geometry
• Mapping the textures onto the geometry
• Rendering the Set with the desired lighting environment using texture-baking tools
• Optimizing for image quality (texture setting)
• Loading the Set for real-time viewing in 3Designer or ProSet
Creating the Virtual Set Geometry
ProSet uses scenes created with 3Designer. 3Designer can import VRML2 files from a variety of modeling packages, such as 3ds Max, Softimage, and Maya. In 3Designer, optimize the Set for use in ProSet, create animations, and assign exports to scene elements for real-time data updates. Throughout this portion of the manual, 3ds Max is the modeling package referred to; for other packages, see the manufacturer documentation for information about comparable tools. This chapter outlines general concepts. See Set Conversion, p.67, for the key processes used in 3ds Max.
When building a Set, try to minimize the number of polygons used to create the geometry. Look out for the following:
Hidden Polygons
Polygons that will never be in the camera view are called hidden polygons. They should be removed from the Set geometry.
To locate hidden polygons, create camera viewpoints within the Set that coincide with the views you desire to get from real cameras in the studio. If the studio cameras move in the x-, y-, and z-directions, the created viewpoints should match the extreme positions of the studio cameras. In this way, you can view the Set geometry from the viewpoints of the studio cameras and determine which polygons are in the camera’s view and which ones are not. Generally, a Set is built of whole objects. Some of these objects intersect each other. It is recommended to perform Boolean operations to identify and remove polygons that are hidden inside of objects.
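One simple heuristic for finding removal candidates, sketched below in Python, is to flag faces whose normals point away from every planned camera viewpoint. This is only an illustration, not an Orad tool, and it ignores occlusion by other objects.

```python
# Illustrative heuristic only: a face whose normal points away from every
# planned camera viewpoint can never be front-facing in any studio shot,
# so it is a candidate for removal. Real sets also need occlusion checks.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def is_candidate_hidden(face_center, face_normal, camera_positions):
    for cam in camera_positions:
        to_camera = sub(cam, face_center)
        if dot(face_normal, to_camera) > 0:   # front-facing for this camera
            return False
    return True

cams = [(0.0, 1.7, 5.0), (3.0, 1.7, 4.0)]                 # extreme studio positions
print(is_candidate_hidden((0, 1, 0), (0, 0, 1), cams))    # False: faces the cameras
print(is_candidate_hidden((0, 1, 0), (0, 0, -1), cams))   # True: always back-facing
```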
Mapping the Textures onto the Geometry
Using textures instead of polygon objects can dramatically reduce the polygon count of a Set. Most modeling packages provide excellent texture mapping tools to accomplish this task, and may allow texture baking.
Rendering the Set with Desired Lighting
Establishing a lighting environment for a virtual Set is crucial to the overall realism of the Set. A lighting environment illuminates the Set, creates shaded colors, enhances the mapped and procedural textures, and generates the necessary shadows. These aspects bring out the full 3D features of the Set. For instance, 3ds Max provides various lighting tools to add lighting information to these textures. In most cases, choose the lighting environment that produces the best visual result, even though it may take longer to render. Using texture-baking tools in 3ds Max, the lighting effects can be 'baked' onto the textures; this allows the designer to create realistic lighting without the need for many lights in the scene.
Single vs. Parallel Set Geometry
Set design can be approached in one of two ways:
• Single Set geometry – the original Set design runs in real time.
• Parallel Set geometry – the original design is too complex to run in real time, so a similar but simpler parallel Set is designed to run in real time.
Regardless of Set geometry, it is always necessary to:
• Define the animations in 3Designer
• Assign the animation events in the pages created with PageEditor
Single Set Geometry (Simple Geometry)
Single Set geometry consists of one Set in which the geometry, and the final textures mapped to this geometry, run in real time. The procedure for building the Set varies slightly, depending on whether or not the textures contain lighting information. The simplest Set consists of:
• A low-polygon-count model that runs in real time.
• Light textures: either photographs of real-world environments or lighting information already painted onto the textures.
• Textures mapped to the Set geometry.
No other lighting environments need to be established, and after some fine-tuning, the Set may be downloaded for real-time display.
If none of the textures has embedded lighting information, you must decide whether to illuminate the Set. Flat textures produce a synthetic look; illumination enhances realism.
• If a synthetic look is the objective, the Set can be immediately downloaded for real-time display.
• If a realistic look is the objective, add lighting information to the Set as follows:
a. Establish the desired lighting environment.
b. Use texture-baking tools to produce rendered textures for the entire scene.
The Set is then ready for fine-tuning and loading for real-time display.
Parallel Set Geometry (Complex Geometry)
In many cases, when a Set is created, its geometry is too complex to run in real time. However, this complexity provides the three-dimensional detail necessary for an interesting Set, and losing detail is unacceptable. ProSet provides a unique procedure to simplify the Set. This maintains the complex geometrical look, while allowing the Set to run in real time.
Simplifying the Set
To simplify a Set, follow these guidelines:
• Replace complex geometry and lighting environments with textures wherever possible.
• Analyze the geometry of the complex Set and decide which surfaces can be fully represented by textures and which surfaces must be displayed as geometry with mapped textures.
Examples:
• Ceiling and floor moldings, or pictures hanging on a wall, might require many polygons to represent their smooth, curved surfaces. If these features do not protrude from the wall by a significant amount, they could be represented by a texture image on the geometry of the wall, rather than by separate geometry.
• A plant sitting at the far end of a room. The number of polygons associated with this plant could be enormous. A single geometry panel with a plant image texture mapped to it could easily replace this plant.
The decision to replace geometry with texture is determined by answering the question: If the camera position changes, is it necessary to see the changing perspective view of the geometry? For example:
• If the molding protrudes from the wall by a significant amount, it may be desirable to see the depth of the molding as the camera moves; therefore, it would be unwise to replace it with a texture. However, if the molding is flush with the wall, then replacing the molding with a texture provides a simplification in geometry.
• A plant's geometry usually has much depth. However, if the plant is at the far end of the room, a change in camera position will not significantly change the perspective view. A plant image texture would be an appropriate replacement.
The extent of the geometry of the Set that needs replacing is determined by many parameters, primarily the hardware platform. See Hardware Limitations, p.54. The following parameters also influence the polygon count:
• Number of edges.
• Number of semi-transparent objects or textures with alpha.
• Number of animations active in the Set.
• Number of video panels receiving a live video source.
Replacing more geometry with a single texture frees computer resources for special effects.
Creating a Duplicate Set
Creating a duplicate Set makes a new Set that preserves the overall dimensions of the original Set, but replaces complex geometry with simple planar surfaces. For example:
• Remove molding geometry, since the wall texture contains the molding texture.
• Create a simple panel around the plant in the original Set. This new panel receives the plant texture.
The polygon reduction process produces a simple geometrical representation of the complex Set, possibly grouped, but without textures or lighting characteristics.
Design Approach
When working with complex geometry, you must work through all three of the following stages:
• Geometry simplification.
• Texture mapping.
• Rendering lighting effects with texture baking.
All three stages are required because, for every Set type, entirely new textures must be created for the new, simplified geometry. Once the simplified real-time Set is finished, it can be loaded into ProSet for real-time display.
Using Textures to Provide Realism
Every object in the real world has texture associated with its surface. This is true of a good virtual Set as well. Two texture types create the illusion of a realistic environment:
• Textures created by scanning photographs of real-world scenes.
• Textures rendered using advanced rendering options or plug-ins for your 3D modeling package.
This section discusses some issues associated with rendered textures. To provide the illusion of realism, the following are essential:
• A sophisticated lighting environment.
• Environment mapping.
Lighting Environment
Lighting is controlled with a 3D modeling package, which supports all types of lighting needed for virtual objects and animation. For instance, the 3ds Max light types Omni, Directional, and Spotlight are converted to the corresponding real-time lights. Geometric transformations of lights, and of the objects they illuminate, can be animated in real time. However, real-time lights have some limitations compared to lights in 3ds Max:
• The number of available parameters and settings is smaller than in 3ds Max.
• The total number of real-time lights is limited to eight.
• Surface properties (materials) of the objects are simpler compared to the 3ds Max capabilities.
Therefore, when you need high quality results, comparable to the original Set, use texture baking, so that all the advanced lighting/material features are rendered into textures. On the other hand, when lights and/or objects must be animated, real-time lights should be used.
Environment Mapping
Texture mapping coordinates are not static; they are assigned dynamically per frame. These coordinates depend on the following factors:
• Camera position and orientation.
• The normal vector at each vertex of the object.
Normal vectors are generated in 3ds Max, and then the textures are assigned as environment maps. Textures can also be assigned as environment maps from the Environment tab that becomes active when you select a layer in 3Designer.
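For illustration, the classic sphere-map formula below shows why these coordinates must be recomputed per frame: they depend on both the view direction and the vertex normal. The render engine's actual environment-mapping math may differ.

```python
# Illustration only: the classic sphere-map formula. The coordinates depend on
# the view direction (camera) and the vertex normal, so they change every frame.
import math

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def spheremap_uv(view_dir, normal):
    u = normalize(view_dir)          # eye-space direction from camera to vertex
    n = normalize(normal)            # eye-space vertex normal
    d = sum(a * b for a, b in zip(u, n))
    r = tuple(a - 2.0 * d * b for a, b in zip(u, n))          # reflection vector
    m = 2.0 * math.sqrt(r[0] ** 2 + r[1] ** 2 + (r[2] + 1.0) ** 2)
    return r[0] / m + 0.5, r[1] / m + 0.5

print(spheremap_uv(view_dir=(0.0, 0.0, -1.0), normal=(0.0, 0.0, 1.0)))  # (0.5, 0.5)
```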
Creating an Infinite Blue Box
One significant benefit of a virtual Set is that its dimensions can be larger than the dimensions of the actual blue box. If the blue box has only two walls, as seen in the following image, then the region of the Set not backed by a blue box wall is normally not visible in the keyed image. The Infinite Blue Box concept allows portions of the Set outside the region of the real blue box to be visible. This is possible when ProSet generates an additional alpha (matte) signal that is fed to a chroma keyer. The matte signal is generated based on 3D geometry that is usually a model of the physical blue box. The generated matte signal matches the image of the real blue box produced by the studio camera.
There are two important advantages of the Infinite Blue Box:
• You can view the ceiling (not possible in conventional sets due to the lighting grid).
• You see a full 360-degree view of the Set.
NOTE: The talent is always limited to the floor dimensions of the real blue box. They cannot walk in, or in front of, the region outside the blue box, since it exists only virtually.
There are two approaches to blue box modeling. ProSet creates and uses the blue box model the same way, regardless of the approach:
• The model corresponds to the real blue box.
• The model corresponds to the extension of the real blue box, or its complement (the area where no real blue box exists).
The Blue Box Model in ProSet
ProSet treats the blue box geometry in a special way. It can be positioned together with the cameras, independently of the rest of the scene. This procedure ensures a seamless integration of the RGB and alpha signals generated on the same machine. The blue box file and camera positions must be matched to your studio environment. The alpha (matte) channel may be used for the infinite blue box and the depth key. The appropriate matte signal is generated automatically when the depth key is turned on. Depth key objects are independent of the blue box object.
4. Set Conversion
About this Chapter
This chapter describes the optimization process for models created in programs that do not work with Orad's suite of applications, so that they can be used as a virtual Set. It explains how to create suitable models, correct the modeling, and bake the textures. After optimization, the Set is exported to 3Designer, where additional optimizations can be made. General concepts are outlined in the Set Creation chapter. Further information on Set conversion can be found in the tutorial: Modeling, Rendering, Exporting, and Importing Virtual Sets from 3ds Max into 3Designer.
Preparation
Follow these steps in preparation for Set conversion:
1. To prepare a working directory for models, create a folder in G:\Projects\VirtualSet and name it appropriately.
2. Start your application, and open the scene you want to convert.
3. Set the unit scale to meters. It is recommended to work in meters because the units system used in 3Designer is in meters. For example, in 3ds Max:
a. Select Customize > Units Setup… The Units Setup dialog box opens.
b. Choose Metric under Display Unit Scale, and select Meters from the drop-down list.
c. Click System Unit Setup. A dialog box opens.
d. Change the System Unit Scale to 1 Unit = 1.0 Meters.
Reducing Geometry
A major limitation in real-time rendering is the number of polygons that can be used in the Set. To solve this problem, unneeded geometry should be removed; geometry that is never seen is usually unnecessary. Add a camera to your scene, and move it to the position that will be used by the studio camera. Make a note of the objects that are never visible, or only partially visible. Remove the objects that are not required, or delete the unneeded faces. If you have objects that move in and out of the Set (such as a video monitor), it is recommended not to delete any of their faces. For detailed instructions on how to apply the changes described in the following sections, see your application's user manual. More help can be found in the tutorial (Modeling, Rendering, Exporting, and Importing Virtual Sets from 3ds Max into 3Designer). The following image shows an example of a virtual Set in 3ds Max.
IMPORTANT: Give objects unique, meaningful, and unambiguous names that will be easily recognizable and easy to tell apart later.
Materials
In the Material Editor (in 3ds Max) you create the materials and textures that are used in the scene. Not all of the material types can be used – for example, reflections and ray tracing created in 3ds Max are not supported in 3Designer. See Applying Transparency and Reflection Effects to Objects, p.80, for an explanation of glass effects and the way to achieve reflections in 3Designer. It is recommended that you arrange all of your textures and materials into a multi/sub-object material. It is very important that each object in your scene gets a unique ID. A glass material should not be made in 3ds Max, as it decreases the real-time performance within 3Designer. Glass-like materials can be applied when working on the model in 3Designer (as can almost any other material with a reflection effect), without degrading real-time performance.
Attaching Objects and Welding Vertices
Any virtual object that is made up of sub-elements should be unified into one object. For example, several elements (stairs, panels, etc.) could be attached as one object and renamed 'floor'. This reduces the number of objects in the scene. To reduce the polygon count, you should also weld vertices that are located at the same or similar coordinates. It is a good idea to start with a low weld threshold and increase it until all duplicate vertices are welded. Working this way welds vertices that are no longer needed, while ensuring that the shape of the geometry remains unchanged. All objects must be converted into an editable mesh or poly.
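The welding step can be pictured as merging every pair of vertices closer together than a chosen threshold. The Python sketch below illustrates the idea and the advice to start with a small threshold; it is not the 3ds Max weld implementation.

```python
# Illustrative sketch of vertex welding: vertices closer together than a
# threshold are merged into one. Starting with a small threshold and
# increasing it gradually merges duplicates without visibly changing the shape.
import math

def weld(vertices, threshold):
    welded = []            # representative vertices kept so far
    remap = []             # index into `welded` for every original vertex
    for v in vertices:
        for i, w in enumerate(welded):
            if math.dist(v, w) <= threshold:
                remap.append(i)
                break
        else:
            remap.append(len(welded))
            welded.append(v)
    return welded, remap

verts = [(0.0, 0.0, 0.0), (0.0005, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(weld(verts, threshold=0.001))   # the first two vertices collapse into one
```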
Grouping Objects
Group all objects that do not need to be assigned textures (e.g., video walls, glass objects, etc.). Glass effects and animated textures will be applied later, in 3Designer. To aid the operator in 3Designer, call this group 'Hidden Objects'. (The following image shows a 3ds Max example of the objects that could be hidden in a scene.)
Rendering the Scene
Real-time rendering platforms have difficulty rendering lighting and shadows in real time. Therefore, use the 'Render to Texture' function to render all lighting effects as textures. This can be done for a wide range of lighting effects, from spotlight and omni lights to plug-in renderers (such as V-Ray, Brazil, etc.). Once you have added your lights to the scene, and you are satisfied with your lighting environment, you can render the scene with the 'Render to Texture' process.
'Render to Texture' allows you to render several textures that have been applied to an object into a single image. After rendering all objects with 'Render to Texture', replace the old textures by applying the new ones. Once you have completed this process, you can delete all of the lights in the scene. All lighting effects should now be rendered onto the textures.
The ID Channel
When exporting models as VRML, all objects must be assigned to channel 1. If the scene is exported to the VRML file format with objects mapped to other channels, and the file is then imported into 3Designer, the mapping becomes corrupted: none of the objects are mapped as they were before the export. This is because the VRML format only uses channel 1; any channel higher than channel 1 is ignored. If any objects are mapped to other channels, they must be changed. In 3ds Max, this is done in the Channel Info dialog box.
Exporting in VRML format
The VRML format is the only way to bring a model from 3ds Max, Maya, SoftImage XSI, etc., into 3Designer. First, create a VRML folder in G:\projects\VirtualSet\. When exporting, choose to export as VRML97, if given the option. Always choose to generate "Normals", "Indentation", and "Primitives". In the Bitmap URL Prefix, type the path of the texture folder (G:\projects\VirtualSet\Textures\). If you do not enter the path of the folder for your textures, 3Designer will not be able to read the textures when you load the model.
Importing to 3Designer
This section covers importing the newly created VRML file into 3Designer. For more information, see the 3Designer User Guide.
To import the VRML file to 3Designer:
1. Open 3Designer (Start > Programs > Orad > 3Designer 3.1).
2. Create a new scene as follows:
a. Select File > New.
b. In the Create Scene dialog box, select the Empty template to create a scene from scratch, or select a template to create a scene with previously edited parameters.
c. Click OK.
3. Select File > Import > VRML.
4. Browse to the VRML file to be imported, and click Open. The following dialog box opens.
5. Select the required options as follows:
• Use lights: Enables importing of the lights used in the VRML model. It is recommended to clear this check box.
• Double sided mesh: If you have triangles that are oriented incorrectly, and you have no tools available to reorient them (such as 3ds Max), select this check box to correct the problem.
• Reverse all normals: If all your triangles are incorrectly oriented, select this check box to invert all of the objects' normals.
• Animation Cycles: If you are importing a Flipbook VRML model, this option is enabled; here you can select which of the animation types are imported. When the Common Textures check box is selected, the textures that are used in the scene are automatically referenced by each model that is created from the flipbook; if the check box is cleared, a new instance of each texture is assigned to each model in the flipbook. The Common Lights check box is disabled and selected; the lights in the scene are automatically referenced by all the materials. When the Common Materials check box is selected, the materials that are used in the scene are automatically referenced by each model that is created from the flipbook; if the check box is cleared, a new instance of each material is assigned to each model in the flipbook.
• Use Textures: Select this check box and specify a path to import the VRML model with the accompanying textures. It is recommended to clear the Copy Textures to Directory check box.
• Use Animations: Select this check box to import animations with the scene. The following options are enabled: As Non-editable – the animation is imported as-is, and there is no possibility to change the positions or values of the keys. As Editable Keyframes – separate keyframes are created for each frame of the animation; this can take several minutes to complete, depending on the length of the animation and the complexity of the scene. Use Frame Rate allows you to set the frame rate for the incoming animation; it should be the same as the frame rate used in the VRML exporting application. Select the Do not interpolate rotation check box to correct problems with rotation animations. Usually, these problems appear as jumps in the animation.
6. Click Process. Depending upon the size of the scene, the import may take some time.
Expand and Shrink Folders in 3Designer
The VRML file that originated in 3ds Max is displayed in 3Designer. Each object has a group with the corresponding node name from 3ds Max. In 3Designer, these nodes are also referred to as objects. The following image shows the Object Tree in 3Designer, and the hierarchy of the nodes/objects that belong to the Virtual Set model.
The camera (Virtual_Set) is located at the top of the hierarchy, and below the camera is the grouping that came from the scene created in 3ds Max. Each folder/group contains subfolders and objects.
To see the objects associated with each group:
• Click the plus (+) sign to the left of the group. The subfolder opens.
To open all subfolders simultaneously, right-click an object, and select Expand/Shrink > Expand All.
Changing the Material and Colors in the Scene
By default, when you import a VRML model, the textures look darker; this is because each object gets a material and not a plain (or flat) color. The column headings in the Object Tree include Color, Textures, Shadow, Path, etc. When selecting Color, each object has a material. The default setting (seen in the Color column) is DefaultMaterial1.
The material is not 100% white; this is the reason that the model looks darker than it did in 3ds Max.
To adjust the colors for the scene:
1. Select the first object.
2. Right-click DefaultMaterial1 and select Plain. The Color tab in the Property Editor opens.
3. Change the red, green, and blue values to 255 (pure white).
4. Expand all the nodes in the Object Tree (right-click the first object, and select Expand/Shrink > Expand All).
5. Select all the objects in the Object Tree (select the second object, hold down SHIFT, and select the last object in the tree).
6. Hold down CTRL, and drag the color indicator from the original object (whose color you already changed) to the far left side of the hierarchy, to the line marker of the second object.
This applies the color to all the nested objects.
Updating Textures
When you import a VRML file in 3Designer, the path of the texture is set to G:\texture. The path must be updated to use the active project folder: G:\project\VirtualSet\Textures\. You must select the texture from G:\project\ and not from C:\Data\Project\, because the G:\project\ path is common on every system, whether it is a local or remote server. Open the Texture tab in the Property Editor, which shows the path of the texture of the selected object.
Use the browse button … at the right, to open a browser window.
Applying Transparency and Reflection Effects to Objects
When you apply a glass material to an object in 3Designer, it does not reduce the real-time performance in the same way as an effect generated in 3ds Max.
To apply transparency:
1. Select the object.
2. In the Color tab, change the Alpha value to approximately 0.2 (depending on the effect that you want to achieve).
To apply a reflection effect:
1. In the Texture tab, use the … button to select either a silver or chrome-looking texture.
2. Change the mapping of the texture under Map Type to Spheremap.
Creating an Animated Image (Flipbook)
When you want to apply texture sequences to geometry, use the Animated Image option. This allows you to run the sequence in a loop or frame by frame.
To create an animated image:
1. Select the object that you wish to apply the animated sequence to.
2. Right-click in the Texture column, and select New Texture > Animated Texture.
3. In the Texture tab, click … next to Image, and select the first texture in the sequence.
4. To start the sequence, select the Loop check box. The animated texture runs continuously.
Adding a BlueBox
A virtual blue box is used to mask areas of the real studio space that are not keyed out by the chroma keyer. These areas might include the lighting rig, the edges of the blue space, or even other studio cameras. This is achieved by creating a blue box layer in 3Designer.
To add a blue box:
1. Create a new perspective layer in the hierarchy to contain all of the geometry for the bluebox. (From the Primitives Asset strip, drag the Persp Layer icon to the Object Tree.)
2. Add a group object to the layer – this will be used to control the mask in ProSet. (From the Primitives Asset strip, drag the Group icon, and nest it beneath the new layer.)
3. Add an object or objects to the group, positioned in the areas to be masked, as shown in the following example. Remember that 1 3Designer unit equals 1 m in the studio space. The objects should be visible in both the design view and on the main program output.
4. To hide the graphics, select each object that is to be used as part of the blue box, and from the Object editor tab, clear the Draw in Color check box.
The object still generates a Key signal; this means that the object is hidden on the program channel, but masks studio objects because of the key that is generated.
It is sufficient to make only these changes for a blue box to be generated. However, if you wish to turn the whole blue box, or just some of its elements, on and off from ProSet, you must create exports for each object, and then adjust their visibility in your controller.
Finishing the Virtual Set
Once you have finished your adjustments, there is one final step to make the Set ready for use. Shrink all of the folders in the hierarchy window so that you can select the entire scene, including the lights, and then clear the Draw In Key check box in the Objects tab. By turning this option off, you tell 3Designer not to draw the alpha channel, so the scene will appear behind the talent. Of course, if you have objects that you wish to use in front of the talent, you should enable Draw In Key for those objects only. At the end of the process, you should be able to see your entire model in the local preview window.
5. Chroma Key
About this Chapter
Chroma keying is possible when using a digital compositing system. In this chapter, the Ultimatte™ system is used as an example. This chapter includes general information about the chroma key. It includes:
• Setting up the compositing system.
• The internal chroma key system.
Using Ultimatte
For information on using the Ultimatte system, go to the website at http://www.ultimatte.com/UltimatteMain/Tech%20Library.html. Orad recommends following these guidelines to get an acceptable key. Please read the Ultimatte instruction manual for more detail.
General Lighting Tips
• The level of light on the backing should be the same as the level on the subject from the key light, for the best shading.
• Try to position lights so they are pointing in the same direction as the lens, and not straight down onto the floor, to reduce glare.
• Do not use dimmers of any kind for the lights used for the background. Dimmed lighting makes the color of the blue screen less pure.
• Light the subject first, then add fill light to the backing to even it out, to keep side-lighting issues to a minimum.
HDVG Internal Chroma Keyer
The chroma key system controls the mixing of graphics and video, called compositing. The HDVG has two video outputs, each one able to display a composite of the video input and graphics, or any component of the composite – graphics only, video only, or compositing masks. Compositing is controlled per graphic feature. A feature can be added in linear mode, thereby hiding any video "behind" it and appearing "in front of" the video. Or it can be added in chroma key mode, where you can select a set of chroma key parameters to conceal a surface of a specific color, so that the graphic appears "painted" on that surface while objects of other colors remain visible. For example, a team logo may be painted onto the playing field, and players moving on and over the image do not disturb the viewed image. In addition, the chroma key system can save ten sets of chroma key parameters.
The following topics are discussed:
• Opening the Chroma Key Window
• Compositing Control
• Chroma Key Parameters
• Saving/Restoring Parameter Sets
• Display Modes (HDVG)
Opening the Chroma Key Window
To open the chroma key window:
• Right-click the ProSet Panel icon in the system tray, and select HDVG Video Controller. The DVGVideoController window opens.
Menu Options
The following menu options are available on the DVGVideoController window menu:
• Hosts: Select the HDVG host. This is useful in an HDVG network setup. The system identifies which hosts in the network are running the chroma key application and displays them in the chroma key GUI. All online hosts are listed here.
• Advanced: The following options are available in this menu:
  • Save – saves a backup of all sets.
  • Save to set… – saves the current parameters to a specific Set (see Saving/Restoring Parameter Sets, p.91).
  • Always on top – keeps the chroma key GUI always visible on the desktop.
  • Safe Routing – uses internal logic to set supported configurations on the HDVG. This information is received automatically from RenderEngine.
• Help: Displays the chroma key version.
Compositing Control
The Zoned mode option is used only by Orad's Sports applications and, in ProSet, should be left cleared. Select the compositing mode from the Mixing drop-down list. When you select Linear Key, all color settings are disabled. When you select Chromakey, you must define the color to be replaced.
• Disabled: No mixing takes place.
• Linear Key: This mode enables you to make images appear in front of the video. Chromakey parameters are disabled. The premultiplied graphics check box is enabled; this allows you to remove "background" black from the graphics before compositing. This is useful if a "black" shadow exists around the graphics.
• Chromakey: This mode enables replacing the video of the selected color only. Set the Hue using the slider. The advanced chromakey check box is enabled to allow you to remove a specific color from the video. Not used for Sport chroma key.
• Foreground Chromakey: This mode is generally used for Sports applications. It allows graphic objects to be inserted between the background video and foreground objects. This mode is not used in ProSet.
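For orientation, the sketch below shows the textbook per-pixel blend behind a linear key, including the premultiplied-graphics variant. It is a conceptual model only, not the exact HDVG pipeline.

```python
# Conceptual sketch of linear-key compositing (per pixel, values 0.0-1.0).
# This is the textbook "over" operation, not necessarily the exact HDVG pipeline.

def linear_key(graphic, alpha, video, premultiplied=False):
    if premultiplied:
        # Graphics already contain alpha; just add the attenuated video.
        return tuple(g + v * (1.0 - alpha) for g, v in zip(graphic, video))
    # Straight graphics: blend the graphic over the video by its alpha.
    return tuple(g * alpha + v * (1.0 - alpha) for g, v in zip(graphic, video))

video_px   = (0.2, 0.5, 0.3)
graphic_px = (1.0, 1.0, 0.0)
print(linear_key(graphic_px, alpha=0.75, video=video_px))  # graphic in front of video
```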
Chroma Key Parameters
The color for the chroma key compositing is defined by the HSV color space. You must set the following parameters:
Hue
Hue is the pure color, the attribute of color perception denoted by the basic colors and combinations of them. The following sliders define the range of colors to be replaced:
• Center: Selects the color.
• Width: Selects a range around the center color.
• Ramp: The blending range for the distance between the image and the field. Ramp is used to soften image borders.
Saturation
Saturation is the intensity of the color, ranging from gray and pastels to fully saturated color. The following sliders define the saturation range:
• Min: Minimum saturation setting.
• Max: Maximum saturation setting.
• Ramp: The blending range for the distance between the image and the field. Ramp is used to soften image borders. Each saturation parameter has its own Ramp option.
Value
Value is the brightness of a color. The following sliders set the value range:
• Min: Minimum value setting.
• Max: Maximum value setting.
• Ramp: The blending range for the distance between the image and the field. Ramp is used to define image borders. Each value parameter has its own Ramp option.
Additional Parameters
Two additional parameters increase compositing control:
• Gain: Gives you additional control over the chroma key compositing by applying a factor to the compositing alpha. A gain value of less than 1 makes the graphics transparent.
• Clip: Gives you additional control over the chroma key compositing by applying a limit to the compositing alpha.
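To see how these controls could interact, the sketch below builds a key alpha from the hue center/width/ramp, the saturation and value min/max/ramp, and then applies gain and clip. It is an illustrative model only, with invented parameter names, and is not the actual HDVG keyer.

```python
# Illustrative model only (not the actual HDVG implementation) of how the
# Hue, Saturation and Value parameters could combine into a key alpha.
# Each ramp softens the transition at the edge of the keyed range.

def ramp_inside(x, lo, hi, ramp):
    """1.0 well inside [lo, hi], 0.0 well outside, linear across the ramp."""
    if ramp <= 0:
        return 1.0 if lo <= x <= hi else 0.0
    return max(0.0, min(1.0, (x - (lo - ramp)) / ramp, ((hi + ramp) - x) / ramp))

def key_alpha(h, s, v, params):
    hue_lo = params["hue_center"] - params["hue_width"] / 2.0
    hue_hi = params["hue_center"] + params["hue_width"] / 2.0
    a = ramp_inside(h, hue_lo, hue_hi, params["hue_ramp"])
    a *= ramp_inside(s, params["sat_min"], params["sat_max"], params["sat_ramp"])
    a *= ramp_inside(v, params["val_min"], params["val_max"], params["val_ramp"])
    a *= params["gain"]                      # gain < 1 makes the graphics transparent
    return min(a, params["clip"])            # clip limits the compositing alpha

p = {"hue_center": 230, "hue_width": 40, "hue_ramp": 10,
     "sat_min": 0.4, "sat_max": 1.0, "sat_ramp": 0.1,
     "val_min": 0.2, "val_max": 0.9, "val_ramp": 0.1,
     "gain": 1.0, "clip": 1.0}
print(key_alpha(235, 0.8, 0.5, p))   # 1.0 on a typical blue-backing pixel
```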
Saving/Restoring Parameter Sets
The chroma key system can save ten parameter sets, denoted 1-9 and Default. At any given time, after setting the chroma key parameters, you can save the current set of parameters as a Set (Advanced > Save to set). Chroma key settings can also be saved to a Set with CTRL + (n), where (n) is a number between 1 and 9. A saved Set can be used at any time, in any zone, by selecting the Set button (Default or buttons 1-9). A Set can also be recalled with the keyboard shortcut ALT + (n), where (n) is a number between 1 and 9.
NOTE: Saved Sets can also be accessed from the chroma key toolbar in ProSet.
Saved Sets can be useful in many situations. One example would be to save the settings for a specific surface in different lighting conditions (e.g., daylight and night lighting, or shadowed).
Display Modes (HDVG)
When using HDVG hardware, additional input and output options become available in the configuration window.
Routing
The Mixing source for the internal mixer is defined using these buttons. By default, IN#3 is selected. (In HDVG firmware version 27.3 and later, the input is always #3, regardless of the setup in this GUI.) A Mixer test output may also be enabled; there are three possible modes of operation for the test output:
• Zone Map: When working in zone mode (usually in sport applications), you connect a graphic to a color zone for chroma key.
• Combined Alpha: Shows the effective graphics alpha. You can see any place where there is a graphic, and the strength of the graphic. For example, in the sports application, any place where there is no soccer player and there are graphics is seen. This is not the same as the "Graphics Alpha" display mode, which displays only the alpha output of the graphics signal.
• Processed Foreground: When working in advanced chromakey mode, the blue color is deblended from the camera video source. This is done using the Base Color controls in the Chromakey tab. This test mode enables you to see how changes to the base color affect the camera video source.
The Scope Output can be enabled on only one output connector at a time. The scope is a very simple vector scope and light meter; it is intended as an additional tool for calibrating the advanced chroma key.
Outputs
The HDVG hardware supports four digital and two analog outputs. They can be configured to display the following:
• Compositing: Mixed video and graphics. The mode used in broadcasts, etc.
• Graphics: Graphics only.
• Graphics alpha: The graphics mask.
• Mixing Source: Routes the currently selected mixing source to the output.
• Chroma key alpha: The chroma key mask.
• Compound alpha: A combined mask.
• Test Output: The test signal.
Video Config
The video input and output filters control how up-sampling from 4:2:2 (subsampled) video color to the internal 4:4:4 color is done on the input, and how down-sampling from the internal color back to 4:2:2 video color is done on the output. Internal compositing is done in full color resolution (4:4:4), because the internal graphics signal is fully color sampled, unlike video, which has half the horizontal resolution in the color channels (4:2:2). This is in fact an advantage of using the internal mixer (chroma- or linear-key) over an external system – compositing is done at 4:4:4, regardless of how the up- and down-filtering is done. Additionally, the analog output may be down-converted to SD by enabling SD mode for AOUT #1 and #2.
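As a simple illustration of the 4:2:2 to 4:4:4 conversion, the sketch below upsamples a row of chroma samples by linear interpolation and averages them back down. The HDVG's selectable input and output filters are more sophisticated than this.

```python
# Simple illustration of 4:2:2 <-> 4:4:4 chroma resampling. The HDVG's actual
# input/output filters are selectable and more sophisticated than this sketch.

def upsample_422_to_444(chroma):
    """Linear interpolation: one chroma sample per pixel pair becomes one per pixel."""
    out = []
    for i, c in enumerate(chroma):
        nxt = chroma[i + 1] if i + 1 < len(chroma) else c
        out.extend([c, (c + nxt) / 2.0])
    return out

def downsample_444_to_422(chroma):
    """Simple averaging of each pixel pair back to one chroma sample."""
    return [(chroma[i] + chroma[i + 1]) / 2.0 for i in range(0, len(chroma) - 1, 2)]

cb_422 = [0.30, 0.50, 0.10]
cb_444 = upsample_422_to_444(cb_422)
print(cb_444)                          # [0.3, 0.4, 0.5, 0.3, 0.1, 0.1]
print(downsample_444_to_422(cb_444))   # back to roughly the original samples
```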
Genlock Phase
Adjusting the genlock phase affects the timing of the output video signal relative to genlock, without affecting the 'picture'. The Horizontal phase is in pixels; the Vertical phase is in lines.
INDEX
animated image, 81
applying
  reflection, 80
  transparency, 80
axes offsets tab, 21
blocking tab, 39
calculating camera position, 40
camera mounting shift, 27
CCD (definition), 4
CCU (definition), 4
change the perspective view, 60
CoC (definition), 4
combined alpha, 93
complex geometry. See parallel set geometry
complex Set, 58
configuring
  ProSet, 7
  switchers, 15
  TrackingSet, 11
creating Set geometry, 56
delays tab, 30
depth of field, 48
design approach, 61
DOF. See depth of field
duplicate set, 60
DVG video controller, 88
DVP (definition), 4
DVP loading tab, 35
DVP settings tab, 28
DVP tests, 35
environment mapping, 63
flipbook, 81
FOV (definition), 4
GUI (definition), 4
HDVG video controller, 88
hidden polygons, 56
illusion of realism, 62
infinite blue box, 64
lens parameters tab, 24
lighting environment, 57, 59, 62, 86
lighting tips, 86
linear key, 89
mapping, 63
mounting shifts tab, 27
multiple tracking hosts, 7
observer tab, 43
opening
  DOF panel, 48
  TrackingSet GUI, 16
  walkthrough, 50
Orad grid panel (definition), 4
outputs, 93
panel filters tab, 22
parallel set geometry, 59
pattern recognition, 4
polygon count, 55
polygon reduction process, 61
PR. See pattern recognition
processed foreground, 93
ProsetTray.xml, 7
ranges tab, 43
reducing polygon count, 70, 71
replace geometry with texture, 60
safe routing, 89
scenography, 54
Set design, 54
Set design process, 58
simple geometry. See single set geometry
single set geometry, 58
start/stop tracking, 44
texture mapping, 57
textures, 54
tracker origin tab, 31
tracking
  mode tab, 33
  start/stop, 44
  types of systems, 10
TrackingSet Camera GUI, 45
TrackingSet GUI, 16
TrackingSet.cfg, 14
TSWalkthrough, 50
types of tracking systems, 10
typical studio (diagram), 13
using textures, 62
VDI (definition), 4
VRML, 73
walkthrough, 6, 50
X shift, 27
Y shift, 27
Z shift, 27
zone map, 93