Transcript
TECH DIARY
CABLES
NB tip: DO NOT REFER TO CABLES AS CORDS!! You will look like an idiot.
1. BNC CABLE
The best cable in the business is a BNC cable. This cable is coaxial, which means that it is a multi-layered cable with an inner insulated sheath. A BNC cable is most appealing because it is highly resistant to electromagnetic interference, which assures a clean, uninterrupted signal. A thick cable ensures a strong signal that is not easily interfered with, and BNC is a thick cable. The BNC cable carries an analogue video signal only. The female BNC connector is generally the input on the device, and the male BNC connector is commonly on the cable.
A BNC T-piece is used to split the signal in a multiple-screening environment (i.e. a multi-display setup). The T-piece splits the signal; however, this weakens the signal. A distribution amplifier can be used to restore the strength of a split signal.
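To see why splitting weakens the signal, here is a simplified sketch in Python (an idealised model I am assuming for illustration; it ignores impedance mismatches and cable losses, which matter in practice):

    import math

    def split_loss_db(n_ways):
        # Ideal power loss when one source drives n loads,
        # e.g. daisy-chained BNC T-pieces (simplified assumption).
        return 10 * math.log10(1 / n_ways)

    print(split_loss_db(2))  # about -3 dB for a two-way split
    print(split_loss_db(4))  # about -6 dB for a four-way split

A distribution amplifier sidesteps this loss by actively buffering one input to several outputs at full strength.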
2. RCA CABLES
These are also analogue, but are not professional. They carry both video and audio signals. The RCA is also coaxial; however, because the cable is thin, it is vulnerable to interference. The RCA does not have the insulated sheath, which makes the BNC the better option.
3. FIREWIRE CABLES
FireWire cables are digital and carry both video and audio signals. FireWire is also referred to as the 1394 protocol or IEEE 1394. This is one of the fastest connections available. FireWire 400 and 800 are available; FireWire 800 is faster than USB 2. FireWire plugs are referred to by pin count: there is either a 4-pin end (small) or a 6-pin end (large).
4. S-VIDEO CABLE/SVHS CABLE/YC CABLE
These carry video ONLY. They have less chroma noise and are much cleaner because the luminance and chrominance travel on separate wires straight from the camera.
5. REMOTE CABLE (9-PIN)
This is a control cable, also available in 15-pin and 26-pin versions.
6. XLR/CANNON XLR
This is an audio-only cable (also available is the brass-shielded XLR).
7. JACK
One ring on the plug is mono (TS); two rings are for stereo (TRS).
PATHWAYS
COMPOSITE
A composite video signal carries the luminance and the chrominance channels in a single signal. The biggest problem with composite video is that luma information can leak into chroma, and vice versa. This leakage can lead to noise, known as chroma crawl, which can be amplified by the video capture process and reduce the overall quality. Composite video is generally understood to be the lowest-quality video signal that you can use and is more appropriate for video delivery (broadcast) than as a source for content creation (video editing, DVD authoring, or encoding for the web). Common sources of composite video are VHS (1/2-inch tape) VCRs and camcorders, broadcast television signals, and older U-Matic 3/4-inch professional VTRs. The connectors common to composite video are RCA or Cinch for consumer equipment and BNC (Bayonet Neill-Concelman), a twist-locking connector common to professional and broadcast video equipment. As the quality of the RCA or BNC cable that is used increases, the noise and attenuation decrease.
COMPONENT
Component is the best pathway, often loosely referred to as RGB. It is thick and video only. It concentrates on the colours, especially red, green and blue, plus black and white. Component analog, also known as YUV (Y for luminance, U for one chroma channel, and V for the other chroma channel), was the professional and broadcast standard for many years, and it is still widely used today. With component analog, the luminance signal and the two color signals are all transmitted over their own dedicated cables. Because all three components or channels of the video signal are transmitted independently, the quality of the signal is quite high. Noise is very low and the colors in the video are richer and more precise. In broadcast and professional environments, component analog has mostly yielded to digital video formats. Component analog has become very popular in the consumer market, though, as the preferred format for connecting home DVD players to new television sets, owing to the pristine nature of the signal compared to S-Video and composite formats. In professional environments, the component analog signals are carried by three individual BNC cables because of their ability to accommodate long runs and locking connections. In new home DVD players, it is common to see RCA or Cinch connectors used that are color coded red, green, and blue to make it easy to connect the DVD player correctly.
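How Y, U and V relate to RGB can be made concrete. A minimal Python sketch using the textbook BT.601 weights (the formula is standard, but the sketch is my addition, not something specified in these notes):

    def rgb_to_yuv(r, g, b):
        # Analogue YUV from RGB values in the range [0, 1].
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
        u = 0.492 * (b - y)                     # blue-difference chroma
        v = 0.877 * (r - y)                     # red-difference chroma
        return y, u, v

    print(rgb_to_yuv(1.0, 0.0, 0.0))  # pure red: low Y, negative U, high V

Carrying y, u and v on their own dedicated cables is exactly what keeps the channels from leaking into each other, unlike composite.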
DEPTH OF FIELD:
A useful and subtle technique for controlling the complexity of images is depth of field. By managing the depth of field, you can keep unimportant objects in the background or close foreground out of focus while clearly showing the actual subject of the shot. Depth of field is very common in cinematography and photography, so it looks natural to the audience. If there is not enough spatial separation among objects, backing up the camera and using a telephoto lens narrows the region in focus.
FOCAL PLANE
Pulling focus keeps the crowd out of focus.
Great DOF/shallow DOF:
Great DOF - everything is sharp.
Shallow DOF - one thing is in focus.
Definition: the area within which objects located at various distances are in focus.
MANIPULATING DOF
There are 3 ways to manipulate DOF: iris, focal length & camera-to-object distance (the three are tied together by the formulas sketched after this list). Focal length = how far you are zoomed in on the object.
1. Iris (aperture)
Controls the amount of light. How much you open or close the iris manipulates the DOF. F-stop 4 - 8 is optimal; the f-stop indicates the size of the iris opening, and the smaller the number, the bigger the opening.
Wide-open iris = shallow DOF
Small iris = great DOF -> opening up the iris = less control over manipulating the DOF.
2. Focal length
The distance from the optical centre of the lens to the front surface of the camera's target, measured in millimetres. Changing focal length is known as zooming & is done via the zoom ring. Choose the focal length in millimetres.
Long FL = shallow DOF
Short FL = great DOF
3. Camera-to-object distance
The shorter the distance between the camera & the person/object, the shallower the DOF.
Focus ring: anti-clockwise -> focus moves in front of the previous object/closer; clockwise -> focus moves behind the previous object/away.
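All three factors combine in the standard thin-lens depth-of-field formulas. A minimal Python sketch of that relationship (the function name and the 0.03 mm circle-of-confusion value are assumptions for illustration; use the value appropriate to your camera's sensor):

    import math

    def depth_of_field(focal_mm, f_stop, subject_m, coc_mm=0.03):
        # Near/far limits of acceptable focus (thin-lens approximation).
        f = focal_mm
        s = subject_m * 1000.0                 # work in millimetres
        h = f * f / (f_stop * coc_mm) + f      # hyperfocal distance
        near = s * (h - f) / (h + s - 2 * f)
        far = s * (h - f) / (h - s) if s < h else math.inf
        return near / 1000.0, far / 1000.0     # back to metres

    # Long lens, wide-open iris -> shallow DOF (a few centimetres):
    print(depth_of_field(focal_mm=85, f_stop=2.0, subject_m=3))
    # Short lens, small iris -> great DOF (near limit to infinity):
    print(depth_of_field(focal_mm=24, f_stop=8.0, subject_m=3))

At the same subject distance the first case keeps roughly 15 cm in focus, while the second keeps everything sharp from about 1.3 m to infinity.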
DEALING WITH MOVING OBJECTS
- In the focal plane -> don't change the composition.
- Moving out of the focal plane -> keep a finger on the AF button ("push the tit") so that you can track the movement and keep the focus.
CAMERA MOVEMENTS
THE MOST IMPORTANT TIP IS THAT WITHIN ONE SHOOT ONE MUST MAINTAIN A CONSISTENT APPROACH TO SHOTS. ONE MUST SYNCHRONISE MOVEMENTS AND FOCUS.
INTERVIEW/STATIC - AGAIN, ACROSS SEVERAL INTERVIEWS THERE MUST BE A CONSISTENCY OF APPROACH.
Extreme close-up
One must come to an understanding of what one considers an extreme close-up to be. It is generally understood to be a shot where the character fills the positive space; their presence itself occupies the negative space. Extreme close-ups must be considered before being used, as they send very strong messages and make certain statements. In extreme close-ups there is an unequal distribution of space between the top and bottom of the subject's head. To get a greater DOF at this framing, get closer with the camera (a short focal length up close) rather than zooming in from a distance.
Medium close-up
More space at the bottom than the top (a bust shot). When using it together with the extreme close-up, it must be consistent in terms of framing.
Composition
In terms of setting up scenes, there are 3 planes in composition: the front, middle & far focal planes.
TIP: One must therefore consider these 3 focal planes when composing a shot.
Movements
Start with small movements on a tripod & concentrate on focus (be conservative). Slow camera movements give you more control.
In moving shots one must concentrate on getting 3 shots: a single still frame; the start-of-movement shot & the end-of-movement shot (hold the end shot for 4 secs).
Low-lighting situations: put the zebra function on the camera (it shows the badly lit areas in the shot). It varies between 75 & 100.
PRODUCING RADIO WITH THE XL1
Use 3 mics, but the Shure mixer only has 2 outputs.
1. Piece to camera in a noisy area
2. Live recording in the bathroom
3. A very quiet area (piece-to-camera; narrative; interview)
4. Bird sounds
Track -> mono (1 channel) or stereo (2 channels)
Channel -> L & R
Shure mixer:
Pan -> sending sound to the L & R channels in order to do a live mix
Bass cut filter: use in an echoey situation; it cuts bass rumble
Attenuator (ATT): use in a noisy environment - it narrows the signal (compresses it).
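As an aside, the bass cut above is just a high-pass filter. A minimal first-order sketch in Python (the 100 Hz cutoff is an assumed value; the actual corner frequency of the Shure mixer's switch may differ):

    import math

    def bass_cut(samples, sample_rate=48000, cutoff_hz=100.0):
        # First-order high-pass filter: attenuates rumble below the
        # cutoff while passing voice frequencies largely untouched.
        rc = 1.0 / (2 * math.pi * cutoff_hz)
        dt = 1.0 / sample_rate
        alpha = rc / (rc + dt)
        out = [samples[0]]
        for i in range(1, len(samples)):
            # Standard recurrence: y[i] = a * (y[i-1] + x[i] - x[i-1])
            out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
        return out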
CODECS
A codec is a device or program capable of encoding and/or decoding a digital data stream or signal. The word codec is a combination of 'coder-decoder' (or 'compressor-decompressor').
COMPRESSION QUALITY
• Lossy codecs: Many of the more popular codecs in the software world are lossy, meaning that they reduce quality by some amount in order to achieve compression. Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc.
• Lossless codecs: There are also many lossless codecs, which are typically used for archiving data in a compressed form while retaining all of the information present in the original stream. If preserving the original quality of the stream is more important than the correspondingly larger data sizes, lossless codecs are preferred, especially if the data is to undergo further processing (for example, editing), in which case the repeated application of lossy encoding and decoding will degrade the quality of the resulting data until the degradation is readily identifiable (visually, audibly or both). Using more than one codec or encoding scheme successively can also degrade quality significantly. The decreasing cost of storage capacity and network bandwidth has a tendency to reduce the need for lossy codecs for some media.
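The defining property of a lossless codec is a bit-exact round trip. A minimal sketch using Python's zlib as a stand-in for a media codec (the file name is hypothetical):

    import zlib

    data = open("capture.wav", "rb").read()   # hypothetical source file
    packed = zlib.compress(data, level=9)     # lossless compression

    # Lossless means every byte survives the round trip exactly.
    assert zlib.decompress(packed) == data
    print(len(data), "bytes ->", len(packed), "bytes")

A lossy codec would fail that assert by design, trading exactness for a much smaller output.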
• Codecs are often designed to emphasise certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event, such as baseball or soccer, needs to encode motion well but not necessarily exact colours, while a video of an art exhibit needs to perform well at encoding colour and surface texture. Similarly, audio codecs for cell phones need very low latency between a word being spoken and that word being heard, while audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit-rate. There are hundreds or even thousands of codecs, ranging from those downloadable for free to ones costing hundreds of dollars or more. This variety of codecs can create compatibility and obsolescence issues. By contrast, raw uncompressed PCM audio (44.1 kHz, 16-bit stereo, as represented on an audio CD or in a .wav or .aiff file) offers more of a persistent standard across multiple platforms and over time. Many multimedia data streams need to contain both audio and video data, and often some form of metadata that permits synchronisation of audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data stream to be useful in stored or transmitted form, they must be encapsulated together in a container format.
The widespread notion of AVI being a codec is incorrect: AVI is a container format, which many codecs might use (although not to ISO standard). There are other well-known alternative containers such as Ogg, ASF, QuickTime, RealMedia, Matroska, DivX, and MP4.

An audio codec is a hardware device or a computer program that compresses/decompresses digital audio data according to a given audio file format or streaming audio format. The term codec is a combination of 'coder-decoder'. The object of a codec algorithm is to represent the high-fidelity audio signal with the minimum number of bits while retaining quality. This can effectively reduce the storage space and the bandwidth required for transmission of the stored audio file. Most codecs are implemented as libraries which interface to one or more multimedia players, such as XMMS, Winamp or Windows Media Player. In some contexts, the term "audio codec" can refer to a hardware implementation or sound card. When used in this manner, the phrase audio codec refers to the device encoding an analog audio signal.

Audio compression is a form of data compression designed to reduce the size of audio files. Audio compression algorithms are implemented in computer software as audio codecs. Generic data compression algorithms perform poorly with audio data, seldom reducing file sizes much below 87% of the original, and are not designed for use in real time. Consequently, specific audio "lossless" and "lossy" algorithms have been created. Lossy algorithms provide far greater compression ratios and are used in mainstream consumer audio devices. As with image compression, both lossy and lossless compression algorithms are used in audio compression, lossy being the most common for everyday use. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, pattern recognition and linear prediction to reduce the amount of information used to describe the data. The trade-off of slightly reduced audio quality is clearly outweighed for most practical audio applications where users cannot perceive any difference and space requirements are substantially reduced. For example, on one CD, one can fit an hour of high-fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in MP3 format.

A video codec is a device or software that enables video compression and/or decompression for digital video. The compression usually employs lossy data compression. Historically, video was stored as an analog signal on magnetic tape. Around the time when the compact disc entered the market as a digital-format replacement for analog audio, it became feasible to also begin storing and using video in digital form, and a variety of such technologies began to emerge. Audio and video call for customized methods of compression. Engineers and mathematicians have tried a number of solutions for tackling this problem.
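The CD figures above follow from simple arithmetic. A quick Python check (the 700 MB capacity, the ~55% lossless ratio and the 192 kbps MP3 rate are assumed round numbers):

    CD_MB = 700                               # assumed data-CD capacity
    cd_bits = CD_MB * 8_000_000

    pcm_kbps = 44_100 * 16 * 2 / 1000         # ~1411 kbps uncompressed stereo
    rates = {
        "uncompressed PCM": pcm_kbps,
        "lossless (~55% of PCM, assumed)": pcm_kbps * 0.55,
        "MP3 at 192 kbps": 192,
    }
    for name, kbps in rates.items():
        hours = cd_bits / (kbps * 1000) / 3600
        print(name, "->", round(hours, 1), "hours")
    # roughly 1.1, 2.0 and 8.1 hours -- in line with the claim above
    # (the MP3 figure depends on the bitrate chosen)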
There is a complex balance between the video quality, the quantity of the data needed to represent it (also known as the bit rate), the complexity of the encoding and decoding algorithms, robustness to data losses and errors, ease of editing, random access, the state of the art of compression algorithm design, end-to-end delay, and a number of other factors.

Audio processing
Audio may only get a fraction of the bits that comprise video, but it is half the experience. While audio processing is generally easier than video processing, it is important to correctly process both.

Resampling audio
For low data rates, especially those targeting dial-up modems, you should reduce the audio sample rate to 22.050 kHz or lower. Modern video editing applications use algorithms for rate resampling which provide much better quality than the algorithms of a few years ago.

Setting volume
The peak volume should be slightly below the maximum volume. Using a compressor/limiter to reduce dynamic range, you can sometimes improve audio quality at very low data rates. In general, television broadcast audio is easy to encode because the audio has already been processed through compressors and limiters.

Adjusting channels and bit rates
Mono content encodes better than stereo at lower data rates. Consequently, it's preferable to convert the audio from stereo to mono when you target lower bit rates (typically anything below 32 Kbps). Given the choice of setting audio to stereo at a lower sample rate or setting it to mono at a higher rate, you should choose mono, because sample rate matters more than stereo for how the content sounds. A number of sound systems also support 5.1 and 7.1 discrete surround-sound audio. The data rates required for surround sound (128 Kbps and above just for audio) generally preclude their use in real-time streaming, but they can be quite usable for disc-based playback. Dolby Pro Logic surround sound (which stores surround sound in a stereo pair) doesn't survive the encoding process with many audio codecs. If Pro Logic encoding is required for a project, test carefully to make sure that the Pro Logic data survives the encoding process. With some codecs, raising the data rate or turning off options like Joint Stereo might help. Some codecs can't encode Pro Logic.

Reducing noise
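The mono-downmix and peak-volume advice above is easy to sketch. A minimal Python/NumPy version (the function name and the 0.95 peak target are assumptions for illustration; it also assumes non-silent input):

    import numpy as np

    def prepare_for_low_bitrate(stereo, peak=0.95):
        # stereo: float array of shape (n_samples, 2), values in [-1, 1].
        mono = stereo.mean(axis=1)            # downmix: average L and R
        mono *= peak / np.max(np.abs(mono))   # peak slightly below maximum
        return mono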
There are also noise reduction filters for audio, although these filters are normally part of professional audio tools like Adobe Audition®, not part of the compression tools. Providing clean, undistorted, hiss- and pop-free audio source material results in better compression.

• Unbalanced audio
This type of audio connection consists of a single wire that carries the signal, surrounded by a grounded shield. It is used commonly in consumer audio products because the connection and circuitry are less complex. The downside is that unbalanced connections are more susceptible to interference, so they are not often used in professional applications. There are three basic kinds of unbalanced audio connectors. The most familiar to consumers is traditional unbalanced audio that uses RCA or Cinch jacks (normally red for right and white or black for left). To connect a stereo signal, you can use a single RCA cable for each channel, or a stereo cable that has two single cables molded into one. There are other single-cable unbalanced connectors as well. The quarter-inch connector is common to high-end headphones and is standard on most home stereo equipment. Similar but smaller is the eighth-inch connector that is standard for portable audio devices and computer sound cards.

• Balanced audio
This type of connection consists of two wires, which serve to balance the signal, and a shield. A balanced connection is far less susceptible to interference, so it is possible to maintain high quality with long cable runs. Balanced audio normally uses the XLR, or three-pin, locking connector, which provides a much tighter and more robust connection than the connectors used for unbalanced audio. Once again, longer cable runs and a locking connector form the basic standard for a professional connection.

Bit depth
Bit depth is the number of bits used for each sample in each channel; the higher the bit depth, the better the audio quality.
• 8-bit sampling: Originally, multimedia audio used 8-bit sampling, which means there was a measurement of volume from 0-255 for each sample. This bit depth, which was responsible for that harsh sound in early CD-ROM titles, produces fairly poor quality audio. There is no reason to distribute 8-bit audio anymore; modern 16-bit codecs provide better quality at a smaller file size than 8-bit codecs were ever able to.
• 16-bit sampling: This bit depth is the current standard for audio distribution. Most modern codecs and all audio CDs use 16-bit sampling.
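Why 8-bit sounds harsh can be quantified: each bit of depth adds about 6 dB of signal-to-noise ratio. A quick Python check using the standard ideal-quantiser formula (my addition, not from these notes):

    def quantisation_snr_db(bits):
        # Theoretical SNR of an ideal quantiser with a full-scale sine.
        return 6.02 * bits + 1.76

    for bits in (8, 16):
        print(bits, "bit ->", round(quantisation_snr_db(bits), 1), "dB")
    # 8-bit: ~49.9 dB (audible hiss); 16-bit: ~98.1 dB (CD quality)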
COMMONLY USED STANDARDS AND CODECS
A variety of codecs can be implemented with relative ease on PCs and in consumer electronics equipment. It is therefore possible for multiple codecs to be available in the same product, avoiding the need to choose a single dominant codec for compatibility reasons. In the end it seems unlikely that one codec will replace them all. Some widely used video codecs are listed below, starting with a chronological list of the ones specified in international standards.

H.261: Used primarily in older videoconferencing and videotelephony products. H.261, developed by the ITU-T, was the first practical digital video compression standard. Essentially all subsequent standard video codec designs are based on it. It included such well-established concepts as YCbCr color representation, the 4:2:0 sampling format, 8-bit sample precision, 16x16 macroblocks, block-wise motion compensation, 8x8 block-wise discrete cosine transformation, zig-zag coefficient scanning, scalar quantization, run+value symbol mapping, and variable-length coding. H.261 supported only progressive scan video.

MPEG-1 Part 2: Used for Video CDs, and also sometimes for online video. If the source video quality is good and the bitrate is high enough, VCD can look slightly better than VHS. To exceed VHS quality, a higher resolution would be necessary. However, to get a fully compliant VCD file, bitrates higher than 1150 kbit/s and resolutions higher than 352 x 288 should not be used. When it comes to compatibility, VCD has the highest compatibility of any digital video/audio system: very few DVD players do not support VCD, and they all inherently support the MPEG-1 codec. Almost every computer in the world can also play videos using this codec. In terms of technical design, the most significant enhancements in MPEG-1 relative to H.261 were half-pel and bi-predictive motion compensation support. MPEG-1 supports only progressive scan video.

MPEG-2 Part 2 (a common-text standard with H.262): Used on DVD, SVCD, and in most digital video broadcasting and cable distribution systems. When used on a standard DVD, it offers good picture quality and supports widescreen. When used on SVCD, it is not as good as DVD but is certainly better than VCD due to the higher resolution and allowed bitrate. Though uncommon, MPEG-1 can also be used on SVCDs, and anywhere else MPEG-2 is allowed, as MPEG-2 decoders are inherently backwards compatible. In terms of technical design, the most significant enhancement in MPEG-2 relative to MPEG-1 was the addition of support for interlaced video. MPEG-2 is now considered an aged codec, but it has tremendous market acceptance and a very large installed base.

H.263: Used primarily for videoconferencing, videotelephony, and internet video. H.263 represented a significant step forward in standardized compression capability for progressive scan video. Especially at low bit rates, it could provide a substantial improvement in the bitrate needed to reach a given level of fidelity.

Sorenson Spark: A codec that was licensed to Macromedia for use in its Flash Player 6. In the same family as H.263.
MPEG-4 Part 2: An MPEG standard that can be used for internet, broadcast, and storage media. It offers improved quality relative to MPEG-2 and the first version of H.263. Its major technical features beyond prior codec standards consisted of object-oriented coding features and a variety of other such features not necessarily intended for improvement of ordinary video coding compression capability. It also included some enhancements of compression capability, both by embracing capabilities developed in H.263 and by adding new ones such as quarter-pel motion compensation. Like MPEG-2, it supports both progressive scan and interlaced video.

DivX, Xvid, FFmpeg MPEG-4 and 3ivx: Different implementations of MPEG-4 Part 2.

MPEG-4 Part 10 (a technically aligned standard with the ITU-T's H.264, often also referred to as AVC): This emerging new standard is the current state of the art of ITU-T and MPEG standardized compression technology, and it is rapidly gaining adoption in a wide variety of applications. It contains a number of significant advances in compression capability, and it has recently been adopted into a number of company products, including for example the XBOX 360, PlayStation Portable, iPod, iPhone, the Nero Digital product suite, Mac OS X v10.4, as well as HD DVD/Blu-ray Disc.

x264: A GPL-licensed implementation of the H.264 encoding standard; x264 is only an encoder.

VP6, VP6-E, VP6-S, VP7: Proprietary high-definition video codecs developed by On2 Technologies, used in platforms such as Adobe Flash Player 8 and above, Adobe Flash Lite, Java FX and other mobile and desktop video platforms. They support resolutions up to 720p and 1080p.

Sorenson 3: A codec that is popularly used by Apple's QuickTime, basically the ancestor of H.264. Many of the QuickTime movie trailers found on the web use this codec.

Theora: Developed by the Xiph.org Foundation as part of their Ogg project. Based upon On2 Technologies' VP3 codec and christened by On2 as the successor in VP3's lineage, Theora is targeted at competing with MPEG-4 video and similar lower-bitrate video compression schemes.

WMV (Windows Media Video): Microsoft's family of video codec designs, including WMV 7, WMV 8, and WMV 9. It can do anything from low-resolution video for dial-up internet users to HDTV. The latest generation of WMV is standardized by SMPTE as the VC-1 standard.

VC-1: An SMPTE-standardized video compression standard (SMPTE 421M), based on Microsoft's WMV9 video codec. One of the 3 mandatory video codecs in both the HD DVD and Blu-ray high-definition optical disc standards. Commonly found in portable devices and on streaming video websites in its Windows Media Video implementation.

RealVideo: Developed by RealNetworks. A popular codec technology a few years ago, now fading in importance for a variety of reasons.
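As a practical illustration of putting an H.264 video stream and an AAC audio stream into an MP4 container, here is a hedged sketch that drives the ffmpeg command-line tool from Python (ffmpeg must be installed separately; the file names and the CRF/bitrate values are assumptions for illustration, not recommendations from these notes):

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "source.mov",   # hypothetical input clip
        "-c:v", "libx264",    # x264 encoder producing H.264/MPEG-4 Part 10
        "-crf", "23",         # quality-based rate control (lower = better)
        "-c:a", "aac",        # AAC audio codec
        "-b:a", "128k",       # audio bitrate
        "output.mp4",         # MP4 container holding both streams
    ], check=True)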
Cinepak: A very early codec used by Apple's QuickTime.

Huffyuv: Huffyuv (or HuffYUV) is a very fast, lossless Win32 video codec written by Ben Rudiak-Gould and published under the terms of the GPL as free software, meant to replace uncompressed YCbCr as a video capture format. See Lagarith for a more up-to-date codec.

Lagarith: A more up-to-date fork of Huffyuv.

SheerVideo: A family of ultrafast lossless QuickTime and AVI codecs, developed by BitJazz Inc., for RGB[A], Y'CbCr[A] 4:4:4[:4] and Y'CbCr[A] 4:2:2[:4] formats; for both 10-bit and 8-bit channels; for both progressive and interlaced data; for both Mac and PC.

Mobiclip: A codec created by Actimagine that maximises mobile phone battery life when playing full-length films on a smartphone handset.

All of the codecs above have their qualities and drawbacks. Comparisons are frequently published. The tradeoff between compression power, speed, and fidelity (including artifacts) is usually considered the most important figure of technical merit.