Video

Video is an electronic medium for the recording, copying and broadcasting of moving visual images.

History

Video technology was first developed for cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders (VTRs). In 1951 the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic video tape. Video recorders were sold for $50,000 in 1956, and videotapes cost $300 per one-hour reel.[1] Prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes to the public.[2] After the invention of the DVD in 1997 and the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted.


Motion compensation

[Figure: visualization of MPEG block motion compensation. Blocks that moved from one frame to the next are shown as white arrows, making the motion of the different platforms and the character clearly visible.]

Motion compensation is an algorithmic technique employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files.
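As an illustration, block-based motion compensation searches a reference frame for the block that best predicts each block of the current frame. A minimal sketch with hypothetical helper names, using an exhaustive sum-of-absolute-differences (SAD) search; real encoders use much faster search strategies:

```python
import numpy as np

def find_motion_vector(ref, cur, by, bx, bsize=8, search=4):
    """Exhaustive block matching: find the offset (dy, dx) into `ref`
    that best predicts the block of `cur` at (by, bx), minimizing SAD."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best, best_sad = (0, 0), None
    h, w = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue  # candidate block falls outside the frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# An 8x8 object that shifted 2 px to the right between frames:
ref = np.zeros((16, 16), dtype=np.uint8)
ref[4:12, 2:10] = 200                 # object in the reference frame
cur = np.zeros((16, 16), dtype=np.uint8)
cur[4:12, 4:12] = 200                 # same object, moved 2 px right
mv, sad = find_motion_vector(ref, cur, by=4, bx=4)
# mv points back into the reference: (0, -2), with SAD 0
```

The encoder then transmits the vector plus the (here zero) prediction residual instead of the raw block.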
Motion interpolation

Motion interpolation or motion-compensated frame interpolation (MCFI) is a form of video processing in which intermediate animation frames are generated between existing ones, in an attempt to make animation more fluid and to compensate for display motion blur.

Applications

Some video software suites and plugins offer motion-interpolation effects to enhance digitally slowed video.
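A toy sketch of the idea, assuming a single global motion vector and ignoring the per-block matching, occlusion handling, and border logic a real interpolator needs:

```python
import numpy as np

def midframe_global(prev, mv):
    """Toy motion-compensated interpolation: given one global motion
    vector (dy, dx) from `prev` to the next frame, the half-way frame
    is `prev` shifted by half the vector. np.roll (wrap-around) keeps
    the sketch short; real MCFI works per block and handles borders."""
    dy, dx = mv
    return np.roll(prev, shift=(dy // 2, dx // 2), axis=(0, 1))

prev = np.zeros((8, 8), dtype=np.uint8)
prev[2, 2] = 255                      # a bright dot
mv_to_next = (0, 4)                   # the dot moves 4 px right by the next frame
mid = midframe_global(prev, mv_to_next)
# In the interpolated frame the dot sits 2 px right of its start.
```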
Raster graphics

Suppose there is a smiley face in the top-left corner of an RGB bitmap image. When magnified, the large smiley face appears as shown on the right, with the individual pixels visible as blocks.
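The blockiness seen when magnifying a raster image is easy to reproduce: nearest-neighbour magnification simply repeats each pixel. A minimal sketch (hypothetical helper name):

```python
def magnify(bitmap, factor):
    """Nearest-neighbour magnification of a raster image: each pixel
    becomes a factor-by-factor block, so edges turn visibly blocky."""
    out = []
    for row in bitmap:
        big_row = [px for px in row for _ in range(factor)]  # widen
        out.extend([big_row[:] for _ in range(factor)])       # and heighten
    return out

tiny = [[0, 1],
        [1, 0]]          # a 2x2 two-colour bitmap
big = magnify(tiny, 3)   # becomes 6x6; each pixel is now a 3x3 block
```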
CRT

[Figure: cutaway rendering of a color CRT: 1. three electron guns (for red, green, and blue phosphor dots); 2. electron beams; 3. focusing coils; 4. deflection coils; 5. anode connection; 6. …]
Overscan

Overscan is the extra image area around the four edges of a video image that may not be seen reliably by the viewer. It exists because television sets from the 1930s through the 1970s were highly variable in how the video image was framed within the cathode ray tube (CRT).

Origins of overscan

Early televisions varied in their displayable area because of manufacturing tolerance problems. There were also effects from the early design limitations of linear power supplies, whose DC voltage was not regulated as well as in later switching-type power supplies.
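Because of overscan, graphics intended for broadcast are usually kept inside a centered "safe area". The sketch below computes such a rectangle, using the commonly cited ~90% title-safe fraction; the exact figures vary by standard and era, so treat the default as an assumption:

```python
def safe_area(width, height, fraction=0.90):
    """Centered safe-area rectangle covering `fraction` of each
    dimension, e.g. the classic ~90% title-safe region used to keep
    text clear of overscan. Returns (x, y, w, h) in pixels."""
    w = round(width * fraction)
    h = round(height * fraction)
    return ((width - w) // 2, (height - h) // 2, w, h)

# Title-safe region for a 720x576 PAL frame:
x, y, w, h = safe_area(720, 576, 0.90)
# → x=36, y=29, w=648, h=518
```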
YUV pixel formats

YUV formats fall into two distinct groups: the packed formats, where Y, U (Cb) and V (Cr) samples are packed together into macropixels stored in a single array, and the planar formats, where each component is stored as a separate array and the final image is a fusing of the three separate planes. The numerical suffix attached to each Y, U or V sample indicates its sampling position across the image line; for example, V0 indicates the leftmost V sample and Yn indicates the Y sample at the (n+1)th pixel from the left. The subsampling intervals in the horizontal and vertical directions may differ from one format to another.
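The packed/planar distinction can be made concrete with YUYV (a common packed 4:2:2 layout): each 4-byte macropixel Y0 U0 Y1 V0 carries two luma samples and one shared chroma pair. A sketch unpacking it into three planes (hypothetical helper name):

```python
def yuyv_to_planar(data, width, height):
    """Unpack a YUYV (packed 4:2:2) byte sequence into separate
    Y, U and V planes. Two pixels share each U/V pair, so the chroma
    planes are half the width of the luma plane."""
    y, u, v = [], [], []
    for i in range(0, width * height * 2, 4):   # 2 bytes/pixel packed
        y0, u0, y1, v0 = data[i], data[i + 1], data[i + 2], data[i + 3]
        y += [y0, y1]
        u.append(u0)
        v.append(v0)
    return bytes(y), bytes(u), bytes(v)

# One scan line of two macropixels (four pixels):
line = bytes([16, 128, 17, 128, 18, 129, 19, 129])
Y, U, V = yuyv_to_planar(line, width=4, height=1)
# Y has 4 samples; U and V have 2 each.
```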
Chrominance

[Figure: luminance only, chrominance only, and full-color image.]

History

Early compatible color systems proposed the use of two channels, one transmitting the predominating color (signal T) and the other the mean brilliance (signal t), output from a single television transmitter, to be received not only by color television receivers provided with the necessary, more expensive equipment, but also by the ordinary type of television receiver, which was more numerous and less expensive and which reproduced the picture in black and white only. Previous schemes for color television, which were incompatible with existing monochrome receivers, transmitted RGB signals in various ways.

Television standards

In analog television, chrominance is encoded into a video signal using a subcarrier frequency.
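As an illustration of splitting brightness from color, the full-range BT.601 matrix (the variant used by JPEG) derives luma and the two chrominance (color-difference) components from RGB:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion: luma Y plus the chrominance
    components Cb and Cr (offset by 128 so neutral gray is mid-scale)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

# Pure white carries no chrominance: both Cb and Cr sit at the midpoint.
white = rgb_to_ycbcr(255, 255, 255)   # → (255, 128, 128)
```

A monochrome receiver can use Y alone, which is exactly the compatibility property the two-channel schemes above were after.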
MPEG

Subgroups

ISO/IEC JTC1/SC29/WG11 – Coding of moving pictures and audio has the following subgroups (SG):[6] Requirements, Systems, Video, Audio, 3D Graphics Compression, Test, and Communication.
MPEG-2

MPEG-2 (also known as H.222/H.262 as defined by the ITU) is a standard for "the generic coding of moving pictures and associated audio information".[1] It describes a combination of lossy video compression and lossy audio data compression methods which permit the storage and transmission of movies using currently available storage media and transmission bandwidth.

Main characteristics

MPEG-2 is widely used as the format of digital television signals that are broadcast by terrestrial (over-the-air), cable, and direct-broadcast satellite TV systems. It also specifies the format of movies and other programs that are distributed on DVD and similar discs. TV stations, TV receivers, DVD players, and other equipment are often designed to this standard. MPEG-2 was the second of several standards developed by the Moving Picture Experts Group (MPEG) and is an international standard (ISO/IEC 13818).
PSI

Program-specific information (PSI) is metadata about a program (channel) and part of an MPEG transport stream. The PSI data as defined by ISO/IEC 13818-1 (MPEG-2 Part 1: Systems) includes four tables: the PAT (program association table), CAT (conditional access table), PMT (program map table), and NIT (network information table). PSI is carried in the form of a table structure. Each table structure is broken into sections and can span multiple transport stream packets.
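Every PSI section begins with a common 3-byte header defined in ISO/IEC 13818-1: an 8-bit table_id, a section_syntax_indicator bit, and a 12-bit section_length counting the bytes that follow. A sketch parsing it (hypothetical helper name):

```python
def parse_psi_header(section):
    """Parse the 3-byte header that starts every PSI section:
    table_id (8 bits), section_syntax_indicator (1 bit), and
    section_length (the low 12 bits of bytes 1-2)."""
    table_id = section[0]
    section_syntax = (section[1] >> 7) & 0x1
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    return table_id, section_syntax, section_length

# A PAT always has table_id 0x00; bytes 0xB0 0x0D give
# section_syntax_indicator = 1 and section_length = 13.
hdr = parse_psi_header(bytes([0x00, 0xB0, 0x0D]))
# → (0, 1, 13)
```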
DSM-CC

Digital storage media command and control (DSM-CC) is a toolkit for developing control channels associated with MPEG-1 and MPEG-2 streams. It is defined in Part 6 of the MPEG-2 standard (Extensions for DSM-CC) and uses a client/server model connected via an underlying network (carried via the MPEG-2 multiplex or independently if needed). DSM-CC may be used for controlling video reception, providing features normally found on video cassette recorders (VCRs), such as fast-forward, rewind and pause. It may also be used for a wide variety of other purposes, including packet data transport. It is defined by a series of weighty standards, principally ISO/IEC 13818-6 (Part 6 of the MPEG-2 standard).
PES

A typical method of transmitting elementary stream data from a video or audio encoder is to first create PES packets from the elementary stream data and then to encapsulate these PES packets inside Transport Stream (TS) packets or Program Stream (PS) packets. The TS packets can then be multiplexed and transmitted using broadcasting techniques, such as those used in ATSC and DVB. Transport Streams and Program Streams are each logically constructed from PES packets.
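A sketch of the fixed 6-byte PES packet start: the 0x000001 start-code prefix, the stream_id, and a 16-bit PES_packet_length. Real video/audio PES packets usually also carry an optional header with flags and PTS/DTS timestamps; this minimal sketch omits it, so the length field here covers only the payload:

```python
def make_pes_packet(stream_id, payload):
    """Build a minimal PES packet: 3-byte start-code prefix 0x000001,
    stream_id, 16-bit PES_packet_length, then the payload. The optional
    PES header (PTS/DTS etc.) is deliberately omitted in this sketch."""
    length = len(payload)             # bytes following the length field
    header = bytes([0x00, 0x00, 0x01, stream_id,
                    (length >> 8) & 0xFF, length & 0xFF])
    return header + payload

# 0xE0 is the stream_id of the first MPEG video stream.
pkt = make_pes_packet(0xE0, b"\xAA" * 10)
# pkt is 6 header bytes + 10 payload bytes = 16 bytes.
```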
MPEG-2 video compression © BBC. All rights reserved.
Video compression picture types
Entropy coding
Huffman coding
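Huffman coding assigns shorter bit strings to more frequent symbols by repeatedly merging the two least frequent subtrees; MPEG-2's variable-length codes are Huffman-style tables fixed by the standard. A minimal sketch of constructing such a code:

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code for a symbol->frequency map: repeatedly
    merge the two least frequent subtrees, prefixing '0' to codes in
    the lighter subtree and '1' to codes in the heavier one. The
    counter breaks ties so heap entries never compare the dicts."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16})
# The most frequent symbol "a" gets the shortest code (1 bit).
```

The resulting code is prefix-free by construction, so a decoder can read the bitstream one bit at a time without delimiters.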
Differences between Transport Stream (TS) and Program Stream (PS)