This document discusses fundamental concepts in digital video. It begins by explaining the differences between analog and digital video, and how digital video allows direct access and repeated recording without quality degradation. It then examines digital video standards including CCIR 601, CIF, and QCIF, and details chroma subsampling ratios and how they reduce data requirements. It also covers high-definition television, which aims to increase the visual field rather than the definition per unit area.
2. This chapter explores the principal notions needed to understand video. We shall consider the following aspects of video and how they impact multimedia applications:
Analog video
Digital video
Video display interfaces
3D video
3. Types of Video Signals
Component video -- each primary is sent as a separate video signal. The primaries can be either RGB or a luminance-chrominance transformation of them (e.g., YIQ, YUV). Component video gives the best color reproduction, but it requires more bandwidth and good synchronization of the three components.
Composite video -- the color (chrominance) and luminance signals are mixed into a single carrier wave. Some interference between the two signals is inevitable.
S-Video (separate video, e.g., in S-VHS) -- a compromise between component video and composite video. It uses two lines, one for luminance and another for a composite chrominance signal.
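The luminance-chrominance transformation mentioned above can be sketched in a few lines. The slide names YUV but gives no numbers, so the BT.601 coefficients used by PAL-family systems are assumed here:

```python
# Convert a gamma-corrected RGB triple (components in 0..1) to Y'UV,
# assuming the BT.601 luma weights (0.299, 0.587, 0.114).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (luma)
    u = 0.492 * (b - y)                    # blue-difference chrominance
    v = 0.877 * (r - y)                    # red-difference chrominance
    return y, u, v

# White carries full luma and (approximately) zero chrominance,
# which is why a monochrome set can display the Y signal alone.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
```

The scale factors 0.492 and 0.877 keep U and V within the range the composite encoder expects; the exact values vary slightly between references.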
4. 5.1 Analog Video
An analog signal f(t) samples a time-varying image. So-called progressive scanning traces through a complete picture (a frame) row-wise for each time interval. A high-resolution computer monitor typically uses a time interval of 1/72 s.
In TV, and in some monitors and multimedia standards, another system, interlaced scanning, is used. Here, the odd-numbered lines are traced first, then the even-numbered lines. This results in odd and even fields; two fields make up one frame.
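The odd/even field decomposition described above can be illustrated with a short sketch, treating a frame simply as a list of scan lines (a simplification for illustration):

```python
# Split a frame (a list of scan lines, counted from 1 as in the slide)
# into its two interlaced fields.
def split_into_fields(frame):
    odd_field = frame[0::2]    # lines 1, 3, 5, ... (traced first)
    even_field = frame[1::2]   # lines 2, 4, 6, ... (traced second)
    return odd_field, even_field

frame = ["line%d" % n for n in range(1, 6)]
odd, even = split_into_fields(frame)
# odd  -> ['line1', 'line3', 'line5']
# even -> ['line2', 'line4']
```

Displaying the two fields in succession is what doubles the presentation rate without doubling the transmitted data.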
5. Analog Video
Analog video is represented as a continuous (time-varying) signal; digital video is represented as a sequence of digital
images
NTSC Video
525 scan lines per frame, 29.97 fps
(33.37 msec/frame).
Interlaced, each frame is divided
into 2 fields, 262.5 lines/field
20 lines reserved for control
information at the beginning of
each field
So a maximum of 485 lines of
visible data
Laserdisc and S-VHS have actual
resolution of ~420 lines
Ordinary TV -- ~320 lines
Each line takes 63.5 microseconds
to scan.
Color representation:
Uses YIQ color model.
PAL (SECAM) Video
625 scan lines per frame, 25
frames per second (40
msec/frame)
Interlaced, each frame is divided
into 2 fields, 312.5 lines/field
Color representation:
Uses YUV color model
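The frame and line timings quoted above follow directly from the line counts and frame rates. A minimal sketch (the helper name is illustrative):

```python
# Derive per-frame and per-line timings from scan parameters.
def timings(lines_per_frame, fps):
    frame_ms = 1000.0 / fps                           # ms per frame
    line_us = 1_000_000.0 / (fps * lines_per_frame)   # microseconds per line
    return frame_ms, line_us

print("NTSC: %.2f ms/frame, %.2f us/line" % timings(525, 29.97))
print("PAL:  %.2f ms/frame, %.2f us/line" % timings(625, 25))
```

With 525 lines at 29.97 fps this gives 33.37 ms per frame and about 63.5 µs per line; PAL gives 40 ms per frame and 64 µs per line, matching the figures above.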
6. Interlacing
6. Interlacing
Interlacing was invented because,
when standards were being defined,
it was difficult to transmit the amount of information in a full
frame quickly enough to avoid flicker;
doubling the number of fields presented to the eye reduces
perceived flicker.
The jump from Q to R and so on in Fig. 5.1 is called the
horizontal retrace, during which the electronic beam in the
CRT is blanked.
The jump from T to U or V to P is called the vertical
retrace.
7. 5.1 Interlacing
In fact, the odd lines (starting from 1) end up at the
middle of a line at the end of the odd field, and the
even scan starts at a half-way point.
Figure 5.1 shows the scheme used.
First the solid (odd) lines are traced: P to Q, then
R to S, and so on, ending at T.
Then the even field starts at U and ends at V.
The scan lines are not horizontal because a small
voltage is applied, moving the electron beam down
over time.
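The odd/even field split described above can be sketched as follows (`split_fields` is an illustrative helper, not a standard function):

```python
# Split a frame's scan lines into the two interlaced fields:
# odd-numbered lines are traced first, then the even-numbered lines.
def split_fields(num_lines):
    lines = list(range(1, num_lines + 1))
    return lines[0::2], lines[1::2]   # (odd field, even field)

odd, even = split_fields(10)          # a toy 10-line frame
print(odd)    # [1, 3, 5, 7, 9]
print(even)   # [2, 4, 6, 8, 10]
```

For NTSC's 525 lines the split is uneven (263 and 262 lines), which is why each field is described as 262.5 lines.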
8. Frame Rate and Interlacing
Persistence of vision: The human eye retains an image for a
fraction of a second after it views the image. This property is
essential to all visual display technologies.
The basic idea is quite simple: single still frames are presented at a
high enough rate so that persistence of vision integrates these still
frames into motion.
Motion pictures originally set the frame rate at 16 frames per
second. This was rapidly found to be unacceptable and the frame
rate was increased to 24 frames per second. In Europe, this was
changed to 25 frames per second, as the European power line
frequency is 50 Hz.
When NTSC television standards were introduced, the frame rate
was set at 30 Hz (1/2 the 60 Hz line frequency). Movies filmed at
24 frames per second are converted to 30 frames per
second for television broadcast (via 3:2 pulldown).
9. Frame Rate and Interlacing
The brighter the still image presented to the
viewer, the shorter the persistence of vision. So, bright pictures
require more frequent repetition.
If the gap between pictures is longer than the period of
persistence of vision, then the image flickers. Large bright
theater projectors avoid this problem by placing rotating
shutters in front of the image in order to increase the repetition
rate by a factor of two (to 48 Hz) or three (to 72 Hz) without
changing the actual images.
Unfortunately, there is no easy way to "put a shutter" in front of a
television broadcast! Therefore, to arrange for two "flashes" per
frame, the flashes are created by interlacing.
With interlacing, the number of "flashes" per frame is two, and
the field rate is double the frame rate. Thus, NTSC systems
have a field rate of 59.94 Hz and PAL/SECAM systems a field
rate of 50 Hz.
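The repetition-rate arithmetic above can be checked directly (a trivial sketch):

```python
# Theater projectors: rotating shutters multiply the flash rate.
film_fps = 24
print(film_fps * 2, film_fps * 3)   # 48 72  (flashes per second)

# TV: interlacing doubles the flash rate, so field rate = 2 x frame rate.
print(29.97 * 2)                    # 59.94  (NTSC field rate, Hz)
print(25 * 2)                       # 50     (PAL/SECAM field rate, Hz)
```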
10. Fundamentals of Multimedia, Chapter 5
Fig. 5.2: Interlaced scan produces two fields for each frame. (a) The
video frame, (b) Field 1, (c) Field 2, (d) Difference of Fields
10 Li & Drew
11. 5.1.1 NTSC Video
NTSC stands for the National Television System Committee
of the U.S.A.
The NTSC TV standard is mostly used in North America
and Japan.
It uses a familiar 4:3 aspect ratio (i.e., the ratio of picture
width to height) and 525 (interlaced) scan lines per frame
at 30 fps.
Figure 5.4 shows the effect of vertical retrace and sync
and horizontal retrace and sync on the NTSC video
raster.
12. 5.1.1 NTSC Video
Figure 5.4 shows the effect of vertical retrace and sync
and horizontal retrace and sync on the NTSC video
raster.
Blanking information is placed into 20 lines reserved for
control information at the beginning of each field.
Hence, the number of active video lines per frame is
only 485.
Similarly, almost 1/6 of the raster at the left side is
blanked for horizontal retrace and sync.
The nonblanking pixels are called active pixels.
Image data is not encoded in the blanking regions, but
other information can be placed there, such as V-chip
information, stereo audio channel data, and subtitles in
many languages.
13. 5.1.1 NTSC Video
NTSC video is an analog signal with no fixed horizontal
resolution.
Therefore, we must decide how many times to sample the
signal for display.
Each sample corresponds to one pixel output.
A pixel clock divides each horizontal line of video into
samples.
The higher the frequency of the pixel clock, the more
samples per line.
Different video formats provide different numbers of
samples per line, as listed in Table 5.1.
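For example, with the 13.5 MHz luma sampling clock later standardized in ITU-R Rec. 601, the samples-per-line count follows from the pixel-clock frequency and the line rate (a sketch):

```python
# samples per line = pixel-clock frequency / line rate
line_rate_hz = 29.97 * 525        # NTSC lines per second (~15734)
pixel_clock_hz = 13.5e6           # Rec. 601 luma sampling clock
samples_per_line = pixel_clock_hz / line_rate_hz
print(round(samples_per_line))    # 858 total samples (active + blanking)
```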
14. 5.1.1 NTSC Video
Table 5.1: Samples per line for various analog video formats

Format          Samples per line
VHS             240
S-VHS           400-425
Betamax         500
Standard 8 mm   300
Hi-8 mm         425
15. Sampling
A sample is the intersection of a channel and a pixel.
The diagram below depicts a 24-bit pixel, consisting of three
samples: one each for the Red, Green, and Blue channels.
In this particular diagram, the Red sample occupies 9
bits, the Green sample occupies 7 bits and the Blue
sample occupies 8 bits, totaling 24 bits per pixel
A sample is related to a subpixel on a physical display.
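A sketch of how such a pixel could be packed into 24 bits with shifts and masks; the 9/7/8 split follows the diagram's (unusual) layout, and the helper names are hypothetical:

```python
# Pack R (9 bits), G (7 bits), B (8 bits) into one 24-bit pixel value.
def pack_pixel(r, g, b):
    assert 0 <= r < 512 and 0 <= g < 128 and 0 <= b < 256
    return (r << 15) | (g << 8) | b   # bits 23-15 | 14-8 | 7-0

def unpack_pixel(p):
    return (p >> 15) & 0x1FF, (p >> 8) & 0x7F, p & 0xFF

p = pack_pixel(300, 100, 200)
print(unpack_pixel(p))   # (300, 100, 200)
```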
16. Vertical Retrace
Also referred to as the vertical blanking interval or the
vertical sync signal, vertical retrace is the action performed
within the monitor that turns the beam off while moving it
from the lower-right corner of the screen back to the
upper-left corner.
This action takes place each time the beam has
completed tracing the entire screen to create an image.
17. 5.1.2 PAL Video
PAL (Phase Alternating Line) is a TV standard originally
invented by German scientists.
This important standard is widely used in Western
Europe, China, India, and many other parts of the world.
Because it has higher resolution than NTSC, the visual
quality of its pictures is generally better.
18. Table 5.2: Comparison of Analog Broadcast TV Systems

TV       Frame rate   # of scan   Total channel   Bandwidth allocation (MHz)
System   (fps)        lines       width (MHz)     Y     I or U   Q or V
NTSC     29.97        525         6.0             4.2   1.6      0.6
PAL      25           625         8.0             5.5   1.8      1.8
SECAM    25           625         8.0             6.0   2.0      2.0
19. 5.1.3 SECAM Video
SECAM, which was invented by the French, is the third
major broadcast TV standard.
SECAM stands for Système Électronique Couleur Avec
Mémoire.
SECAM and PAL are similar, differing slightly in their
color coding scheme.
20. What is Raster Graphics?
A raster graphics image is a dot-matrix data structure representing a generally
rectangular grid of pixels, or points of color, viewable via a monitor, paper, or another
display medium (= bitmap).
A raster is technically characterized by the width and height of the image in pixels
and by the number of bits per pixel (a color depth, which determines the number of
colors it can represent)
Most computer images are stored in raster graphics formats.
Raster graphics are resolution dependent, meaning they cannot scale up to an
arbitrary resolution without loss of apparent quality. This property contrasts with the
capabilities of vector graphics , which easily scale up to the quality of the device
rendering them.
http://vector-conversions.com/vectorizing/raster_vs_vector.html
https://99designs.com/designer-blog/2011/05/02/vector-vs-raster-images/
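Resolution dependence is easy to demonstrate with a minimal nearest-neighbor upscaler (an illustrative sketch): every source pixel simply becomes an s x s block, which is why enlarged raster images look blocky.

```python
# Nearest-neighbor upscaling: repeat each pixel s times in both directions.
def upscale(img, s):
    return [
        [px for px in row for _ in range(s)]   # repeat horizontally
        for row in img
        for _ in range(s)                      # repeat vertically
    ]

tiny = [[0, 1],
        [1, 0]]
for row in upscale(tiny, 2):
    print(row)
```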
21. What is Raster Graphics?
The smiley face in the top left corner is a raster image. When enlarged,
individual pixels appear as squares. Zooming in further, they can be
analyzed, with their colors constructed by adding the values for red,
green and blue.
22. 5.2 Digital Video
The advantages of digital representation for video:
Storing video on digital devices or in memory, ready to be processed (noise
removal, cut and paste, and so on) and integrated into various multimedia
applications.
Direct access, which makes nonlinear video editing simple.
Repeated recording without degradation of image quality.
Ease of encryption and better tolerance to channel noise.
23. 5.2.2 CCIR and ITU-R Standards for Digital Video
The CCIR is the Consultative Committee for International
Radio.
One of the most important standards it has produced is
CCIR-601 for component digital video.
This standard has since become standard ITU-R Rec. 601,
an international standard for professional video applications.
It is adopted by several digital video formats, including the
popular DV video.
24. 5.2.2 CCIR and ITU-R Standards for Digital Video
CIF stands for Common Intermediate Format, specified by
the International Telegraph and Telephone Consultative
Committee (CCITT)
now superseded by the International Telecommunication
Union, which oversees both telecommunications (ITU-T)
and radio frequency matters (ITU-R) under one United
Nations body
The idea of CIF, whose quality is about the same as
VHS, is to specify a format with a lower bitrate.
CIF uses a progressive (noninterlaced) scan.
QCIF stands for Quarter-CIF, and is for even lower
bitrate.
25. 5.2.2 CCIR and ITU-R Standards for Digital Video
CIF is a compromise between NTSC and PAL, in that it
adopts the NTSC frame rate and half the number of
active lines in PAL.
When played on existing TV sets, NTSC TV will first
need to convert the number of lines, whereas PAL TV
will require frame rate conversion.
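CIF specifies a luminance resolution of 352x288 pixels (QCIF: 176x144). Assuming 4:2:0 chroma (1.5 bytes per pixel at 8 bits per sample) and the NTSC-derived ~30 fps rate, the raw bitrates come out as follows (a sketch):

```python
# Uncompressed data rate in Mbit/s for a given format.
def raw_mbps(width, height, fps, bytes_per_pixel=1.5):   # 1.5 assumes 4:2:0
    return width * height * bytes_per_pixel * fps * 8 / 1e6

print("CIF : %.1f Mbit/s" % raw_mbps(352, 288, 30))   # ~36.5
print("QCIF: %.1f Mbit/s" % raw_mbps(176, 144, 30))   # ~9.1
```

The factor-of-four gap between CIF and QCIF is why QCIF suits even lower-bitrate applications.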
26. Digital Video
Advantages over analog:
Direct random access --> good for nonlinear video editing
No problem for repeated recording
No need for blanking and sync pulse
Almost all digital video uses component video
The human eye responds more precisely to brightness information
than it does to color; chroma subsampling (decimation) takes
advantage of this.
In a 4:4:4 scheme, each 8×8 matrix of RGB pixels converts to three
YCrCb 8×8 matrices: one for luminance (Y) and one for each of the two
chrominance bands (Cr and Cb).
A 4:2:2 scheme also creates one 8×8 luminance matrix but decimates
every two horizontal pixels to create each chrominance-matrix entry,
reducing the amount of data to 2/3 of a 4:4:4 scheme.
Ratios of 4:2:0 decimate chrominance both horizontally and vertically,
resulting in four Y matrices plus one Cr and one Cb 8×8 matrix for every
four 8×8 pixel-matrix sources. This conversion creates half the data
required in a 4:4:4 chroma ratio.
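The data-reduction figures above (2/3 and 1/2 of 4:4:4) can be verified by counting samples per 8x8 pixel block (a sketch):

```python
# Y, Cb, Cr sample counts per 8x8 pixel block under each scheme.
schemes = {
    "4:4:4": (8 * 8, 8 * 8, 8 * 8),   # full chroma resolution
    "4:2:2": (8 * 8, 4 * 8, 4 * 8),   # chroma halved horizontally
    "4:2:0": (8 * 8, 4 * 4, 4 * 4),   # chroma halved in both directions
}
full = sum(schemes["4:4:4"])
for name, counts in schemes.items():
    total = sum(counts)
    print("%s: %d samples/block (%.0f%% of 4:4:4)" % (name, total, 100 * total / full))
```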
27. Chroma Subsampling (contd.)
4:1:1 and 4:2:0 are used in JPEG and
MPEG
256-level gray-scale JPEG images
aren't usually much smaller than their
24-bit color counterparts, because
most JPEG implementations
aggressively subsample the color
information. Color data therefore
represents a small percentage of the
total file size.
Scheme   Y : Cb : Cr (per 8×8 pixel block)
4:4:4    8×8 : 8×8 : 8×8
4:2:2    8×8 : 8×4 : 8×4
4:1:1    8×8 : 8×2 : 8×2
4:2:0    8×8 : 4×4 : 4×4
28. HDTV

Name               Lines   Aspect   Opt. view   P/I   Freq.
                           ratio    dist.             (MHz)
HDTV USA, analog   1050    16:9     2.5H        P     8
HDTV Eur, analog   1250    16:9     2.4         P     9
HDTV NHK           1125    16:9     3.3         I     20
NTSC ©             525     4:3      7           I     4.2
NTSC               525     4:3      5           P     4.2
PAL ©              625     4:3      6           I     5.5
PAL                625     4:3      4.3         P     5.5
SECAM ©            625     4:3      6           I     6
SECAM              625     4:3      4.3         P     6

©: Conventional
29. 5.2.3 High-Definition TV
The introduction of wide-screen movies brought the
discovery that viewers seated near the screen enjoyed a
level of participation (sensation of immersion) not
experienced with conventional movies.
Apparently the exposure to a greater field of view, especially
the involvement of peripheral vision, contributes to the sense
of being there.
The main thrust of High-Definition TV (HDTV) is not to
increase the definition in each unit area, but rather to
increase the visual field, especially its width.
First-generation HDTV was based on an analog technology
developed by Sony and NHK in Japan in the late 1970s.
30. 5.2.3 High-Definition TV
MUltiple sub-Nyquist Sampling Encoding (MUSE) was an
improved NHK HDTV with hybrid analog/digital technologies
that was put in use in the 1990s.
It has 1,125 scan lines, interlaced (60 fields per second), and
a 16:9 aspect ratio. (compare with NTSC 4:3 aspect ratio,
see slide 8)
In 1987, the FCC decided that HDTV standards must be
compatible with the existing NTSC standard and must be
confined to the existing Very High Frequency (VHF) and Ultra
High Frequency (UHF) bands.
31. 5.2.4 Ultra High Definition TV (UHDTV)
UHDTV is a new development: a new generation of HDTV!
Its standards were announced in 2012.
The aspect ratio is 16:9.
The supported frame rate has been gradually increased to
120 fps.
32. 5.3 Video Display Interfaces
We now discuss the interfaces for video signal transmission from
output devices (e.g., set-top box, video player, video card, etc.) to
a video display (e.g., TV, monitor, projector, etc.).
A wide range of video display interfaces has been developed,
supporting video signals of different formats (analog or digital,
interlaced or progressive), different frame rates, and different
resolutions.
We start our discussion with
analog interfaces, including Component Video, Composite Video, and S-
Video,
and then digital interfaces, including DVI, HDMI, and DisplayPort.
33. 5.3.1 Analog Display Interfaces
Analog video signals are often transmitted in one of three
different interfaces:
Component video,
Composite video, and
S-video.
Figure 5.7 shows the typical connectors for them.
Fig. 5.7 Connectors for typical analog display interfaces. From left to right:
Component video, Composite video, S-video, and VGA
34. 5.3.1 Analog Display Interfaces
Component Video
Higher end video systems, such as for studios, make use of
three separate video signals for the red, green, and blue
image planes.
This is referred to as component video.
This kind of system has three wires (and connectors)
connecting the camera or other devices to a TV or monitor.
35. 5.3.1 Analog Display Interfaces
S-Video
As a compromise, S-video (separated video, or super-video, e.g., in S-
VHS) uses two wires: one for luminance and another for a composite
chrominance signal.
The reason for placing luminance into its own part of the signal is that
black-and-white information is most important for visual perception.
As noted in the previous chapter, humans are able to differentiate
spatial resolution in the grayscale (black-and-white) part much better
than in the color part of RGB images.
Therefore, the color information transmitted can be much less accurate
than the intensity information.
We can see only fairly large blobs of color, so it makes sense to
send less color detail.
36. 5.3.1 Analog Display Interfaces
Video Graphics Array (VGA)
The Video Graphics Array (VGA) is a video display interface
that was first introduced by IBM in 1987, along with its PS/2
personal computers. It has since been widely used in the
computer industry with many variations, which are
collectively referred to as VGA.
The initial VGA resolution was 640×480 pixels.
The VGA video signals are based on analog component
RGBHV (red, green, blue, horizontal sync, vertical sync).
37. 5.3.2 Digital Display Interfaces
Given the rise of digital video processing and of monitors that directly
accept digital video signals, there is great demand for video
display interfaces that transmit digital video signals.
Such interfaces emerged in the 1980s (e.g., the Color Graphics
Adapter (CGA)).
Today, the most widely used digital video interfaces include Digital
Visual Interface (DVI), High-Definition Multimedia Interface (HDMI),
and Display Port, as shown in Fig. 5.8.
Fig. 5.8 Connectors of different digital display interfaces. From left to right:
DVI, HDMI, DisplayPort
38. 5.3.1 Analog Display Interfaces
Composite Video
When connecting to TVs or VCRs, composite video uses only
one wire (and hence one connector, such as a BNC
connector at each end of a coaxial cable or an RCA plug at
each end of an ordinary wire), and video color signals are
mixed, not sent separately.
The audio signal is another addition to this one signal.
39. 5.3.2 Digital Display Interfaces
Digital Visual Interface (DVI)
Digital Visual Interface (DVI) was developed by the Digital
Display Working Group (DDWG) for transferring digital video
signals, particularly from a computer's video card to a
monitor.
It carries uncompressed digital video and can be configured
to support multiple modes, including DVI-D (digital only), DVI-
A (analog only), or DVI-I (digital and analog).
The support for analog connections makes DVI backward
compatible with VGA (though an adapter is needed between
the two interfaces).
DVI allows a maximum 16:9 screen resolution of
1920×1080 pixels.
40. 5.3.2 Digital Display Interfaces
High-Definition Multimedia Interface (HDMI)
HDMI is a newer digital audio/video interface developed to be
backward-compatible with DVI.
HDMI, however, differs from DVI in the following aspects:
1. HDMI does not carry analog signals and hence is not compatible with
VGA.
2. DVI is limited to the RGB color range (0-255).
3. HDMI supports digital audio, in addition to digital video.
HDMI allows a maximum screen resolution of 2560×1600 pixels.
41. 5.3.2 Digital Display Interfaces
DisplayPort
DisplayPort is a digital display interface. It is the first display
interface that uses packetized data transmission, like the Internet or
Ethernet.
DisplayPort can achieve a higher resolution with fewer pins than
the previous technologies.
The use of data packets also allows DisplayPort to be extensible,
i.e., new features can be added over time without significant
changes to the physical interface itself.
DisplayPort can transmit audio and video
simultaneously, or either of them.
Compared with HDMI, DisplayPort has slightly more bandwidth,
which also accommodates multiple streams of audio and video to
separate devices.
42. Computer Video Format
Depends on the input and output devices (digitizers) for the motion video medium.
Digitizers differ in frame resolution, quantization, and frame rate.
The IRIS video board VINO takes an NTSC video signal and after digitization can achieve a
frame resolution of 640x480 pixels, 8 bits/pixel, and 4 fps.
The SunVideo digitizer captures the NTSC video signal in the form of an RGB signal with a
frame resolution of 320x240 pixels, 8 bits/pixel, and 30 fps.
Computer video controller standards
The Color Graphics Adapter (CGA):
320 x 200 pixels x 2 bits/pixel = 16,000 bytes (storage capacity per image)
The Enhanced Graphics Adapter (EGA):
640 x 350 pixels x 4 bits/pixel = 112,000 bytes
The Video Graphics Array (VGA):
640 x 480 pixels x 8 bits/pixel = 307,200 bytes
The 8514/A Display Adapter Mode:
1024 x 768 pixels x 8 bits/pixel = 786,432 bytes
The Extended Graphics Array (XGA):
1024 x 768 pixels at 256 colors, or 640 x 480 pixels at 65,536 colors
The Super VGA (SVGA):
Up to 1024 x 768 pixels x 24 bits/pixel = 2,359,296 bytes
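All of the storage figures above follow from one formula: bytes = width x height x bits-per-pixel / 8 (a sketch; note CGA's 320 x 200 resolution):

```python
# Storage per image = width * height * bits-per-pixel / 8.
def image_bytes(width, height, bpp):
    return width * height * bpp // 8

print(image_bytes(320, 200, 2))     # 16000    CGA
print(image_bytes(640, 350, 4))     # 112000   EGA
print(image_bytes(640, 480, 8))     # 307200   VGA
print(image_bytes(1024, 768, 8))    # 786432   8514/A
print(image_bytes(1024, 768, 24))   # 2359296  SVGA
```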
43. 5.4 3D Video and TV
The rapid progress in the research and development of 3D
technology and the success of the 2009 film Avatar have
pushed 3D video to its peak.
The main advantage of 3D video is that it enables the
experience of immersion: be there, and really be there!
Increasingly, it is in movie theaters, broadcast TV (e.g.,
sporting events), personal computers, and various handheld
devices.
44. 5.4.1 Cues for 3D Percept
The human vision system is capable of achieving a 3D
percept by utilizing multiple cues.
They are combined to produce optimal (or nearly optimal)
depth estimates.
When the multiple cues agree, this enhances the 3D percept.
When they conflict with each other, the 3D percept can be
hindered. Sometimes, illusions can arise.
45. Monocular Cues
The monocular cues that do not necessarily involve both eyes
include:
Shading -- depth perception by shading and highlights
Perspective scaling -- parallel lines converge with distance and meet at infinity
Relative size -- distant objects appear smaller compared to known same-size
objects that are not in the distance
Texture gradient -- the appearance of textures changes as they recede in
the distance
Blur gradient -- objects appear sharper at the distance where the eyes are
focused, whereas nearer and farther objects are gradually blurred
Haze -- due to light scattering by the atmosphere, objects at a distance have lower
contrast and lower color saturation
Occlusion -- a far object is occluded by nearer object(s)
Motion parallax -- induced by object movement and head movement, such that
nearer objects appear to move faster.
Among the above monocular cues, occlusion and motion parallax
are said to be the most effective.
46. Binocular Cues
The human vision system utilizes effective binocular vision, i.e., stereo
vision or stereopsis (from the Greek "stereos", meaning firm or solid).
Our left and right eyes are separated by a small distance, on average
approximately 2.5 inches, or 65 mm, which is known as the interocular
distance.
As a result, the left and right eyes have slightly different views, i.e.,
images of objects are shifted horizontally.
The amount of the shift, or disparity, depends on the object's
distance from the eyes, i.e., its depth, thus providing the binocular cue
for the 3D percept.
The horizontal shift is also known as horizontal parallax.
The fusion of the left and right images into single vision occurs in the
brain, producing the 3D percept.
Current 3D video and TV systems are almost all based on stereopsis
because it is believed to be the most effective cue.
47. 5.4.2 3D Camera Models
Simple Stereo Camera Model
We can design a simple (artificial) stereo camera system in which
the left and right cameras are identical (same lens, same focal
length, etc.) and the cameras' optical axes are parallel, pointing in
the Z-direction, the direction of scene depth.
Toed-in Stereo Camera Model
Human eyes can be emulated by so-called toed-in stereo
cameras, in which the camera axes are usually converging
rather than parallel.
One complication of this model is that objects at the same
depth (i.e., the same Z) in the scene no longer yield the same
disparity.
In other words, the disparity planes are now curved.
Objects on both sides of the view appear farther away than
objects in the middle, even when they have the same depth Z.
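For the simple parallel-axis model, disparity falls off inversely with depth: d = f * b / Z, where f is the focal length, b the baseline (the interocular distance above), and Z the depth. A sketch with an assumed focal length (the 17 mm value is illustrative, roughly that of the human eye):

```python
# Disparity in the simple (parallel-axis) stereo model: d = f * b / Z.
def disparity(focal_mm, baseline_mm, depth_mm):
    return focal_mm * baseline_mm / depth_mm

f_mm = 17.0    # assumed focal length (illustrative)
b_mm = 65.0    # interocular distance from the text
for z_m in (0.5, 2.0, 10.0):
    d = disparity(f_mm, b_mm, z_m * 1000)
    print("depth %4.1f m -> disparity %.3f mm" % (z_m, d))
```

The printout shows disparity shrinking as depth grows, which is exactly the cue the visual system uses to estimate depth.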
48. 5.4.3 3DMovie and TV Based on Stereo Vision
3D Movie Using Colored Glasses
3D Movies Using Circularly Polarized Glasses
3D TV with Shutter Glasses