Why interlaced HD?

I don't know what settings are available on your particular set-top box or DVR, but here are some common options and what they will do to the signal.

Native resolution: If your box offers this option, it's probably your best bet, because each broadcast is passed to the HDTV unchanged. Note that you'll have to use an HDMI connection for this option to work.

1080i: This is probably your best bet if the box doesn't offer native resolution, or if you're not satisfied with the native resolution results. It's fine for a 1080i signal, of course, since it sends the signal unchanged to the HDTV.

720p: Your 720p broadcasts are sent unchanged to be upconverted by the HDTV, but your 1080i images lose more than half their pixels--while getting a de-interlacing that's probably inferior to the one your TV would give them.
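
To put a number on "more than half their pixels," assuming the usual HD raster sizes of 1920x1080 for 1080i and 1280x720 for 720p, a quick back-of-the-envelope check:

```python
# Pixel counts per frame, assuming standard HD raster sizes.
pixels_1080 = 1920 * 1080   # 2,073,600 pixels in a full 1080-line frame
pixels_720 = 1280 * 720     #   921,600 pixels in a 720-line frame

print(pixels_720 / pixels_1080)   # ~0.44 -- less than half the pixels survive
```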

If your box doesn't offer either of the first two settings, you should pick the option that appears, in your judgment, to do the least harm. My thanks to Waldojim who, in the original forum discussion, identified Eric's problem as interlacing.

He also provided a very good description of interlacing, which I tried hard not to steal when I wrote my own. Contributing Editor Lincoln Spector writes about technology and cinema.

Progressive video content does not alternate fields; it presents a full picture in every frame, something you will never find in interlaced content.

Consumers will be familiar with this terminology (720p, 1080i, and so on) because of its proliferation in HD content. Many playback devices, like computer monitors or modern HDTVs, do not natively display interlaced video.

So even if interlacing produced better-looking content, a broadcaster would still want to go with progressive delivery, simply because progressive playback is so much more widely supported. Otherwise, the broadcaster would be displaying interlaced video in a progressive format. Sometimes, though, a broadcaster needs to use an interlaced source for streaming--that is, to take an interlaced source and make it progressive, or to watch it in a progressive medium, like a computer monitor.

This need can range from wanting to reuse an older broadcast to using an analogue camera that captures interlaced video. Converting the video involves combining the two fields that were created as part of the interlacing process into a single frame.

By default, this process creates a rather ugly artifact wherever there is fast motion in the video track. The motion between fields can cause visible tearing when the result is displayed as progressive video. Essentially, the video track shows two different fields' lines where the fast motion is occurring, creating the staggered-line appearance seen on the left of the figure below.
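
As a rough illustration (my own sketch, not from the article), here is how naively weaving two fields captured one field-time apart produces those staggered lines whenever something moves between the two captures; NumPy arrays stand in for real video frames:

```python
import numpy as np

# Toy frames: a bright vertical bar that moves to the right between the
# moments the two fields are captured (sizes and motion are made up).
H, W = 8, 32

def frame_with_bar(x):
    """A full progressive frame with a white bar starting at column x."""
    f = np.zeros((H, W), dtype=np.uint8)
    f[:, x:x + 4] = 255
    return f

frame_t0 = frame_with_bar(6)    # scene when field 1 is captured
frame_t1 = frame_with_bar(20)   # scene one field-time later, for field 2

# Naive weave: alternate lines come from captures taken at different times.
woven = np.empty((H, W), dtype=np.uint8)
woven[0::2] = frame_t0[0::2]    # field 1's lines
woven[1::2] = frame_t1[1::2]    # field 2's lines

# Printed as 0/1, alternate rows disagree about where the bar is --
# that disagreement is the comb-like tearing described above.
print((woven > 0).astype(int))
```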

Left: interlaced video shown in a progressive format. Right: deinterlaced video (more on this later).

A lot of analogue cameras, for example, are set up to deliver video in an interlaced manner, and even some modern digital cameras still offer an interlaced mode. One way to tell whether your camera is set up for interlaced output is to check the specs. While some will be overt, stating that the camera outputs in interlaced mode, others will indicate it in the listed resolution.

For example, we already discussed that a resolution ending in "p," such as 1080p, denotes an HD feed that is progressive. Chances are good that you have seen progressive content much more frequently than the interlaced version.
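
As a trivial sketch of reading those labels (my own example, not from the article), the trailing letter is the whole distinction:

```python
def scan_mode(resolution_label: str) -> str:
    """Classify a spec-sheet label such as '1080i' or '720p'."""
    label = resolution_label.strip().lower()
    if label.endswith("i"):
        return "interlaced"
    if label.endswith("p"):
        return "progressive"
    return "unknown"

for label in ("1080i", "720p", "1080p"):
    print(label, scan_mode(label))   # 1080i interlaced, 720p progressive, ...
```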

Most modern analogue cameras, if they are interlaced, should mention it either directly or in the listed resolution. Thankfully, there is a process called deinterlacing which can fix the issues created by presenting interlaced content in a progressive medium. Deinterlacing keeps the lines of one field and interpolates new in-between lines to replace the other field, applying an algorithm to minimize the resulting artifacts, so the tearing disappears.
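
Here is a minimal sketch of that idea: a simple line-doubling ("bob"-style) deinterlacer of my own. Real encoders typically use more sophisticated motion-adaptive filters, but the principle is the same--keep one field's lines and interpolate the missing ones instead of mixing in the other field.

```python
import numpy as np

def bob_deinterlace(woven: np.ndarray, keep_top_field: bool = True) -> np.ndarray:
    """Keep one field's lines and rebuild the other field's lines by
    averaging the kept lines above and below (linear interpolation)."""
    out = woven.astype(np.float32)
    start = 1 if keep_top_field else 0          # rows to replace
    for y in range(start, out.shape[0], 2):
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y + 1 < out.shape[0] else out[y - 1]
        out[y] = (above + below) / 2.0          # interpolated line
    return out.astype(woven.dtype)
```

Applied to a woven frame like the one in the earlier sketch, the staggered bar disappears, at the cost of some vertical detail.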

Deinterlacing is done at the encoder level for live content.

Most of the cones with which we perceive color and detail are located in the fovea, the small central region of the retina that sits directly opposite the lens of the eye, where they receive the light that comes into the eye from a straight-ahead direction. The rest of the retina is covered with rods, which receive most of the light that does not enter the eye from straight ahead, and which are dominant in nonfoveal vision.

Rod vision is coarse but sensitive: the rods do not perceive color or fine detail, but they are much more sensitive to movement than the cones are.

This fact led to several of the big Todd-AO productions from the golden age of movie making being shot with parallel 24 fps and 30 fps cameras. The wraparound Todd-AO screens involved the viewer's peripheral vision so much that there was excessive perceptible flicker at 48 flashes per second.

For Todd-AO projection, the 30 fps film was used, raising the flash rate to 60 flashes per second, while the 24 fps film was used for standard projection on flat screens.

By the way, dogs' eyes have very few cones and are dominated by rods, so although Fido can detect the tiniest movement, he would probably not care for movies unless he could see them in Todd-AO.

In interlaced scanning, each picture, or frame, is scanned as two half-frames, or fields, each field containing every other line of the frame.

In Field 1, all the odd lines of the frame (1, 3, 5, and so on) are scanned; in Field 2, all the even lines (2, 4, 6, and so on). The lines of Field 2 fill in the blanks left when Field 1 was scanned, and vice versa. The fields are scanned, transmitted and displayed sequentially. As they are displayed one after the other, the human visual system perceives the odd-numbered lines and the even-numbered lines as interwoven, or interlaced, which integrates them into a complete picture.
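
A small sketch of that bookkeeping (my own illustration, using a made-up line count): the two fields split the frame's lines between them, and sending them one after the other doubles the number of light flashes per second.

```python
# Which lines of a frame end up in which field (1-indexed, as in the text),
# using a tiny made-up 11-line frame for illustration.
lines = list(range(1, 12))

field_1 = lines[0::2]   # odd lines: [1, 3, 5, 7, 9, 11]
field_2 = lines[1::2]   # even lines: [2, 4, 6, 8, 10]
print(field_1, field_2)

# Sending the two fields one after the other doubles the display rate:
frame_rate = 30                # frames per second
field_rate = frame_rate * 2    # 60 fields (light flashes) per second
print(frame_rate, "frames/s ->", field_rate, "fields/s")
```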

This seems, on the surface, to be the best of all possible worlds. Thirty frames' worth of picture information is transmitted each second, while the vertical repetition rate is doubled to 60 light flashes per second. Being people of the world, however, we know that there is no free lunch. The goal of interlaced NTSC was to effectively provide the full frame's worth of vertical resolution while keeping the vertical repetition rate above the critical flicker threshold.

It was rather quickly determined that while the latter goal was met, the former was not. This is true because the full resolution of an interlaced picture is only realized when it is a still picture. The still picture is the best case; when the picture moves vertically between fields, vertical resolution is compromised.

In the worst case, when there is vertical motion in the picture at a rate of an odd multiple of one scanning line per field, an entire field's worth of resolution is lost. Imagine, for instance, a thin horizontal line moving vertically through the picture at that rate. The line would move at a rate of two scan lines per frame, so that for the entire time it is in the scanned picture it would always be located in either an odd-line field or an even-line field, depending on when it entered the scanned picture.

The line might, therefore, fall only upon the scan lines of the field that is being scanned when it arrives there, and never upon those of the other field. If so, it would appear to flash on and off at the frame rate as it travels through the picture.


