Once upon a time, television was quite simple. Premium content was shot on film, at Hollywood’s standard of 24 frames a second. Then it was cut on film, and any postproduction was done on film. Even delivery from production house to broadcaster was done on film:
The broadcaster then laced it up on a telecine. In 25 frames a second countries, they simply ran the film 4% fast. But in North America, where the frame rate is (nominally) 30, they had to do something slightly clever: they invented the 2:3 pull-down, or cadence.
And what does that mean? Simply put, it means that you get two fields out of one film frame, and three fields out of the next. It also means that some television frames are sharper than others: two in every five contain fields taken from different film frames, and on anything that moves those mixed frames look soft. But hey: we were watching standard definition television on soft CRTs, so no-one really cared too much.
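If the pattern is hard to picture, here is a minimal sketch in Python, with made-up frame labels rather than real video, of how four film frames become five interlaced television frames:

```python
# A minimal sketch of 2:3 pull-down: four 24fps film frames become
# five 30fps interlaced television frames (ten fields).
# Frame labels and field pairing are illustrative, not a real API.

FILM_FRAMES = ["A", "B", "C", "D"]  # one second of film is six of these groups

def pulldown_2_3(film_frames):
    """Yield (top_field, bottom_field) pairs for each television frame."""
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 2 if i % 2 == 0 else 3   # two fields, then three, alternating
        fields.extend([frame] * copies)
    # Pair consecutive fields into interlaced television frames.
    return list(zip(fields[0::2], fields[1::2]))

for n, (top, bottom) in enumerate(pulldown_2_3(FILM_FRAMES), 1):
    mixed = "  <- fields from different film frames" if top != bottom else ""
    print(f"TV frame {n}: top={top} bottom={bottom}{mixed}")
```

Run it and you get frames AA, BB, BC, CD, DD: exactly the two mixed frames in five described above.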
And yes, we know that 30fps television is really 29.97fps, so we had to add another layer of complication: drop-frame timecode, which skips the occasional frame number (the pictures themselves are all kept) so that the timecode stays in step with the clock.
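For the curious, the standard trick is to skip two timecode numbers at the start of every minute, except every tenth minute. A rough sketch of the arithmetic, assuming 29.97fps material and a zero-based frame count:

```python
# A rough sketch of SMPTE drop-frame timecode arithmetic for 29.97fps.
# Two timecode *numbers* (never pictures) are skipped at the start of
# every minute, except minutes divisible by ten; over an hour this keeps
# the timecode within a few milliseconds of wall-clock time.

def drop_frame_timecode(frame_count):
    """Convert a zero-based frame count at 29.97fps to HH:MM:SS;FF timecode."""
    frames_per_10min = 17982   # ten real-time minutes of 29.97fps video
    frames_per_min = 1798      # a "short" minute: 1800 labels minus 2 skipped
    d, m = divmod(frame_count, frames_per_10min)
    skipped = 18 * d           # nine short minutes in every ten-minute block
    if m > 2:
        skipped += 2 * ((m - 2) // frames_per_min)
    adjusted = frame_count + skipped
    ff = adjusted % 30
    ss = adjusted // 30 % 60
    mm = adjusted // 1800 % 60
    hh = adjusted // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(drop_frame_timecode(1800))    # 00:01:00;02 - labels ;00 and ;01 skipped
print(drop_frame_timecode(17982))   # 00:10:00;00 - tenth minutes skip nothing
```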
Back then, viewers in North America had the shifting sharpness of the 2:3 cadence; viewers in other parts of the world had slightly jerky movement and audio that was shifted in both pitch and tempo.
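For the record, the arithmetic behind that shift is easy to check (a quick Python sketch):

```python
import math

speedup = 25 / 24   # 24fps film transferred by running it at 25fps
print(f"tempo: {(speedup - 1) * 100:.2f}% fast")             # ~4.17%
print(f"pitch: {12 * math.log2(speedup):.2f} semitones up")  # ~0.71 semitones
```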
Life is different today. We are watching HD on very sharp, very clean screens, whether it is the big LCD mounted on the wall of the family room or the tablet held close to our eyes. We expect a still frame to be still, and sharp; today's viewers would marvel that a VHS machine even had a pause button, given how bad the frozen picture looked.
We are also rather more critical about audio. Pop music cares about beats per minute, and shifting the tempo by 4% is not popular. Critical listeners of serious music are not only aware of the pitch shift but also hear subtle changes in the sound field caused by phase inaccuracies.
But that is not a problem, because we do not shoot anything on film any more, right?
Sadly, wrong. Our creative colleagues love the film look they get from setting their digital cameras to 24fps progressive. And their edit workstation du jour tells them it is fine to mix frame rates and codecs on the same timeline. So what is the problem?
The problem is that the chances of that smooth 2:3 cadence getting broken somewhere down the line are quite high. Less-than-careful transcoding or graphics insertion can do it. So can quick-and-dirty editing to add commercial breaks or to cut for duration or compliance. So can squeezebacks, although you could argue that anything which discourages ruining the ends of programs that way is not entirely a bad thing.
So what should we do? There are a number of ways to do frame rate conversion that can detect and respond to cadence problems along the way. The smart thing is to treat it as a QC issue and use intelligent software to manage the processing, not least because software has the good sense to leave the signal alone when there is nothing wrong with it, something hardware processors cannot do.
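As an illustration of the idea (and only an illustration: the detection logic, names and thresholds below are invented, not any vendor's algorithm), a QC tool might look for the tell-tale repeated field that clean 2:3 material produces once in every five frames, and flag anywhere the pattern falls out of step:

```python
# An illustrative sketch of broken-cadence detection. In clean 2:3
# material the top field repeats once in every five frames, so the
# difference between the top fields of adjacent frames should hit
# (near) zero at the same position in each five-frame cycle. A repeat
# at the wrong position suggests the cadence has been disturbed.

import numpy as np

def repeat_pattern(frames, threshold=1.0):
    """frames: list of (top_field, bottom_field) numpy arrays.
    Returns a string such as 'MRMMMMRMMM', 'R' marking a repeated top field."""
    marks = []
    for prev, curr in zip(frames, frames[1:]):
        # Mean absolute difference between the top fields of adjacent frames;
        # near zero means pull-down repeated the field (or the scene is static).
        diff = np.mean(np.abs(curr[0].astype(float) - prev[0].astype(float)))
        marks.append("R" if diff < threshold else "M")
    return "".join(marks)

def cadence_break_positions(pattern, cycle=5):
    """Flag repeats that fall outside the dominant five-frame phase."""
    phases = [i % cycle for i, mark in enumerate(pattern) if mark == "R"]
    if not phases:
        return []  # no repeated fields at all: probably not pulled-down material
    expected = max(set(phases), key=phases.count)
    return [i for i, mark in enumerate(pattern)
            if mark == "R" and i % cycle != expected]
```

The point of the pattern check is the one made above: if every five-frame cycle looks as it should, the right answer is to do nothing at all.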
Keep reading the blog to find out more next week.