Do you have a strategy to cope with mixed cadence content?

Frame Rate Conversion: Doing it in Software

In a previous blog post – “Playing the numbers game with mixed cadence materials” – I outlined some of the problems that we still face when handling material shot at 24fps. Physical rolls of film may be all but history, but people love the film look, so we still have to wrestle with the 2:3 cadence, the bodge we invented to get from 24fps to 30fps without making everyone run around and talk as if they are breathing helium.

I promised I would return to the subject and talk about what we can do to get the best possible quality out of media that was shot at a different frame rate to the display rate. To figure out the best technique, we first need to know a little about the range of possible solutions. Here are some common techniques:

Do nothing – if there are no interruptions to the 2:3 cadence, then smile sweetly and watch the pictures not getting degraded any further.

Change the metadata – if you tell the receiving device the content is at a new frame rate, it will play the video at that speed. You can get away with this when converting 24fps to 25fps, with a little resampling of the audio, but it really does not work for the big speed-up needed to go from 24fps to 30fps.
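The difference between those two conversions is plain arithmetic. A minimal sketch (function names are mine, just for illustration) of the speed-up and the pitch shift the unresampled audio would suffer:

```python
import math

def speedup_percent(src_fps: float, dst_fps: float) -> float:
    """Percentage speed change when frames are simply re-clocked."""
    return (dst_fps / src_fps - 1.0) * 100.0

def pitch_shift_semitones(src_fps: float, dst_fps: float) -> float:
    """Pitch shift of unresampled audio, in semitones."""
    return 12.0 * math.log2(dst_fps / src_fps)

print(round(speedup_percent(24, 25), 2))        # 4.17 - tolerable
print(round(pitch_shift_semitones(24, 25), 2))  # 0.71 semitones
print(round(speedup_percent(24, 30), 2))        # 25.0 - "helium" territory
```

A 4% speed-up passes largely unnoticed; a 25% one is unwatchable, which is why re-clocking only works for the 24-to-25 case.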

Drop or repeat – you can increase the frame rate by repeating some frames, or decrease it by dropping some. After all, this is roughly what we do with the 2:3 pulldown. But repeating frames regularly is blindingly obvious and your audience will definitely notice. This is only to be used sparingly and intelligently, to repair problems.
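The 2:3 cadence itself is just a structured repeat pattern: each pair of 24fps film frames is spread across five 60Hz fields (two, then three), giving 30 interlaced frames per second. A toy sketch of the mapping:

```python
def pulldown_23(frames):
    """Map a list of film frames to a list of fields using a 2:3 cadence."""
    fields = []
    for i, frame in enumerate(frames):
        repeat = 2 if i % 2 == 0 else 3   # alternate 2 fields, then 3
        fields.extend([frame] * repeat)
    return fields

print(pulldown_23(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

Twenty-four input frames come out as exactly sixty fields, which is why the trick works at all.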

Linear interpolation – basically, blend two pictures together to make a new one. But the better the original pictures, the worse the artefacts: fast motion and sharp edges become really smeary due to the blending process – this can be very visible.
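The smearing is easy to demonstrate. In this sketch, a bright edge moves between two frames; the interpolated frame contains a half-brightness ghost at both positions rather than one sharp edge in between:

```python
import numpy as np

def blend_frames(prev, nxt, t):
    """Interpolate a frame at fractional position t (0..1) between two frames."""
    return (1.0 - t) * prev + t * nxt

prev = np.zeros((4, 4)); prev[:, 0] = 255.0   # bright edge on the left
nxt = np.zeros((4, 4)); nxt[:, 3] = 255.0     # same edge, now on the right
mid = blend_frames(prev, nxt, 0.5)
print(mid[0])  # half-brightness ghosts at BOTH positions: [127.5 0. 0. 127.5]
```

That double image is exactly the "smeary" artefact described above, and the sharper and faster-moving the source, the worse it looks.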

Motion compensated frame rate conversion – this is the same sort of process, but this time every pixel in the scene is measured and its motion vectors are predicted. The new pixels are a linear blend of the source pixels projected along those motion vectors. Smart processing can predict with a degree of confidence whether a pixel belongs to a moving object or is just noise. This works really well with many image types, but some repetitive images – like car wheels, or pans across regular structures such as brickwork – are very difficult to process and produce very disturbing artefacts.
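Real motion estimators work per block or per pixel in two dimensions, but a drastically simplified one-dimensional sketch shows the principle: estimate the shift between two frames by exhaustive search, then build the new frame by blending the sources projected halfway along that motion vector (all names here are illustrative):

```python
import numpy as np

def estimate_shift(prev, nxt, max_shift=4):
    """Find the integer shift that best maps prev onto nxt (SAD criterion)."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = np.abs(np.roll(prev, s) - nxt).sum()
        if err < best_err:
            best, best_err = s, err
    return best

def mc_midframe(prev, nxt):
    """Blend the two frames, each projected halfway along the motion vector."""
    s = estimate_shift(prev, nxt)
    half = s // 2
    return 0.5 * np.roll(prev, half) + 0.5 * np.roll(nxt, half - s)

prev = np.array([0, 0, 9, 0, 0, 0, 0, 0], dtype=float)  # a bright dot...
nxt = np.roll(prev, 2)                                   # ...moved 2 pixels
print(mc_midframe(prev, nxt))  # the dot lands sharply at the halfway position
```

Note how the repetitive-pattern failure follows directly from this scheme: if the image looks identical under several different shifts (spokes, brickwork), the error search has no unique minimum and the estimator can lock onto the wrong vector.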

Black box magic – clever inventors regularly come up with new approaches and better algorithms. The chances are, though, that like motion compensated conversion – the current gold standard – any new technique will work well with some image types and not with others.

So which do you choose? I suggest the smart answer is that you do not choose one mechanism for a whole clip, but choose the best algorithm for a short sequence within the clip.

If you go out and buy a hardware box – even if it has the best possible processing – then your only option is to have that process either in the circuit or bypassed. And as I started out by saying, quite often we can get away without doing anything.

But if you do it in software, then that software can decide which technique to use on a scene-by-scene basis, or even frame by frame if necessary. That is the flexibility of software: you choose how much of it you use. If the next scene needs something different, switch to it. How do you switch? You’ll have to read the white paper to find out – we think it’s quite simple and therefore quite clever – but you should judge for yourself.
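To make the idea concrete, here is an illustrative dispatcher – emphatically not iCR's actual logic, and every metric name and threshold is invented for the example – showing how per-scene measurements could drive the choice of technique:

```python
def choose_technique(scene):
    """Pick a conversion strategy from simple per-scene measurements."""
    if scene.get("cadence_intact"):
        return "do nothing"
    if scene.get("motion", 0.0) < 0.1:
        return "drop/repeat"           # near-static: a repeat is invisible
    if scene.get("repetitive_detail"):
        return "linear interpolation"  # motion vectors are unreliable here
    return "motion compensation"       # the default for general motion

scenes = [
    {"cadence_intact": True},
    {"motion": 0.05},
    {"motion": 0.8, "repetitive_detail": True},  # e.g. a pan across brickwork
    {"motion": 0.8},
]
print([choose_technique(s) for s in scenes])
# ['do nothing', 'drop/repeat', 'linear interpolation', 'motion compensation']
```

The point is not the specific rules but the shape: a software pipeline can measure each scene and route it to whichever converter will do the least damage.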

My conclusion, then, is that the perfect world where interlace no longer exists and cadence is a thing of the past has still not arrived. Until that glorious future, we still have 24fps origination in a 29.97i transmission chain and have to be flexible about how we handle it. The good news, though, is that if you make it part of the transcode and quality control process on a software platform like iCR, you can adapt the processing to optimize the output scene-by-scene.

If you want to find out more about the problems and my solution, we have a white paper.

I hope you found this blog post interesting and helpful. If so, why not sign up to receive notifications of new blog posts as they are published?


