

From MPEG 1 to HEVC: the battle of the bitrate bulge continues

Posted: Tuesday, March 11, 2014

Back in the mists of time – well, 40 years ago – people were contemplating the idea of digital video, and they came up against the stumbling block that the resultant files would be huge. Even though we were talking about standard definition back then, giving every pixel on the screen its own value leads to a lot of bits and bytes.
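As a rough back-of-the-envelope illustration (my figures, not the article's: PAL standard definition at 8-bit 4:2:2 sampling), the uncompressed bitrate can be sketched like this:

```python
# Rough uncompressed bitrate for SD video.
# Assumed figures (mine, not the article's): 720x576 active picture (PAL SD),
# 25 frames per second, 8-bit 4:2:2 sampling = 16 bits per pixel on average
# (8 bits of luma plus 8 bits of chroma shared between pixel pairs).
WIDTH, HEIGHT = 720, 576
FPS = 25
BITS_PER_PIXEL = 16

bits_per_second = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"{bits_per_second / 1e6:.0f} Mbit/s")  # ~166 Mbit/s of active picture
```

Around 166 Mbit/s for standard definition – and that is before audio, blanking or error correction – which is why nobody was going to ship uncompressed video on a CD or a broadcast channel.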

So the idea of compression was a no-brainer. The challenge was to find a way to do it without the audience noticing, or at least not noticing too much. What mathematical process could be applied for visually lossless compression?

The first attempts, in the 1980s, used discrete cosine transforms. Ampex – remember them? – even called their digital recording format DCT.
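The appeal of the DCT is energy compaction: transform a block of pixels and most of the signal ends up in a few low-frequency coefficients, so the rest can be quantised coarsely or thrown away with little visible damage. A minimal sketch of the 1-D transform (a naive direct implementation, not how a real coder computes it):

```python
import math

def dct_ii(block):
    """Naive 1-D DCT-II, the transform at the heart of DCT-based coders."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(block))
        # Orthonormal scaling so the transform preserves energy.
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

# A smooth 8-sample ramp: after the transform, almost all the energy sits
# in the first couple of coefficients, which is what makes coarse
# quantisation of the rest visually cheap.
coeffs = dct_ii([10, 20, 30, 40, 50, 60, 70, 80])
```

Real image coders apply this in two dimensions over 8x8 blocks, but the principle – smooth content collapses into a handful of coefficients – is the same.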

At that time, we already had a good, visually lossless compression algorithm for pictures. But it had been developed by the Joint Photographic Experts Group, so it only worked on still pictures. While some early digital video equipment used “motion JPEG”, this was never a standard, and the lack of predictability in the coding time for JPEG made it all a bit tricky.

So the same clever mathematicians were convened in a new group, the Moving Picture Experts Group. Their task was to find a way of compressing video so it would fit in the data rates of a CD. By the time they published their first standard – unimaginatively called MPEG-1 – it was the early nineties, and the techie world was beginning to buzz with a new concept, the Internet.

Oh, and part of the work on MPEG-1 actually led to the death of the CD, not its renewed purpose. Audio coding level 3 was scorned by all the clever people on the MPEG board but it made it into the specification anyway, and some smart marketing people spotted the fact that if they gave MPEG-1 audio coding level 3 a snappier name – MP3, say – it could change the way we listened to music.

MPEG-2 was developed, on the same fundamental principles, with the express aim of creating a high quality broadcast system. It succeeded, of course, and it made multi-channel television possible.

MPEG-2 is an asymmetric system: encoding is very processor intensive, which is acceptable because you only do it once, while decoding is relatively easy, because you want to do it in a chip which is not going to cost much to add to the television or set-top box. But for broadcast you needed a real-time encoder built on the processing capabilities of the day, so MPEG-2 had a natural limitation: its algorithms were designed for the processors of 1996.

Seven years’ worth of Moore’s Law meant that the designers of MPEG-4 had much more processing power to play with, so they could adopt some much more complex algorithms. That actually made the project extremely difficult, so in the end MPEG collaborated with the Video Coding Experts Group from the International Telecommunication Union.

That is why what we call MPEG-4 for short is, strictly speaking, MPEG-4 Part 10 – Advanced Video Coding (AVC) if you are an MPEG fan, or H.264 if you prefer the ITU standard number.

But when it was published in 2003 it achieved the goal of halving the bitrate relative to MPEG-2, making it practical and affordable to broadcast many HD channels.

Another ten years of Moore’s Law and we are ready for another version. High Efficiency Video Coding, or HEVC – H.265 in ITU-speak – again aims to double the compression ratio. When the good mathematicians got their pencils out, their motivation was to increase the number of HD channels in each transport stream.

Since then, though, other people have tried to steal the initiative, telling us we need 4k video to the home, thereby soaking up all the gains and more (especially if we increase the frame rate too).

HEVC has huge potential. It introduces some new concepts, like variable block sizes, so it can spend fewer bits and less processing on uniform areas like the blue sky or the green pitch, freeing up resources for the movement that we actually want to see. But that requires two or three times the horsepower of an H.264 encoder for the same content.
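The variable-block-size idea can be sketched with a toy quadtree: split a block into quadrants only when its content is busy, so flat regions stay as a few large blocks. This is an illustration of the concept only – a variance threshold stands in for HEVC's real rate-distortion search, and the sizes and threshold are my own assumptions:

```python
# Toy quadtree partitioning (not HEVC's actual coding-tree decision):
# keep a block whole if its pixel variance is low, otherwise recurse
# into the four quadrants, down to a minimum block size.
def partition(pixels, x, y, size, threshold=100.0, min_size=8):
    block = [pixels[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(block) / len(block)
    var = sum((p - mean) ** 2 for p in block) / len(block)
    if var <= threshold or size <= min_size:
        return [(x, y, size)]          # flat enough: one big coding block
    half = size // 2                   # busy: recurse into the quadrants
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += partition(pixels, x + dx, y + dy, half,
                                threshold, min_size)
    return blocks

# A 64x64 "frame": flat grey except a bright 8x8 patch in one corner.
frame = [[0] * 64 for _ in range(64)]
for j in range(8):
    for i in range(8):
        frame[j][i] = 255
blocks = partition(frame, 0, 0, 64)
```

On this frame the partition ends up with a handful of small blocks around the bright patch and a few large blocks covering everything else – the flat areas cost almost nothing, exactly the trade the paragraph above describes.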

So we are still dependent upon cutting-edge hardware running very smart, very fast software to get the best out of the codec. Which is why the AmberFin iCR is so important as a platform for preparing and processing video to achieve the best possible results. If you want to read more on the topic, check out our HEVC white paper.


Posted by Bruce Devlin


Copyright 2017. Dalet Academy by Dalet Digital Media Systems - Agence web : 6LAB