Japanese broadcaster NHK has developed what it says is a world-first encoder that will let viewers watch two (or more) different types of content through one broadcast channel.
According to the broadcaster, this is the first real-time Versatile Video Coding (VVC) encoder that can handle multi-layering of video content without affecting quality.
NHK said its Science and Technology Research Laboratories (STRL) have been investigating ways to deliver “sub-content” to viewers, enabling them to overlay additional video on the main programme. Traditionally, sub-content would be broadcast alongside the main content, requiring multiple channels to manage the bandwidth and compression.
But NHK’s new encoder can compress sub-content in real time using multi-layer encoding, allowing two videos to be carried over a single broadcast channel, as the sketch below illustrates.
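To make the idea concrete, here is a minimal sketch, not NHK's implementation, of how a receiver might treat such a multi-layer stream: the base layer is always displayed, and the enhancement layer carrying the sub-content overlay is composited only when the viewer enables it. The frame shapes, mask, and function names are assumptions for illustration.

```python
import numpy as np

def present_frame(base_frame: np.ndarray,
                  enhancement_frame: np.ndarray | None,
                  overlay_mask: np.ndarray | None,
                  sub_content_enabled: bool) -> np.ndarray:
    """Return the frame to display from one broadcast channel's layers."""
    if not sub_content_enabled or enhancement_frame is None:
        return base_frame  # main programme only
    # Replace only the masked region (e.g. a sign-language window or a
    # player-cam picture-in-picture) with pixels from the enhancement layer.
    out = base_frame.copy()
    out[overlay_mask] = enhancement_frame[overlay_mask]
    return out

# Usage: a 1080p base picture with a 360x480 overlay window in one corner.
base = np.zeros((1080, 1920, 3), dtype=np.uint8)
enh = np.full_like(base, 255)
mask = np.zeros((1080, 1920), dtype=bool)
mask[700:1060, 1420:1900] = True
shown = present_frame(base, enh, mask, sub_content_enabled=True)
```

The key point is that switching the sub-content off simply falls back to the base layer, so no second channel is ever needed.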

NHK cited examples of where the technology could be used, including football matches, where fans could follow their favourite player or watch what’s happening on the bench while also watching the game.
The technology could also be used for channels with sign language: broadcasters would only need to transmit one channel, and viewers could choose whether to have sign language overlaid on the screen, without having to switch to a different channel.
“As the multi-layer function can allow the broadcaster to transmit any kind of content, for example, for those who prefer captions rather than sign-language, broadcasters can further add captions to the sign-language, making the content even more accessible,” said NHK.
“And if the viewer prefers not to have these additional elements, they can simply switch them off, in which case, only the main programme will remain. All this can be done through the broadcast of one channel,” said the broadcaster.
The encoder generates the source of the enhancement layer from the decoded base-layer video, said NHK, and feeds it into the enhancement-layer encoding process. This makes the base and enhancement layers identical apart from where the sub-content is overlaid, reducing the need for encoding in “many image areas”.
“This reduces the computational complexity in the enhancement layer encoding process, enabling a real-time encoder without the need of a high-performance processor,” added NHK.
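The following is a minimal sketch, under stated assumptions, of the idea NHK describes: the decoded base-layer picture is reused as the source for the enhancement layer, with the sub-content pasted over it, so the two layers differ only inside the overlay region, and blocks identical to the base layer can be signalled cheaply instead of re-encoded. The block size, helper names, and skip logic here are illustrative, not part of the VVC specification or NHK's encoder.

```python
import numpy as np

BLOCK = 64  # hypothetical coding-block size in pixels

def build_enhancement_source(decoded_base: np.ndarray,
                             sub_content: np.ndarray,
                             top: int, left: int) -> np.ndarray:
    """Paste the sub-content window onto a copy of the decoded base picture."""
    src = decoded_base.copy()
    h, w = sub_content.shape[:2]
    src[top:top + h, left:left + w] = sub_content
    return src

def blocks_needing_encoding(decoded_base: np.ndarray,
                            enh_source: np.ndarray) -> list[tuple[int, int]]:
    """Return block coordinates where the layers differ; the rest can be skipped."""
    coords = []
    height, width = decoded_base.shape[:2]
    for y in range(0, height, BLOCK):
        for x in range(0, width, BLOCK):
            if not np.array_equal(decoded_base[y:y + BLOCK, x:x + BLOCK],
                                  enh_source[y:y + BLOCK, x:x + BLOCK]):
                coords.append((y, x))
    return coords

# Usage: only the blocks covering the 360x480 overlay are flagged for
# encoding, which is where the reduction in computational load comes from.
base = np.zeros((1080, 1920, 3), dtype=np.uint8)
overlay = np.full((360, 480, 3), 255, dtype=np.uint8)
enh_src = build_enhancement_source(base, overlay, top=700, left=1420)
to_encode = blocks_needing_encoding(base, enh_src)
```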