Program for SMPTE15: Persistence of Vision - Defining the Future (Sydney, Australia on 14-17 July, 2015)
Tuesday, July 14
Tuesday, July 14, 09:30 - 10:45
This topic will focus on the HEVC codec and its practical integration into your workflows and facilities. We will look at the codec and any performance limitations and issues, and at the opportunities and economics of integrating HEVC, along with adoption curves and proposed timelines for the industry. If you have ever asked "HEVC sounds great, but what does it really mean for me?", then this topic is one not to be missed.
- 09:30 Changing the Game: A Guide to Cost-efficient Software-based HEVC Video Processing Deployment
- The new HEVC/H.265 codec offers many advantages vital to new revenue-generation opportunities and service game changers across the industry, such as: • More channels and increased quality at lower bitrates • Improved consumer quality of experience • Reduced multiscreen CDN opex • 4K Ultra HD enablement and faster file downloads for mobile video viewing • Improved streamed-content quality • Reduced video storage in device memory • More storage capacity for large libraries. The transition to HEVC video processing infrastructures – particularly for large installed MPEG-2 and H.264 bases – must also make economic sense. There is a set of common, tough questions to pose when planning transitions to HEVC-enabled video processing infrastructure: • Can it deliver a fully functional implementation of real-time HEVC encoding at 1080p? • Is it optimized for easy migration from legacy compression codecs? • Can it handle the increased processing power and decision/trade-off complexity required to power HEVC? • Is it optimized to minimize total cost of ownership? • Will it evolve your infrastructure, or leave you stranded as the HEVC codec evolves? Software-upgradable solutions can incorporate these new compression approaches much more quickly and cost-efficiently than fixed-hardware encoding platforms such as ASICs and DSPs. This presentation will provide a practical guide to software-based HEVC deployment through the evolving video ecosystem.
- 09:55 An Analysis of the Impact of HEVC on Existing Media Businesses
- While a new, more efficient codec is unlikely to change everything we know about media, it is likely to have a substantial impact on the viability of existing media businesses. This paper will examine the likely adoption curves and timelines for viable HEVC service delivery in the mobile, STB and TV domains. It will analyse not only consumer devices but also the hardware, software and cloud-based infrastructure required to "end-to-end enable" commercial service delivery with a new codec. The paper will conclude by illustrating a series of HEVC adoption possibilities and timelines for service providers.
- 10:20 Panel Discussion: The Challenges of HEVC
- Our two speakers will be joined by Matt Goldman and Ian Trow for a panel discussion, with the opportunity for members of the audience to pose questions.
It's overcast and the broadcast world is full of clouds, but where is the silver lining? This topic will focus on issues and solutions in utilising multiple cloud service offerings to enhance your workflows, showcasing how others are achieving success.
- 09:30 My Boss Told Me to Transcode in the Cloud
- Conceptually, using cloud infrastructure for media production operations is appealing to many organizations for many reasons. Cloud deployments can provide new capabilities, however there are many different implementation options. This discussion will review the benefits and challenges of various deployment options that should be considered when evaluating whether a cloud implementation is right for your application. Topics to be covered will include: cloud service model comparison; cloud cost model comparison; security considerations; example applications and workflows; and some successful deployment strategies.
- 09:55 Moving high to cloud
- Broadcast technology is constantly evolving: first from tape to file, and now from file to cloud. The need to adopt a cloud-based model is growing rapidly among broadcast and media organizations because of the inherent advantages of scalability, accessibility, speed and availability. Moving high to cloud means significantly extending the IT landscape and boundaries of your broadcast workflow, replacing most of the traditional physical elements of your facility (e.g. video recorders, converters and servers) with virtual IT resources interconnected in a cloud environment. The successful application of the cloud concept is a reality already proven by many major companies such as YouTube, which has for many years used fully cloud-based storage spread across multiple locations to give users the best performance, reliability and speed. Like YouTube, any broadcaster can achieve the same benefits from using a cloud, on condition that the implemented framework is reliable, powerful and easy to use. Introducing a private cloud-storage infrastructure not only improves the management and delivery of your content but also gives you a virtualized storage pool with unlimited MAM functionality and capacity, with notably lower operating costs and avoided capital expenses. Upgrading to a cloud solution must be easy, so that hardware changes do not impact user operations. The benefits of cloud storage are countless; in short, you will forget about where files are stored, as they will always be available in the folders you want, with the names you want. The use of multiple cloud devices means faster transfers, as the system will always choose the best path to retrieve files; increased security, as no direct access to files is allowed, preventing illegal copies and management errors; and, last but not least, lower expense, as multiple vendors and technologies can be effortlessly integrated.
Moreover, a self-guided GUI is crucial: it enables users to perform all operations (e.g. import, export, transcoding, metadata insertion) intelligently and efficiently, in line with company directions. In this talk, listeners will gain a grasp of the advantages of cloud computing and the successful application of this technology in the video industry: • Discover why cloud computing is better than traditional IT resources • Learn about cloud storage and how it actually reduces costs and risks • Find out what the impact of cloud computing is on your organization • Understand the steps needed to move smoothly to a cloud environment. The majority of broadcast and production stakeholders are asking what cloud access and mobility mean for their business. This presentation will cover the strategies required to deploy smarter cloud-based workflows, and how their effective implementation will enable organizations to work more flexibly, in the way they want and from the places they need.
- 10:20 'Pivot Around Your Content'
- The network is quickly becoming the platform, displacing bespoke hardware and desktop workstations - this happens when all the attributes of your creative endeavor automatically migrate to a secure cloud location. Users should be able to create at the full fidelity of the interface they choose - from phone to tablet to browser to full desktop software products that are deeply integrated. When overlaid with business logic it enables new business processes to be created - ones that can span your organization no matter how geographically dispersed and integrate the reality of temporary freelancers. This paper focuses on the workflow impact of mobile and cloud and will extend to the potential for IaaS (Infrastructure as a Service).
Tuesday, July 14, 11:10 - 12:50
SMPTE invites members and guests to the opening address and keynote for the SMPTE15 Conference "Persistence of Vision - Defining the Future". Join Barbara Lange, Executive Director of SMPTE, as she opens the conference and welcomes Chris Fetner of Netflix as the keynote speaker.
- 11:10 SMPTE 15 Official Opening: Persistence of Vision - Defining the Future
- When the Society of Motion Picture Engineers was constituted in Washington, DC on 10 August 1916, it set down the following as its objects: Advancement in the theory and practice of motion picture engineering and the allied arts and sciences, the standardization of the mechanisms and practices employed therein, and the maintenance of a high professional standing among its members. Over the ten decades that followed, the Society has embraced the relentless march of technology while maintaining this original vision of its founders. As we enter the centenary year of SMPTE, it is appropriate to honour that Persistence of Vision (pun intended) while gathering to celebrate how we are Defining the Future of the industry through standards development, encouraging membership and providing education resources for the global industry.
- 11:35 The Next 100 years of SMPTE: How it will shape the future of motion pictures
- SMPTE's influence has largely been focused on traditional distribution technologies: film, digital cinema, tape, and broadcast signal paths. OTT platforms are increasingly penetrating the global market, and SMPTE needs to evolve more quickly to shape the industry and provide the framework on which another 100 years of motion picture entertainment will be based. IP delivery of moving images is quickly becoming the predominant mechanism globally for delivering motion picture entertainment. SMPTE needs to build the next generation of engineers to support the rapid path of innovation that lies ahead, while still supporting the technology that brought us here. Streaming entertainment does not mark the end of SMPTE, but rather the beginning of its next chapter.
Tuesday, July 14, 14:00 - 15:15
This session takes a deep dive into the issues and opportunities that multi-platform delivery brings across the whole industry. We will look at technology challenges, the need for business change and how this integrates with HbbTV and new sources. We will look at case studies and how others, including Foxtel, have delivered true multi-platform services.
- 14:00 HbbTV — Pushing the Traditional Boundaries Out
- TV in Australia is at a massive crossroads. Technology is opening up some big opportunities, not only for the traditional free-to-air networks but also for other parties who wish to get involved in TV programming, brands such as Red Bull, for example. In 2014, a milestone was reached in Australian broadcasting when HbbTV was launched. HbbTV enabled free-to-air broadcasters to offer content to viewers either live or as catch-up TV. It was a big win for Australian audiences and a game changer for the traditional networks such as Seven, Nine and Ten, but there is still ongoing public chatter about the demise of FTA networks — Business Spectator went as far as to say "The restructure of free-to-air TV broadcasting has a long way to go. Lets hope the full service model is not an early victim, because no government is going to save a commercial enterprise just to support Australian democracy" (1). As we move deeper into 2015, Australian broadcasters see a slow uptake of FreeviewPlus, an HbbTV collaboration between all the commercial and government-backed networks. Challenges such as DRM implementation and devices with low processing power can now be easily overcome, meaning the potential for HbbTV is only going to increase, as are the audience numbers and therefore the advertising revenue that can come from it. This session will break down the HbbTV technology, discuss the HbbTV partnership and the multiple challenges that were overcome, and show how it will open up new opportunities in the future as the evolution of free-to-air television. (1) http://www.businessspectator.com.au/article/2014/10/17/technology/does-full-service-free-air-television-australia-have-future
- 14:25 Challenges of Multi-Platform and Multi-Business Model Content Delivery
- What are the technological challenges and opportunities when new parts of a business need to be at the cutting edge, taking big risks, while others need stability and efficiency in order to pay the bills and keep the lights on? How do we handle broadcast and streaming (linear and VOD), coupled with subscription, PPV, advertising and wholesale business models, across multiple different multi-screen products? New technologies such as cloud, XaaS and "big data", while perfect for the new world, can be confronting to the old. This paper seeks to shed light on these issues by drawing on my experiences at Foxtel, where change is constant: the analogue-to-digital transition, the introduction of class-leading PVRs, the development of OTT products and hybrid STBs, not to mention the merger and acquisition of several different businesses.
- 14:50 Hybrid Square: OTT delivery model for Australia and New Zealand
- The adoption of HbbTV standards by Australia and New Zealand represents not only a survival strategy for Freeview members in both countries, but also a unique market-entry opportunity for niche OTT service operators and telcos alike. The author takes the liberty of introducing a new term: Hybrid Square (Hybrid Broadcast Broadband TV to the power of 2, similar to FM Square), meaning the integration of HbbTV and independent IPTV/OTT services on a single operator platform. This paper discusses an end-to-end architecture for such a service and covers all delivery ecosystem components: contribution, headend, ABR encoders, set-top boxes, DRM, middleware, etc. A case study of a commercially deployed Hybrid Square service will also be provided.
Another paper will be added to this session once confirmed... This topic will take a playful look at the future and offer several views on what may come. The topic will reflect on learning from the past 100 years and hypothesise about the future state. Things like audience dynamics, types of devices and extended cloud offerings are only the beginning.
- 14:00 What can we learn from the last 100 SMPTE Years? What will it tell us about the next 10?
- SMPTE celebrates its 100th anniversary, and a lot has changed in those years: B&W film, colour film, television with its evolving quality, computing, the internet, handheld computing, 3D, Smart TVs and the emerging technologies of data mining with deep data analysis. Does the history of these ideas give us a clue as to what the next big thrust is? Perhaps we should not only look through the front door but also the back door. Be surprised.
- 14:25 Into Thin Air: The Cloud Conflict
- Continued advances in compression techniques have resulted in higher-quality content over ever-lower bitrates. Furthermore, internet services continue to speed up. This critical combination is putting additional pressure on the traditional TV players, as consumers have more consumption avenues, added distraction, and more sources of content (for example, user-generated content). Some have said that the rise of the internet and cloud-based encoding and playout centers offers tremendous new opportunities for broadcasters to provide better service at lower cost. However, at the same time, surging smartphone penetration and the consequent escalation in mobile data usage are driving mobile operators to seek out more RF spectrum needed to offer LTE and other 4G services. Broadcasters are inadequately equipped and under-funded compared with the wealth of the telecom industry. This spectrum competition from mobile operators therefore paints a bleak picture for the traditional cable, satellite, and terrestrial delivery of TV. So, what does the future look like, and how can you "grab hold of" the cloud? This paper presents a non-vendor, plain-language introduction and review of key cloud technologies, the political and economic changes now underway, and the growth opportunities (or risks of extinction) of the broadcast industry.
- 14:50 Broadcast in the Age of Disruption
- Technology evolution is disrupting traditional television consumption, resulting in varying levels of broadcast audience decline across the globe. Although the TV screen remains the centrepiece of the living room, the way we consume content has gone through a radical change. Changes in business models underpinning content delivery are required. This paper will examine the dynamics of audience shift, how advances in broadband technology have introduced a new channel to deliver content, how this has led to new content service delivery, and how innovation in video-playing devices has created a competitor to the TV. It will focus on how traditional broadcasters and pay TV operators can not only deal with the threat of Internet TV but grow their brand in doing so.
Tuesday, July 14, 15:45 - 17:00
This topic will look at what the broadcast industry is doing to provide expertise in online content delivery, focusing on delivering a high-value end-user experience. Information at this session will span the MPEG-DASH delivery format, the multitude of divergent content players and the delivery of audio-mixing capabilities for the visually impaired.
- 15:45 Retrospective Audio Description and dubbing for Web Video
- An innovative solution for audio mixing in the browser, for video description and dubbing of both recorded and live internet video. All web-delivered video is optimised for a generic presentation, with a single pre-mixed soundtrack. However, this means that the needs of a significant proportion of the potential audience may not be met. Accessibility of video to partially sighted audiences can be enhanced by audible descriptions of the on-screen action. Typically this is achieved using an entirely separate video asset in which the video description has been 'pre-mixed' into the soundtrack. Unfortunately, the small percentage of users requiring this service means that provision of these separate video assets is uneconomical. Ideally, to encourage wider adoption, the provision of specialized audio services must be as economical as possible, but without compromising usability. Screen have developed an audio-overlay technique to mix additional audio segments with the delivered video soundtrack, independently, in the player. A Java plug-in allows viewers to select additional audio sub-tracks that will be mixed with the main video, in perfect synchronisation with the video playback. The additional audio can be turned on and off by the viewer, and the user can select from multiple additional audio services (where available). This technology allows the additional audio services to be *retrospectively* added to the web-delivered video, as the audio mixing is performed in the browser. In addition, the new audio services can be delivered separately from the video, from a different server or domain. Although this is clearly a technique that could have great impact for the retrospective provision of video description services, the audio-mixing concept also has great potential for the provision of translation services using dubbing or narration. For certain audiences, dubbing is a more accessible localisation technique, e.g. for the young or the visually impaired, who may not be able to read subtitles.
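The overlay idea, mixing a description track into the main soundtrack at the client rather than re-encoding the asset, can be illustrated with a small sketch. This is purely illustrative: the function name, the ducking behaviour and the gain values are our own assumptions, not details of Screen's implementation.

```python
def mix_overlay(main, overlay, overlay_gain=0.8, duck=0.5):
    """Mix an audio-description overlay into the main track at the client.

    Samples are floats in [-1, 1]. Wherever the overlay is active, the
    main track is 'ducked' (attenuated) so the description is audible,
    then the sum is clipped to the legal range.
    """
    out = []
    for i, m in enumerate(main):
        o = overlay[i] if i < len(overlay) else 0.0
        level = duck if abs(o) > 1e-4 else 1.0  # duck only under active description
        s = m * level + o * overlay_gain
        out.append(max(-1.0, min(1.0, s)))      # hard clip to [-1, 1]
    return out
```

Because the mix happens per-sample in the player, the description track can live on a different server or domain and be toggled at any time, which is the property the abstract highlights.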
- 16:10 DASH 2015: where are we at and what next
- MPEG-DASH has promise as a multi-bitrate format supporting a wide range of features, including live, on demand, DRM, captions and more, that would supersede existing formats such as Smooth Streaming, HDS and HLS. It now has wide-ranging support from major software and device manufacturers, with the notable exception of Apple. While it is now possible to have DASH play on just about any device, the implementations to date have not been consistent. This is partly because the MPEG-DASH standard was developed by many parties, each with specific existing requirements. One of the issues is that, as a specification with many variations, every player developer is free to support the parts of the specification they require; as a result, what works in one player doesn't work in another. A particularly large issue emerging in this area is the adoption of DRM in the browser, and what looks to be a very fragmented adoption of Media Source Extensions, Encrypted Media Extensions and Content Decryption Modules. This paper gives an overview of where we are with existing variations on the DASH standard, what is available in the content-generation tool chain, which players support which elements of the standard, and what is developing in the coming year with regard to DRM.
- 16:35 Broadcast Manufacturers Rising to the Challenges of the TV Anywhere Era
- Television broadcasting technology has evolved dramatically over the past 50 years in an effort to keep pace with the demands of higher quality and a broader offering for viewers. Until around 10 years ago, the Internet had little influence on how Broadcasting technology evolved, but in recent times it has opened up a plethora of opportunities for new players in the media space as well as for existing Broadcasters. In the past, suppliers of Broadcasting and Media technology really could safely follow broad technology trends, often handed down from industry or standards bodies, in prioritizing their R&D efforts. Today manufacturers and software providers face a bewildering array of different paths they could take with their offerings, and the pace of innovation has increased dramatically. This presentation discusses how the changes in the industry impact on Vendors and Broadcasters alike, as they both struggle to adapt to unproven business models.
If you are looking to build a business case for further integration, this topic is for you. We look at how to monetise existing assets and how to overcome technical challenges for speed to market, showcasing various models to further integrate your plant using a mix of existing and new technologies.
- 15:45 More for Less — Working Smarter Digitally with your Current Content
- There is an untapped opportunity for broadcasters to leverage their existing digital video content to capture additional advertising revenues, be it pre-roll, mid-roll or post-roll, during live streams. With the fragmentation of audiences across TV, desktop, tablet and mobile, broadcasters have the opportunity to attract more eyeballs to their content across multiple screens, serving targeted adverts to each screen for effective monetisation and ultimately, boosting ad revenues. Currently, broadcasters are using the traditional 30-60 second TVC for their content on digital mediums such as websites and IPTV. This is not the most effective way to advertise to an audience that is not viewing content on a TV. The technology is available today for broadcasters to use and monetise their existing video content, overlaying ads formatted to the new devices in use without increasing the work the broadcaster has to do or, in most cases, the technology that the broadcaster has to use. As broadcasters move more content into online and OTT channels, as the Seven Network did with the Australian Open tennis by broadcasting multiple courts exclusively online, it becomes increasingly important to realise the potential of serving ads to these channels. This session will share key insights into how broadcasters can implement these solutions and explain how to overcome the technological challenges in doing so.
- 16:10 Can't keep up? A virtual approach
- The television business has grown up around technology that performs unique functions at data speeds and densities that demand dedicated hardware platforms to deliver the required performance. Changing market dynamics in the online media marketplace are spilling over into traditional television and creating big challenges for operators. The dedicated hardware approach cannot deliver the "time to market" needed to compete. At the same time, the IT industry is transforming at incredible speed as a result of the rapid performance improvements in commodity compute hardware and the shift to network function virtualisation. This paper contemplates whether we are now at the point where the television transmission process can be fully virtualised, and how to manage this transition. We will look at the technology evolution that has brought us to this point, as well as the benefits of virtualisation, which include reductions in both cost and time to market, the possibility of sharing hardware between functions, and cost-effectively coping with peaks in demand for processing.
- 16:35 SCAsat Audio Distribution: Best of Satellite, Best of WAN
- When a new broadcasting corporation was formed from the merging of several existing broadcaster groups, challenges arose in terms of cost, compatibility, and consolidated operations. The mergers had resulted in the continent-wide contribution/distribution audio network being a patchwork quilt of incompatible networks and systems. Centralized, or even efficient management was impossible. Engineers from the broadcaster, along with software, hardware, and IT professionals, working together for over two years, fully developed, extensively tested, and have now deployed an ideal audio distribution/contribution solution. Each individual piece of technology works together providing a cost-effective, reliable, and user-friendly audio and metadata network. The solution incorporates persistently redundant delivery, backup head ends, multiple contribution points, plus full metadata and GPIO, to provide multiple channels of scheduled and ad hoc audio. This paper and presentation describe the parameters, challenges, and technological solutions such that other engineers and broadcasters may integrate or even duplicate the process and results.
Wednesday, July 15
Wednesday, July 15, 09:30 - 10:45
The high dynamic range of our capture devices for 4K cinema now produces exceptional colorimetry, but have you ever considered the wider implications of viewing this content on personal displays that still use standard-dynamic-range standards developed more than 60 years ago for cathode ray tubes? This session will discuss the problem at hand, from the production chain to the personal display, outlining the need for change and how some problems can be alleviated.
- 09:30 Marrying High Dynamic Range with HD and Ultra-HD Content: Analysis of Impacts on the Broadcast Chain
- High Dynamic Range (HDR) video has won favor with both industry experts and consumers for the immersive television viewing experience it offers. The profound level of realism it enables through higher peak luminance (display brightness) and color detail is set to transform television much as color did to black & white and HD did to SD, and the industry is progressing quickly towards HDR-enabled screens and devices. Despite this, the standard dynamic range (SDR) used by all TV production today is governed by standards developed over sixty years ago for cathode ray tube (CRT) TVs! In order to transmit and properly render HDR content, the industry now needs alternate transfer functions and related signaling to support a higher dynamic range of video signals. However, these new technologies are not compatible with existing display devices and could also have several implications for the existing broadcast chain, such as the need for increased bitrates. This paper will present research on the effects of these systems on video compression efficiency, and explore bitrate requirements for providing HDR services using existing video compression technology.
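As an illustration of the kind of alternate transfer function referred to here, SMPTE ST 2084 defines the Perceptual Quantizer (PQ) EOTF, which maps a nonlinear signal value in [0, 1] to absolute luminance up to 10,000 cd/m². A minimal sketch in Python follows; the function name is ours, but the constants are the published ST 2084 values.

```python
def pq_eotf(signal, peak=10000.0):
    """SMPTE ST 2084 (PQ) EOTF: nonlinear signal in [0, 1] -> luminance (cd/m^2)."""
    # Constants as defined in ST 2084
    m1 = 2610 / 16384          # ~0.1593
    m2 = 2523 / 4096 * 128     # ~78.84
    c1 = 3424 / 4096           # ~0.8359
    c2 = 2413 / 4096 * 32      # ~18.85
    c3 = 2392 / 4096 * 32      # ~18.69
    e = signal ** (1 / m2)
    y = (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)  # normalised luminance
    return peak * y
```

Note how steeply the curve allocates code values: a signal of 0.5 maps to only around 92 cd/m², leaving most of the range for highlights, which is exactly why SDR transfer functions built for CRTs cannot carry this content.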
- 09:55 4K/UHD Viewing: The Whole Truth and Nothing But the Truth
- When watching video, we assume we're seeing everything - "the whole truth." But really we're seeing a version of the truth; adjustments have been made based on whatever the display interface can accommodate. But it is possible to see the whole truth on any screen, even in 4K and UHD. You simply need the right display interfaces. This paper will explain how choosing the right cables can make all the difference in getting the video from the player to the screen. Using video from several historical eras playing in 4K 60p, the presentation will demonstrate how to determine the best connections for any situation.
- 10:20 Why is High Dynamic Range Critical to Live Ultra HD and How Might it be Implemented?
- Improved resolution has been the principal justification for the adoption of 4K / Ultra HD to date, leading many to question the benefit of UHD when compared with existing HD 1080p services at the typical access bandwidths available to viewers. Since resolution alone isn't enough to give UHD the fidelity expected, there has been renewed interest in implementing High Dynamic Range (HDR), along with the related issue of Wider Color Gamut. To date, HDR has been demonstrated on content prepared offline, leaving open the question of how HDR can be implemented for live content. This paper will explain why HDR is needed, outline the likely implementations, and discuss the impact on existing workflows and the implications for existing and future screen requirements.
Underwater cinematographer and designer, Pawel Achtel will have on display the world's first 6K 3D Underwater Camera System. A revolutionary housing for RED EPIC cameras using Nikon Nikonos submersible lenses.
- 09:30 ACS Workshop: Underwater Optics & 3D
- There is more to filming underwater than keeping the camera dry. Traditionally, we have placed perfectly good terrestrial lenses behind flat and dome ports, hoping for good results underwater. But how do these ports affect image quality? We are going to explore and compare various options for underwater ports and optics, examining some of the challenges we face when trying to match the quality of underwater images with those we can achieve on land. Expect the unexpected...
Wednesday, July 15, 11:10 - 12:50
This session will address spectrum and transmission issues, from plant maintenance and standardisation through to the signal performance of Ultra HD across DVB-T2 and DVB-S2. If you are interested in broadcasting a signal to all homes, then this session is for you.
- 11:10 Extending Radio Broadcast Transmitter Life
- The paper will discuss methods of extending radio broadcast transmitter life through proper installation, environment and maintenance. It will cover proper grounding and installation, cooling of the transmitter room, and methods to keep the room and equipment clean, noting that transmitter life is dependent on proper installation, environment and maintenance.
- 11:35 UHDTV (4k) Transmission over DVB-T2
- UHDTV (4K) technology is evolving quickly and standards are set. Consumer devices and professional broadcast equipment are available to set up end-to-end broadcast networks that allow the transmission of UHDTV content. The paper will provide an overview of the standards and requirements for sending 4K content via a terrestrial broadcast network using DVB-T2 and HEVC compression. Topics still on the table will also be touched on, such as the limitations of a terrestrial DVB-T2 broadcast network in data rate, coverage and number of 4K channels when applying HEVC compression, High Frame Rate (HFR) and the extended colour gamut of BT.2020. The impact of using the maximum possible bitrates in our spectrum, in relation to transmission power, will be part of the discussion. 4K is already on its way in some countries; an overview of the 4K trial in Korea will be given.
- 12:00 DVB-T2 Standards - the Devil is in the Detail
- Standardisation is, with good reason, at the heart of nearly every system in broadcasting, where ensuring equipment interoperability is key. Digital terrestrial platforms have uniquely not been able to enjoy the level of standardisation required to achieve this basic goal when anything more than the simplest real-world solution is required. This means that many customers who have built systems using deterministic PLP replacement, or who have satellite DTH-compatible distribution feeds, may not be aware of the extent to which their systems are proprietary, non-interoperable and, in many cases, subject to debilitating patents from the manufacturers providing the solutions. This presentation examines the key problem areas that currently exist and identifies where further standardisation is required. It also discusses the recent activities that have started within the DVB to extend the DVB-T2 standard, which could close some or all of these gaps. This work requires support not only from manufacturers such as Appear TV, who are committed to supporting standardisation, but also from the wider community, which needs to be more aware of where these problems exist, how serious the implications are, and how essential it is that they also embrace the efforts to fix them.
- 12:25 Effect of UHD High Frame Rates (HFR) on DVB-S2 Bit Error Rate (BER)
- Almost ten years ago, the television industry was in the same situation as it is now: HD was the new technology, and the high compression capability of MPEG-4 made broadcasting it feasible. History is repeating itself, and the television industry is all geared up for UHD broadcasting, this time with the help of HEVC. Rec. 2020, the UHD specification, was released in 2012, and HEVC's specification was finalised in 2013, the same year HDMI 2.0 was released. 6G-SDI is still being developed, and new features are still being added to Rec. 2020, HEVC and HDMI 2.0. The initial DVB-UHD-1 specification was also recently finalised, in 2014. With the help of these standards, cinema producers, editors, manufacturers and distributors are working towards making UHDTV broadcasting practical by 2017 to 2020. Everything is therefore at an initial stage, and any kind of information can help in anticipating the areas to focus on in the future. In this paper, the signal performance of HFR UHD and HD video transmission is analysed using a future broadcast scenario of multiple resolutions (1080p, 2160p), frame rates (25fps, 50fps) and video compression methods (MPEG-4 and HEVC). Different video samples are transmitted through a DVB-S2 module with an 8PSK modulation scheme and 5/6 code rate, in the presence of AWGN. Results show that BER decreases as the frame rate increases; UHD videos have a higher BER than HD; and HEVC video compression results in a lower BER than MPEG-4. These results will contribute towards developing the DVB-UHD broadcast standard and migration strategies from HD to UHD. They should also encourage the television and media industry to adopt HFRs and HEVC video compression for UHD, and for HD, due to the significant advantages discussed in the paper.
Delivery method: PowerPoint presentation of the paper, including MATLAB simulation graphs depicting the significant differences between the bit error rates of the different video standards. Authors: Urvashi Pal (Presenter), firstname.lastname@example.org; Horace King, Horace.email@example.com, Victoria University, Melbourne, Australia
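The modulation-and-noise core of such an experiment is compact enough to sketch. The Python snippet below is an illustrative Monte-Carlo of uncoded, Gray-coded 8PSK over AWGN only; unlike the authors' DVB-S2 chain it applies no 5/6 FEC, so absolute BER values will differ, though the trend with Es/N0 is the same (function names are ours):

```python
import cmath
import math
import random

GRAY = [0, 1, 3, 2, 6, 7, 5, 4]                 # bit pattern for each phase slot
POINTS = [cmath.exp(2j * math.pi * k / 8) for k in range(8)]
SLOT_OF = {bits: slot for slot, bits in enumerate(GRAY)}

def simulate_ber(esn0_db: float, n_symbols: int = 20000, seed: int = 1) -> float:
    """Monte-Carlo bit error rate of uncoded Gray-coded 8PSK over AWGN."""
    rng = random.Random(seed)
    sigma = math.sqrt(10 ** (-esn0_db / 10) / 2)  # per-dimension noise std (Es = 1)
    errors = 0
    for _ in range(n_symbols):
        bits = rng.randrange(8)                   # three random information bits
        rx = POINTS[SLOT_OF[bits]] + complex(rng.gauss(0, sigma),
                                             rng.gauss(0, sigma))
        slot = min(range(8), key=lambda k: abs(rx - POINTS[k]))  # nearest point
        errors += bin(bits ^ GRAY[slot]).count("1")
    return errors / (3 * n_symbols)
```

Because adjacent constellation points differ in a single bit under the Gray map, most symbol errors cost only one bit, which is why the measured BER falls well below the symbol error rate.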
Wednesday, July 15, 11:10 - 12:40
Want to understand the engineering behind your pictures? This session will focus on a deep technical understanding of what you see. Papers include a review of the SMPTE ST 2084 standard for high dynamic range, colour matching across multiple cameras, spectral variances across display devices, and understanding colour space for HDR and Wide Colour Gamut on Ultra HD.
- 11:10 Understanding SMPTE PQ
- With growing interest in dynamic ranges on the order of 100 to 1000 times what conventional systems deliver, the existing gamma curve, based on the legacy Cathode Ray Tube, was found to be inefficient. The Society of Motion Picture and Television Engineers has published a new standard (ST 2084) which is based on how the human eye actually sees, not on a convenient artifact of physics. This new Electro-Optical Transfer Function (EOTF), called the Perceptual Quantizer or "PQ", provides a dynamic range of 0 to 10,000 candelas per square metre (commonly known as "nits") using just 12 bits, with each step change in code value below the human threshold of visibility. The new standard is already being adopted for OTT and next-generation media formats. In this session, we will provide a tutorial on what it is, how it works, and how it is being deployed in premium OTT services this year.
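The curve the abstract describes is defined in closed form in the standard; as a minimal Python sketch (function names are ours, constants are the exact rationals published in ST 2084), the PQ EOTF and its inverse can be written as:

```python
# SMPTE ST 2084 (PQ) constants, given as exact rationals in the standard.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.8438
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.8516
C3 = 2392 / 4096 * 32    # ~18.6875

def pq_eotf(n: float) -> float:
    """Map a normalised PQ code value n in [0, 1] to luminance in cd/m2."""
    p = n ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(luminance: float) -> float:
    """Map luminance in cd/m2 (0..10000) back to a normalised code value."""
    y = (luminance / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2
```

Code value 1.0 maps to the 10,000 cd/m2 peak and 0.0 to black; quantising the normalised value to 12 bits gives the sub-threshold step sizes mentioned above.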
- 11:40 Better Colour Conversions for HDR and UHD TV Productions
- The introduction of high-dynamic-range (HDR) color spaces and Ultra-HD's wide-color-gamut (WCG) color space in television creates a need to match colors on HDR and WCG displays with colors on conventional HDTV displays. Many contemporary color conversion methods that apply to conversion from HD to SD, and color conversion methods mandated by current television standards, fail to produce a good color match when converting colors from the HDTV color space to HDR or WCG color spaces. This impairs the quality of HDTV contributions to HDR and UHD productions, as well as the quality of distribution of programs originating in HDR or UHD to HDTV viewers. This paper illustrates the color errors caused by the application of several of these conversion methods, and recommends one method, using display-referred colorimetry, for better color matching between narrow-gamut, wide-gamut and high-dynamic-range displays.
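In linear light, display-referred conversion between primary sets reduces to a 3x3 matrix multiply. As an illustrative sketch only (the coefficients are the BT.709-to-BT.2020 matrix published in ITU-R BT.2087; the method the paper actually recommends may differ in its handling of transfer functions and gamut mapping):

```python
# ITU-R BT.2087 matrix: linear-light BT.709 RGB -> linear-light BT.2020 RGB.
# Input must be linear (inverse OETF already applied), not gamma-encoded.
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def rgb709_to_rgb2020(rgb):
    """Convert one linear-light RGB triple from BT.709 to BT.2020 primaries."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M_709_TO_2020)
```

Each row sums to 1.0, so reference white maps to reference white; a BT.709 primary such as pure red lands inside the BT.2020 gamut at (0.6274, 0.0691, 0.0164) rather than on its boundary.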
- 12:10 Color Grading with Color Management
- Many productions today shoot with more than one model of camera, creating images in different color spaces. Content also often needs to be remastered for delivery to different displays (Rec. 709, P3, Rec. 2020 HDR). This is a problem that calls out for a comprehensive color management solution, yet there is an overwhelming trend in the industry for colorists to simply take images as they come out of the camera, look at them on their display, and twist the knobs until they look good. This is something the grading tools were never designed to do, leaving people scrambling to find LUTs to solve their problems. There is a better way: using comprehensive color management as a foundation for creative color correction. Comprehensive color management accounts for the color space of your camera and your display, and connects everything through the color space you grade in. The space you choose to grade in can also have a huge impact on the look, quality, and range of the images you can produce with common color grading tools. This paper will recap the fundamental color science principles needed to understand color spaces as they apply to cameras, displays and grading tools, and give a brief history of color grading workflows and how we arrived at current practice. The Academy Color Encoding System (ACES) will be presented as a framework for solutions to this problem, including recent case studies.
Wednesday, July 15, 14:00 - 15:15
This exciting talk is one in a series of Distinguished Lectures prepared by the IEEE Broadcast Technology Society. The talk will focus on a Cloud Transmission System that uses spectrum overlay technology to simultaneously deliver multiple program streams with different characteristics and robustness for different services (mobile TV, HDTV and UHDTV) in one RF channel. This lecture will describe the basics of the system, the building blocks and design basics as well as the performance of LDM under different conditions.
- 14:00 Layered Division Multiplexing: Basic Concepts, Application Scenarios and Performance
- This talk is one in a series of Distinguished Lectures prepared by the IEEE Broadcast Technology Society. The Cloud Transmission System is a flexible multi-layer system that uses spectrum overlay technology to simultaneously deliver multiple program streams with different characteristics and robustness for different services (mobile TV, HDTV and UHDTV) in one RF channel. In this system, the transmitted signal is formed by superimposing a number of independent signals at desired power levels to form a multi-layered signal. The signals of different layers can have different characteristics, i.e. different coding, bit rate and robustness. For the top layer, these characteristics are chosen to provide a very robust transmission that can be used for mobile broadcasting to handheld devices: bit rate is traded for more powerful error correction coding and robustness, such that the receiving Signal to Noise Ratio (SNR) threshold is a negative value, e.g. in the range of -2 to -3 dB. A negative SNR threshold indicates that the system can withstand combined noise, co-channel interference and multipath distortion powers higher than the desired signal power, making the top layer highly robust against co-channel interference, multipath distortion and Doppler effects. The system is one of the candidates for the ATSC 3.0 standard; it is aligned with the COFDM and LDPC techniques used by DVB-T2, and it is one of the key technologies currently being analysed within the Physical Layer Technical Group of the Future of Broadcast Television (FOBTV) initiative. This lecture will describe the basics of the system, its building blocks and design, as well as the performance of LDM under different conditions.
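The power arithmetic behind layer superposition can be sketched simply. The model below is our own idealised illustration (a hypothetical two-layer signal with a chosen injection level and perfect cancellation at the lower-layer receiver), not the lecture's actual simulation:

```python
import math

def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

def layer_snrs_db(injection_db: float, channel_snr_db: float):
    """Per-layer SNRs for an idealised two-layer LDM signal.

    Upper-layer power is normalised to 1 and the lower layer is injected
    `injection_db` below it. The upper-layer (mobile) receiver sees the
    lower layer as extra noise; the lower-layer (fixed) receiver is
    assumed to cancel the upper layer perfectly before decoding.
    """
    lower = db_to_lin(injection_db)                   # lower-layer power
    noise = (1 + lower) / db_to_lin(channel_snr_db)   # channel noise power
    upper_snr = 1 / (lower + noise)                   # interference + noise
    lower_snr = lower / noise                         # after cancellation
    return 10 * math.log10(upper_snr), 10 * math.log10(lower_snr)

# With a -5 dB injection level, the upper layer's SNR is capped near +5 dB
# however clean the channel is; coding it for a negative SNR threshold
# therefore leaves ample margin for noise, interference and multipath.
```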
Tasmanian-based Ignite Digi will showcase cinema-level UAVs/drones as well as the use of Freefly MoVI gimbals on the ground.
- 14:00 The very latest Drone applications
- Tasmanian-based Ignite Digi co-owners Tom Waugh (operator) and Chris Fox (pilot) will present a look at the changes in drone technology, specifically for cinematography: how the stabilisers have advanced and are now capable of carrying RED Epic and Alexa Mini kits, flying lenses from 8mm to 85mm for full creative freedom. What sorts of storytelling and shot possibilities has this technology opened up that were previously very difficult or expensive to achieve? We will look at the crossover of technologies: MoVI and similar stabilisers originally designed for drones are now used handheld, on a jib, on a car mount, anywhere you want a remote head. We will also discuss the teamwork involved in the two-person operation (pilot and camera operator) and how best to work with the director and cinematographer on a film set to achieve the shots efficiently. After the Q&A session, Ignite Digi will fly their Alexa Mini setup in the drone zone outside the SMPTE pavilion.
Wednesday, July 15, 15:45 - 17:00
This session will take a detailed look at IP delivery of contribution video across cloud and 4G systems. Addressed will be issues of latency, problems and solutions in the delivery of contribution and how to address the business side to fund infrastructure.
- 15:45 Looking to the Cloud for Multiscreen Video Contribution Management
- Today's broadcasters are constantly challenged to reliably deliver low-latency, high-quality video to multiscreen audiences on-air and online. One powerful solution is a centralized cloud management system for the routing of contribution content and online publishing services including live feeds, remote transmissions from bonded wireless transmitters, and network feeds. Acting as a type of virtual sub-router, this solution integrates into a wide range of broadcast IP-based workflows and gives the operator a single dashboard for managing contribution assets, thereby offering a powerful toolset for distributing low-latency, high-quality live content across multiple delivery platforms.
- 16:10 Video contribution from anywhere, at low cost
- Live streaming over 4G provides a low-cost way to create live video streams of events as they happen. This is a practical guide to our experiences creating live streams from sporting and other events and streaming live to the internet to multiple playback devices (OTT). Broadcasting over mobile networks is a bit of a "Wild West" in terms of bandwidth, radio links and reliability, so we aim to discuss some of the problems, and some solutions, to enable anyone to go live from any location. With careful placement of antennas, the latest 4G modems, new 4G network features and cloud transcoding, everything can be made to work on a budget small enough for just about anyone to broadcast anything.
- 16:35 From traditional baseband to video over IP: Do we dare to change? The paradigm shift
- After many years of traditional broadcasting, facility and production companies are facing the transition from baseband to video over IP. This giant leap of innovation and regeneration has serious consequences for the traditional broadcast infrastructure and the entrenched knowledge of broadcast engineers. This paradigm shift (similar to the shift from the analog to the digital era) involves changes in bandwidth and transport, standards and protocols, and reliability and stability. Let's evaluate the real impact of this shift on today's classic broadcast methods and see whether the existing infrastructure and engineering habits are for or against one of the biggest changes in broadcast history.
Join our ACS partners for a panel session with Q&A focusing on television drama production.
- 15:45 ACS Panel Session: Television Drama Q&A
- An expert panel led by moderator Renee Brack, including cinematographers Simon Chapman ACS (Glitch, Nowhere Boys, The Little Death), Martin McGrath ACS (Rake, Jack Irish, The Fatal Shore) and Louis Irving ACS (Tangle, Packed to the Rafters, Dr Blake Mysteries), and renowned directors Adrian Wills (Redfern Now) and Ian Watson (Anzac Girls, Janet King, Packed to the Rafters), will explore and discuss the collaborative spirit, development, popularity and technical achievements that continue to drive television drama here at home and internationally.
Thursday, July 16
Thursday, July 16, 09:30 - 10:45
If you thought archiving comes at the end of the production chain, then you are mistaken. This session will discuss the standards and metadata required at capture to ensure quality archiving. Finally, we hear from the National Archives of Australia to understand what is important to them in ensuring your hard-earned film content lasts for future generations.
- 09:30 SMPTE AXF: A Comprehensive Solution for Digital Archiving
- Robust archives are vital to the protection of assets for future generations and to leveraging their value. Digital archiving is the obvious direction for the future, but a good solution is not easy to find; the size of moving image archives presents a cost concern, and there are many other challenges. The Archive Exchange Format (AXF) standard developed by SMPTE meets all of the requirements advanced by a wide range of users. AXF will handle archives of essentially unlimited size (spanning multiple media units if required), is independent of the storage medium, the operating system, and the application that creates it, and facilitates migration to new media. Above all, AXF is vendor agnostic; any AXF-compliant archive may be recovered by any AXF-supporting system, and it will also be possible to recover AXF archives using open-source software currently in development. The AXF standard was designed to meet the needs of moving image archives, but is in no way restricted and will support any form of data. For example, the US Library of Congress is already working towards the adoption of AXF for databases of large volumes of geophysical data. AXF supports rich metadata and, as the tools defined by AXF became understood by end users, it became apparent that those tools offered solutions to file management and documentation needs throughout the production workflow, starting on the set. Through a simple, straightforward modification of the XML schema that underlies AXF, it is possible to create manifests of files that carry file metadata needed throughout the workflow all the way to the archive. In essence, use of AXF tools upstream of archives permits application of AXF in "unwrapped" form, with the files being "wrapped" into AXF Objects for archival purposes when desired. The paper will describe upstream applications of AXF and the modifications of the AXF schema necessary to enable those applications. 
Work on creation of a standard for upstream use of AXF tools is under way in SMPTE. The presentation will summarize the main characteristics of the AXF standard, and describe the continuing work underway in the SMPTE engineering committees.
- 09:55 Maximizing the Potential of Legacy Content in New Media Asset Management Deployments
- Media asset management systems provide excellent tools for managing new content that you ingest and create, but are often limited in their ability to manage legacy content that existed before the asset management system was commissioned. Tape access standards such as LTFS and advanced file wrappers enhance asset portability and interchange going forward, but additional technology is required to address legacy archives and leverage valuable metadata in their associated databases. Meanwhile, the ability to leverage unstructured metadata including related documents and closed captions can provide the asset management system with rich, otherwise-unavailable metadata for legacy content without significant manual effort.
- 10:20 Ensuring A Persistence Of Vision - Preserving Archival Footage For Future Generations
- The Audiovisual Preservation Section of the National Archives of Australia (NAA) holds one of the major collections of Australia's national audiovisual heritage. The wider NAA is responsible for the preservation of Commonwealth of Australia federal government records. It describes its brief as 'collecting and preserving Australian Government records that reflect our history and identity… events and decisions that have shaped the nation and the lives of Australians.' The NAA Audiovisual Section is responsible for the preservation of audiovisual items the NAA has acquired in accordance with its statutory obligations - mostly works produced by Australian government departments and agencies, commissioned by them, or acquired as part of their activities. This presentation is intended to provide an overview of the National Archives of Australia's digitisation process for non-theatrical motion picture film. The Archives has one of the largest collections in Australia, comprising approximately 260,000 films ranging from early nitrate to modern polyester. The vast majority of the collection is 16mm, although nearly all types of film (from original and intermediate components to prints) are represented. These films vary considerably in their overall condition, which affects both long-term preservation and viewing. Discussion will focus on the differing requirements of archival preservation as opposed to restoration activities, as well as the importance of experience and skills in this area.
This session will look at the changes in lighting technology, gaining a deeper understanding of camera raw formats, and then capturing all the action safely and legally using drones. If you love your image capture end to end, then this session will not disappoint.
- 09:30 The Quality of Light and its Effect on HDR Cinematography and Photography
- HDR – high dynamic range imaging – is more than just the amount of latitude or dynamic range a camera can capture and a display can display. Colour is a critical aspect of cinematography and photography, and as we move towards larger-gamut colour spaces such as Rec. 2020, the accurate capture of colour becomes more important. Modern digital cinematography cameras such as the ARRI Alexa, Sony F65, RED Dragon and the coming new Varicam 35 from Panasonic all have the ability to produce reasonably high-dynamic-range imagery – images of 14 stops and over. These cinematography cameras also have very large colour gamuts. While the use of HDR cameras is not new, awareness of their image capture capabilities, and the drive for display technology able to show very high quality imaging, are putting a focus on image quality as a whole. Alongside the changes in acquisition technology, there has also been a change in lighting technology. While there are endless debates over the various qualities and capabilities of cameras, the introduction of solid state lighting hasn't attracted the same focus on pure quality that the cameras have had; in fact, a main driver for solid state lighting has been price, both the initial cost and future running costs. This paper examines solid state lighting technology, analysing the Spectral Power Distribution of a number of current LED lights, including focusing lights and panel lights. Real-world testing of the lights was carried out on various materials covering the visual spectrum, and the results are compared with the standard reference tungsten and daylight sources that have been used for many years in cinematography and photography. The ramifications of using LED lights with a poor spectral response are studied with respect to colour inaccuracy, colour over angle and CCT errors, the associated visual effects, and the implications for post production.
- 09:55 Camera Raw for High Dynamic Range Workflows
- This paper presents what all camera raw formats have in common, as well as the different approaches used by several vendors to preserve the highest-fidelity image information while managing the large amounts of data required to represent those images. Camera raw workflows provide a variety of techniques to convert sensor data into RGB image formats suitable for mastering, along with methods for generating a specific "look" for the images. The tremendous dynamic range afforded by a camera raw format permits a wide range of exposure during acquisition, which can be mapped into the limited standard dynamic range of conventional output image formats or leveraged for use in the new High Dynamic Range (HDR) image formats. However, looks and camera specifications can be deceiving, and one must know what is happening within the workflow to obtain the best image quality. This paper was presented at the October 2014 SMPTE Technical Conference in Hollywood; it is offered for this audience with some updated information.
- 10:20 The Safe and Legal Operation of Commercial Drones in Australia
- The use of drones and drone technology is on a rapid increase due to advances in modern technology. This poses considerable safety concerns as inexperienced operators scramble to utilize these technologies to get the edge in their chosen industries. The Motion Picture and Television industries will be heavily influenced by the use of drones and drone technology, and therefore it is vitally important that the industry as a whole has an understanding of the rules, regulations and safety concerns whilst operating these machines commercially. Where once seen as toys, these are now tools of the trade, and as such their use and the safety aspect of such use is something to be respected and taken seriously.
Thursday, July 16, 11:10 - 12:50
This session will review how to handle the large amounts of data associated with 4K; the IMF MXF file format and what it offers; file-based workflows for existing satellite and microwave networks; and how cloud services and IP production workflows form part of any new architecture, along with the operational considerations. If you want to understand workflow in your production chain, then this session should not be missed.
- 11:10 Workflow Optimised Storage: A Key Differentiator in High Resolution Workflows
- High resolution content is everywhere: fickle consumers with giant digital appetites demand the highest available resolution of content on an ever-multiplying array of devices with 'always on' high-speed connections. Content services are noisily adopting 'ultra high definition', and cameras, TVs and more are all moving to 4K resolution or higher. To keep up with this demand, content creators need to evaluate their entire content creation and production workflow and ask: "Can my current production workflow keep up with the pace of operation and the higher resolution content? Can every aspect of my workflow, from editing to content management to storage and delivery, meet the challenge?"
- 11:35 Using IMF for international distribution - what does that mean?
- IMF has nothing to do with the International Monetary Fund. It is an Application Specification of MXF being developed within SMPTE, with the goal of creating a delivery format that meets the business needs of shipping versioned content around a country and around the world. This presentation will be in three parts: (1) a review of IMF technology and how it works; (2) a review of the IMF workflows that could exist once an IMF ecosystem exists; and (3) some of the savings that might be realised by using IMF. The presentation will consider not only the file formats and processes, but also the preservation of multi-platform captions, metadata and media life cycles within MAM systems, as well as the benefits that can be achieved by considering versioning from the initial concept of a programme.
- 12:00 The Media Cloud Platform
- The media and entertainment industry has rapidly adopted a fully digital workflow - where all content assets and essence are held, managed and manipulated in the form of digital files. Creation of content, editing, grading, transcoding and distribution of media today is based on the use of common compute infrastructure and shared storage systems - usually managed through a layer of virtualisation. It is entirely possible to utilise "cloud technology" for these processes - but where does this make financial and operational sense? A lot of content consumption also takes place via "cloud distribution", closely integrated into social media networks and consequently capable of generating very rich information about personal preferences. This creates the opportunity for advanced marketing and content recommendation built on a foundation of big data and analytics. This presentation will explore the business and technology aspects of cloud technology; comparing different architecture options deployed as public, private or hybrid clouds.
- 12:25 IP: Opening up New Perspectives for Live Broadcast Production
- IP's march to ubiquity is well on its way. IP is here to stay and is being adopted in all parts of the broadcast workflow, opening up new challenges as well as a wider palette of possibilities for live broadcasters to produce and deliver better stories faster to viewers. How can traditional and emerging IP systems co-exist? How best to make the transition? As a leader in live video production systems, we'll cover everything from infrastructure (the transition from SDI to IP) to live production models, content aggregation, remote production, and the application of data centre and virtualisation principles. We'll give context to the current landscape and show how IP must be seen as an enabler bringing more power, scalability, flexibility and economies of scale and profitability to live production.
This session takes a detailed look at the audio issues of today: should audio be de-embedded from SDI, what does the new AES67 standard deliver, and how can we utilise heterogeneous audio networks? We close the session with a case study of how one broadcaster overcame their issues to merge multiple broadcast groups.
- 11:10 Ungluing Audio and Video - How Audio over IP Enables the Future
- After decades of work to successfully carry audio and video over the same serial digital interface (SDI), arguably the time has come to split them apart again. Audio embedding and de-embedding was never perfect and remains limited; carriage of metadata, which will become more essential with new services, is inaccurate and complex; and, unbelievably, lip sync issues are worse than ever before. Further, channel-based audio is heading for replacement by carriage of the objects that make up the channels, enabling flexibility and enhanced consumer experiences for broadcast and OTT services. SDI is at heart a video format, and it cannot support the future of audio. Alarmingly, the future is ringing the front bell today. AES67, Livewire+ and related standards offer a path to making all of this work – including lip sync! AES and SMPTE are working together, and the results will enable the sub-sample-accurate linking of Audio over IP (which has existed in radio for over a decade and is growing in TV) and video, all without requiring them to be glued together until the very end.
- 11:35 Managing Heterogeneous Networks in Broadcast Audio Production Systems
- Broadcast audio production systems are typically centred around sound mixing consoles. Modern consoles often consist of several components including the mixing control surface, processing 'core', and input/output (i/o) frames providing connectivity to audio devices such as microphones, playout systems and routers. Many installations require the sharing of sources between multiple consoles serving multiple studio spaces, and the interconnections between audio devices, i/o frames and cores can range from traditional analogue to Audio over IP. This paper considers the challenges posed by such heterogeneous audio networks, and looks at possible solutions offering ease of use and reliability to the end user.
- 12:00 How AES-67, the new audio-over-IP standard, will bring the convergence of telecommunications, radio and television broadcast studio audio, and intercom
- Traditionally, due to practical technical limitations, the audio quality of telecommunications and intercom systems was not as high as studio audio, and applications requiring long-distance, high-quality audio required specialised provisioning and equipment in parallel to the existing telecommunications systems. IP computer networks have long since erased the distinction between the local LAN and global networks, and as high-speed wide area networks (WANs) with better performance and reliability have come online, the possibility of erasing the difference between local and remote audio has presented itself. AES67 is the protocol designed to take advantage of this capability for audio. Using AES67, the audio quality of telecommunications and intercommunications can be the same as in-studio audio, and furthermore the systems can be directly interconnected using a single interoperability protocol. This shift is more significant than merely eliminating the economic redundancy of parallel systems: it fundamentally enables new workflows, coordinating and combining the efforts of production staff and talent in geographically combined or diverse locations. Audio traffic is no longer just communication; it can be contribution as well. Combining the ease of making a connection like a phone call and the practically unlimited routing flexibility of the network with the pristine high fidelity of digital studio audio, AES67 brings the convergence of telecom, radio and television studio audio, and intercom.
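At the packet level, an AES67 stream is linear PCM carried in RTP over UDP, and its sizing follows directly from the sample rate and packet time. A small sketch (the 48 kHz / L24 / 1 ms figures reflect the standard's baseline profile; the function itself is our own illustration):

```python
def aes67_packet(sample_rate_hz: int = 48000,
                 packet_time_ms: float = 1.0,
                 channels: int = 2,
                 bytes_per_sample: int = 3):
    """Samples per packet and RTP payload bytes for a linear-PCM AES67 stream.

    Defaults reflect AES67's baseline profile: 48 kHz, L24 (3 bytes/sample),
    1 ms packet time; shorter packet times (e.g. 125 us) trade bandwidth
    efficiency for lower latency and are negotiated between devices.
    """
    samples = round(sample_rate_hz * packet_time_ms / 1000)
    return samples, samples * channels * bytes_per_sample

# Stereo L24 at the 1 ms default: 48 samples -> 288 payload bytes per packet,
# before the 12-byte RTP header and UDP/IP overhead are added.
```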
- 12:25 Radio Microphones in the Digital Age
- As a result of governments around the world selling off sections of the UHF spectrum for cellular telephony, radio microphones have needed to evolve. Digital transmission technology offers audio performance benefits, but there are tradeoffs that need to be made. This paper looks at the spectrum available for radio microphones, and the analog and digital modulation techniques that can be used to exploit that spectrum. The benefits and shortcomings of both analog and digital are examined against the practical environment of a major live production: the Eurovision Song Contest.
Thursday, July 16, 14:00 - 15:15
This session will look at the emerging IP technologies, how this impacts your contribution networks and internal infrastructure and how the all IP facility of the future may solve these problems.
- 14:00 Video Internetworking
- This presentation discusses how emerging technologies such as SDN, NFV, Cloud and OTN will integrate to support current and future video business requirements. We will explore the potential and the possibilities whilst also investigating the substantial technical challenges that need to be overcome. The presentation will centre on contribution video networks and expand into the adjacent platforms and technologies through to distribution to develop an end to end appreciation of the landscape.
- 14:25 The All IP-Facility
- This paper covers the reality of an all-IP facility. We have already seen the transition to IP occur in multiple places: first, signals entering the facility moved from ASI to IP; next came the change from a tape workflow to a tapeless one, where files move over IP from ingest to playout; and finally, the entire core, including production, has been moving from SDI to IP. This transition to IP is necessary in order to meet the new challenges broadcasters face, including Ultra HD and distribution to multiple platforms.
- 14:50 IP-Based Signal Flows in the Broadcast Facility: The Architecture of Software Defined Networking
- Traditionally, the signal distribution infrastructure in a broadcast facility centres on SDI distribution, routing and processing. SDI was introduced commercially in 1986 and remains a format confined to the professional television industry, leaving its economics and pace of innovation relatively constrained. By comparison, the IT/IP industry has evolved Ethernet technology almost exponentially, driven by a much larger, and highly competitive, IT economic ecosystem. So how do we leverage the scalability, flexibility and economics of IP signal flows in the broadcast facility?
If you have heard about the broader issues associated with Ultra HD earlier in the conference schedule and are interested in gaining a much deeper understanding, then this seminar is essential. Presented over half a day, this session will dive much deeper into the detail of UHD and how to implement it in the real world.
- 14:00 UHD Essentials Seminar - Part 1
- In the first part of the seminar, we explore what will make a compelling UHDTV experience and what technology we have to deliver that experience. We will then dive into some of the physiological and technical aspects, including how we resolve detail, how we see color and how blur is created and controlled within the chain. The first part of the seminar covers more theory and principles than the second part. We will cover the capture and display of light and the basics of OETF and EOTF (opto-electronic and electro-optical transfer functions). Finally, we will look at advanced audio systems and what they need to deliver in a UHDTV world.
Thursday, July 16, 15:45 - 17:00
As our services evolve, we are learning to deal with new issues. This session will review today's issues, including broadcast subtitles across web-based delivery, the WAN transport architecture used in sports production, and how this large volume of content can be archived.
- 15:45 Subtitles on the web - ensuring consistency across broadcast and IP platforms
- Ever more content is being delivered over IP connections and viewed on non-broadcast devices such as computers, tablets and mobiles. Yet the position of standards for subtitling on such platforms is confusing to say the least. Proprietary implementations are commonplace, leading to inconsistencies that can mar the viewing experience. In a conventional broadcast environment, subtitles or captions are delivered as part of a data stream using one of the major standards (DVB, ATSC or ISDB). As new video services increasingly move away from broadcast television, however, decoding and display are performed by a range of browsers and media players; there is far less standardisation, and a variety of strategies for handling captioning has emerged. For instance, US (Line 21) closed captioning carried in the video signal is supported natively in some players, while European formats and international character sets are less well served, and a variety of timed-text, side-file solutions have emerged (e.g. SMIL, SRT, SAMI, iTT, WebVTT, TTML and DFXP). This has led to a situation where support for individual formats, and the quality and consistency of the displayed subtitles, varies between media players and delivery formats. The inconsistencies may manifest themselves in fonts and languages, text size and colour, as well as position. In the worst case, subtitles may vanish if fonts do not support the required international character set. Hard-of-hearing viewers may rely on colour or position to indicate a change of speaker, to assist with following the dialogue, or to distinguish a sound effect from spoken text. A loss of consistent position can also cause issues in a multilingual market, where a translation that was carefully positioned in the broadcast version no longer fully obscures the original text, creating confusion for the reader and a risk of cultural offence.
In advance of the W3C's timed-text group working on Internet Media Subtitles and Captions (IMSC-TTML), Screen has developed an innovative solution to this challenge using pre-rendered images displayed by a simple Java plug-in for the media player. This paper assesses the challenges involved in presenting broadcast-quality subtitles on web video players, proposes a workable solution for current implementation, and looks at the evolution of standardisation in the near future.
- 16:10 An Analysis of WAN Transport in an End-to-End Live Streaming Platform for Second Screen Combining Multi-Camera Capture, High Speed Transport, and Cloud Video Processing
- The 2014 World Cup introduced the first large-scale system for high-resolution, end-to-end live streaming to second screens. The system (by EVS) delivered premium live coverage online and on mobile devices. "Second screen" is not new to global sport, but this architecture was a first: live video feeds captured from multiple camera angles were transferred in real time using high-performance WAN transport (Aspera) from Brazil to the cloud in Europe (AWS) for real-time processing into multiple protocols through a scale-out cloud video platform (Elemental). We will describe the architecture and APIs of the WAN transport that ensured timely and reliable delivery of the live video feeds, the auto-scaling software supporting the availability requirements for the transport load, and the challenges posed by cloud storage delays in this real-time environment. Statistics from the event, output by a new analytics platform, will characterise performance, network conditions, and usage of the platform, which ingested over 30 TB of live video. Finally, we will introduce a formal model and benchmark results for a new byte-stream transport, showing how its distance-neutral efficiency allows live and near-live video delivery with minimal start-up delay and a glitch-free play-out experience over commodity global Internet WANs.
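A back-of-envelope calculation shows why a distance-neutral transport matters on a Brazil-to-Europe path. Standard TCP throughput is bounded by round-trip time, as captured by the well-known Mathis approximation; the figures below (packet size, RTTs, loss rate) are illustrative assumptions, not measurements from the event:

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss):
    """Mathis et al. approximation of steady-state TCP throughput:
    rate ~ MSS / (RTT * sqrt(p)).  Throughput falls linearly as RTT grows,
    so the same loss rate that is harmless on a LAN cripples a long WAN path."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss))

# 1460-byte segments at 0.1% loss: intercontinental (~200 ms RTT) vs LAN (~1 ms RTT)
wan = tcp_throughput_bps(1460, 0.200, 0.001)
lan = tcp_throughput_bps(1460, 0.001, 0.001)
print(f"TCP over WAN: {wan / 1e6:.1f} Mbit/s; over LAN: {lan / 1e6:.0f} Mbit/s")
```

A rate-based transport that does not couple its sending rate to RTT sidesteps this collapse, which is the "distance-neutral efficiency" claimed for the byte-stream transport in the abstract.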
- 16:35 Extending Object Storage to Tape
- As the amount of stored content increases at an accelerating rate, protecting and preserving this content requires effective, failure-resistant and enormously scalable storage. Object-based storage has emerged as a way to build durable storage systems that can scale to billions of objects and vast storage capacities while maintaining acceptable latencies for data access. We will review the extension of object storage into magnetic tape archives, enabling 100-year archive solutions while providing support for long-tail content access.
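The core idea of extending an object store with a tape tier can be sketched in a few lines: objects are addressed by key, stored with a checksum, and migrated transparently between tiers. This is a toy model invented for illustration (the class and method names are hypothetical), not the interface of any real product:

```python
import hashlib

class TapeTieredObjectStore:
    """Toy sketch of a tiered object store: a hot disk tier and a cold
    tier standing in for a tape archive.  Keys resolve in either tier,
    so tape-resident objects remain accessible for long-tail requests."""

    def __init__(self):
        self.disk = {}  # hot tier: key -> (data, checksum)
        self.tape = {}  # cold tier: simulates an LTO/LTFS tape archive

    def put(self, key, data: bytes):
        # Objects are immutable; a checksum is kept for integrity checks.
        self.disk[key] = (data, hashlib.sha256(data).hexdigest())

    def migrate_to_tape(self, key):
        # A real system would write to tape media; here we just move tiers.
        self.tape[key] = self.disk.pop(key)

    def get(self, key) -> bytes:
        # Tape access is high-latency but transparent to the caller.
        data, checksum = self.disk.get(key) or self.tape[key]
        assert hashlib.sha256(data).hexdigest() == checksum  # verify on read
        return data
```

The end-to-end checksum on read is what lets such a system make 100-year preservation claims credible: media can be scrubbed, verified and rewritten over decades without silent corruption going unnoticed.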
If you have heard about the broader issues associated with Ultra HD earlier in the conference schedule and are interested in gaining a much deeper understanding, then this seminar is essential. Presented over half a day, this session will dive much deeper into the detail of UHD and how to implement it in the real world.
- 15:45 UHD Essentials Seminar - Part 2
- The second part of the seminar looks at some of the practical aspects of implementing the theory in the real world. We look at data transfer, storage, computing, compression, frame rates and color management in a mature SD, HD, UHD value chain. We explore some of the issues and look at where more standards might be needed to create a global system. We will look at some individual proposals for EOTF and compression as well as looking at details of HEVC video and MPEG-H audio encoding. We will present some information from 2014 UHDTV broadcasts to highlight the practicality of today's productions and will finish with details of the SMPTE UHDTV online training course where you can learn in more detail the technology overviewed in this seminar.