Program for SMPTE 2014 Annual Technical Conference & Exhibition

Tuesday, October 21

07:30 Morning Coffee (Ray Dolby Ballroom Terrace)
08:30 Welcome and Introduction (Salon 1)
08:45 Opening Keynote Speaker (Salon 1)
09:45 Break (Ray Dolby Ballroom Terrace)
10:15 Networked Media in the Facility-Part 1 (Salon 1); Video Compression (Salon 2)
12:30 Industry Luncheon (Ticket Required) (Hollywood Ballroom)
14:15 Networked Media in the Facility-Part 2 (Salon 1); Developments in Audio Technology, Part 1: Tools for Immersive Audio (Salon 2)
15:45 Coffee Break (Exhibit Hall)
16:15 Networked Media in the Facility-Part 3 (Salon 1); Developments in Audio Technology, Part 2: Delivering on the Promise (Salon 2)
18:00 Welcome Reception (Exhibit Hall)
20:00 Student Film Showcase (Salon 1)

Wednesday, October 22

07:30 Morning Coffee (Ray Dolby Ballroom Terrace)
08:30 File Based Workflows - Part 1: Tools of the Trade - Conversion, Captions and Compression (Salon 1); Dammit, Gamut, I Love You! (Salon 2); Cinema Workflow - A Brief Moment in Time (Theatre (Chinese 6))
10:30 Coffee Break (Exhibit Hall)
11:00 File Based Workflows - Part 2: Meaningful Media Management - Taking Us to the Second Screen and Beyond! (Salon 1); Higher Frame Rates (Salon 2)
12:30 Fellows Luncheon (Fellows Only-Ticket Required) (Solano Canyon)
14:15 Cloud case studies - the reality of virtualisation (Salon 1); Display Technologies: Where Do We Go From Here (And How Do We Measure What We've Already Got?) (Salon 2)
15:45 Coffee Break (Exhibit Hall)
16:15 Developments in Audio Technology, Part 3: Diving into the Details (Salon 1); UHDTV: Building The Plane In Flight (Salon 2)
18:00 Annual Membership Meeting (Salon 1)

Thursday, October 23

07:30 Morning Coffee (Ray Dolby Ballroom Terrace)
08:30 Asset Management-Part 1 (Salon 1); IP Streams: Control, Monitoring and Production (Salon 2); Advancements in Theatrical Display (Theatre (Chinese 6))
10:30 Coffee Break (Exhibit Hall)
11:00 Asset Management-Part 2: Standards for Archives & Production Workflows (Salon 1); Content Accountability, Tracking and Protection (Salon 2)
12:30 Boxed Lunch (Ticket Required) (Exhibit Hall)
14:00 Image Processing Part 1: Methods for creating high quality images beyond HD (Salon 1); Evolution of Broadcast Facilities-Part 1 (Salon 2)
15:30 Coffee Break (Ray Dolby Ballroom Terrace)
16:00 Image Processing Part 2 - Reducing distortions in captured images (Salon 1); Evolution of Broadcast Facilities-Part 2 (Salon 2)
19:00 Honors & Awards Ceremony and Dinner (Ticket Required) (Hollywood Ballroom)
22:00 Afterparty and SMPTE Jam (Hollywood Ballroom)

Tuesday, October 21

07:30 - 08:30

Morning Coffee

Room: Ray Dolby Ballroom Terrace

08:30 - 08:45

Welcome and Introduction

Room: Salon 1

08:45 - 09:45

Opening Keynote Speaker

Room: Salon 1
08:45 Opening Keynote Speaker
Chris Fetner (Netflix, USA)
Director of Global Content Partners Operations
Presenter bio: Chris Fetner began his entertainment career as an Operations Manager at a local PBS affiliate in Washington, DC. He went on to serve as a producer for several local and national television programs including the award-winning medical and science show “Frontiers of Medicine”. In 2002, he joined Discovery Communications as a post-production and operations executive, seeing them through a transformation to an all-digital workflow. Before joining Netflix in June of 2012, Chris spent nine years with BBC Worldwide in New York as the Vice President of Post Production and Technical Services. In that capacity he oversaw all technical operations supporting both the cable network BBC America and their distribution to EST, VOD, and SVOD customers. At Netflix he and his team serve as the primary technical liaisons with content owners and service vendors supplying the platform. He deploys tools, processes and knowledge to make delivering high-quality content to Netflix simple and efficient.
Chris Fetner

09:45 - 10:15

Break

Room: Ray Dolby Ballroom Terrace

10:15 - 12:15

Networked Media in the Facility-Part 1

Room: Salon 1
Chair: Al Kovalick (Media Systems Consulting, USA)

This session, presented in 3 parts, focuses on using packetized methods to move media and metadata in real time over networks. Presenters will cover Ethernet and IP methods for building production and broadcast environments and consider techniques for establishing common device clocks, video sync, frame-accurate switching and AV transport over IP. A mix of detailed technology reviews, tutorials, and case studies will also be presented. Don't miss this firsthand look at the media facility of the future.

10:15 Ethernet AVB standards overview and status
Jan Eveleens (Axon, The Netherlands)
This paper provides an up-to-date overview of the IEEE Ethernet AVB (Audio Video Bridging) technology and standards. It describes the basics of the key elements of AVB (time synchronisation, bandwidth management, transport layer protocols, etc.) and why AVB is an excellent candidate for next-generation digital (live) broadcast infrastructures. Also covered in the paper is a summary of the ongoing activities in the various AVB-related IEEE working groups, including the work on the second generation of AVB (also referred to as Time Sensitive Networking or TSN). Interoperability will be a crucial factor in the success of video networking in broadcasting infrastructures, and the paper introduces the AVnu Alliance and its role in relation to AVB standards compliance testing and product certification.
Presenter bio: Jan Eveleens is the CEO of Axon (www.axon.tv). Axon is one of the world’s leading suppliers of video & audio conversion/processing/interfacing solutions and compliance recording systems. Prior to joining Axon, Jan worked for Thomson Grass Valley as General Manager for their Camera group and was a member of the Grass Valley executive team. Jan started his career with Philips Electronics, where he was active in the broadcast domain and deeply involved in D2-MAC, PALplus, HD-MAC, and DVB/MPEG-2 transmission systems as well as CA systems and watermarking solutions. Currently Jan also serves as Chair of the Members Board of the International Association of Broadcast Manufacturers (IABM, www.theiabm.org). Furthermore, he is an active member of the AVnu Alliance (www.avnu.org), where he chairs the Pro Video working group. Jan holds a master’s degree in Computer Science.
Jan Eveleens
10:45 Internet Protocol Networks in the Live Broadcast Plant
Ken Buttle (Grass Valley, USA); Sara Kudrle (Grass Valley, a Belden Brand, & SMPTE Western Region Governor, USA); Charles Meyer (Grass Valley, USA)
Data network technology has advanced to the point where packet video infrastructure can realistically be considered for different workflows. SMPTE 2022-6 provides full-bandwidth video transport in the OSI protocol stack and by doing so further enables the transition from SDI to packets. Encapsulation provides flexibility, extensibility and interoperability not otherwise available with existing SDI baseband video standards. Some argue that the overhead costs associated with encapsulation can be prohibitive for HD-SDI, and UHDTV data rates push the cost curve potentially higher. But these cost curves fall with Moore's Law, and in the long run, the flexibility of using packet video to enable intelligent workflows provides better monetization of content and extends the life of CAPEX investments. Examining the different approaches to facility signal routing and distribution demonstrates the tradeoffs between different network technologies and their suitability for various workflows. Using Ethernet AVB, which includes PTP, is one approach, and COTS IP network equipment with NTP is another. Using an IP gateway as part of system design today provides a compelling solution bridging today's SDI with tomorrow's packets. Comparing and contrasting these methods provides insight into their suitability for a given workflow, the costs, and event timelines for implementation. Understanding these timelines is critical to business planning.
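The encapsulation-overhead argument is easy to quantify. The Python sketch below estimates the on-wire cost of carrying 1.485 Gb/s HD-SDI in SMPTE 2022-6 datagrams; the 1376-octet media payload and 8-octet payload header reflect one reading of the standard, and the Ethernet framing figures assume untagged IPv4, so treat the result as a rough estimate rather than a measured number.

```python
# Rough ST 2022-6 encapsulation-overhead estimate for HD-SDI (assumed
# datagram layout; check payload sizes against the standard itself).
MEDIA = 1376                    # media octets per RTP packet (ST 2022-6)
HBRMT = 8                       # payload header octets
RTP, UDP, IPV4 = 12, 8, 20      # per-packet protocol headers
ETH = 14 + 4 + 20               # MAC + FCS + preamble/SFD + interframe gap

SDI_RATE = 1.485e9              # HD-SDI payload, bits per second

pps = SDI_RATE / (MEDIA * 8)                       # packets per second
wire = pps * (MEDIA + HBRMT + RTP + UDP + IPV4 + ETH) * 8
print(f"{pps:,.0f} packets/s, {wire/1e9:.3f} Gb/s on the wire, "
      f"{100 * (wire/SDI_RATE - 1):.2f}% overhead")   # ~6.25%
```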
Presenter bio: After graduating from the University of Toronto in 1982, I worked on high-speed data transceiver ICs and hardware for telecom companies in Ottawa, Toronto, and Sacramento. In the late '90s and early '00s, I alternated between the video and wireless networking industries, doing FPGA and hardware designs for both. I then worked for Grass Valley in the Modular group for ten years. After working for Miranda Technologies for one year, I am now back at Grass Valley.
Ken Buttle
11:15 IP Live Production
Toshiaki Kojima (Sony Corporation, Japan); John Stone (Sony, United Kingdom); Paul N Gardiner and Jian-Rong Chen (Sony Europe Ltd, United Kingdom)
SDI infrastructure has been a fundamental building block for video and audio communications within studios for many years. Meanwhile, the bandwidth of generic IP networks has continued to increase alongside falling costs, such that 10G infrastructure is now commonly available. Exploiting this high-bandwidth commodity infrastructure, an IP network could be deployed in the studio to form an IP Live Production system. This paper explores the technical requirements, design considerations and standards approaches for IP Live Production to be able to deliver business benefits compared to current SDI technology whilst retaining familiar SDI-based production practices. This paper also describes a sample implementation of an IP-based AV router showing how the discussed technologies can be applied to realize the same functionality as a conventional SDI router.
Presenter bio: Toshiaki Kojima has worked in both the AV and IT worlds over the past 32 years. He joined Sony in 1982 after graduating from Waseda University, Japan. For the first 20 years of his career he was involved mainly in the design of professional VTRs, from 1 inch to the e-VTR. In 2004 he received a Sports Emmy statuette in recognition of the role of the e-VTR when NBC was awarded a Sports Emmy for coverage of the Athens Olympics. Following the e-VTR project, he moved into R&D with the aim of replacing dedicated professional AV interfaces with those of general IP. This effort is on-going. He is an active participant in various network-related industry and standardisation bodies, including FIMS, the Video Services Forum (VSF), SMPTE and the Joint Task Force on Professional Networked Streamed Media (JT-NM). He holds 20 Japanese and international patents, and is an MIT Fellow.
Toshiaki Kojima
11:45 IP for contribution broadcasting: the next step of IP ubiquity
Chin Chye Koh (Nevion USA, USA)
While many rights owners and broadcasters now embrace IP for content distribution, the road has been rockier for IP's adoption for content contribution and production. The enormous bandwidth needed to carry high-quality live content and the inherent "best-effort" principle underlying the technology seem incompatible with today's need for real-time, no-downtime content transport. But massive affordable bandwidth is now available, and technology exists to overcome IP's risks and limitations, enabling IP's economies of scale, built-in flexibility, lower network operating and capital costs, and the ability to push more content. This paper will explore the specifics behind IP infrastructures that include built-in service provisioning, connection management, service analytics, network inventory, and fault-, configuration- and performance-management functions. How to create IP networks—encompassing hardware and software components—that include monitoring and management for resilient, reliable and easily managed systems will be detailed. The paper will also look at specific live-event applications.
Presenter bio: Chin Chye Koh holds a Ph.D. and M.Sc. in Electrical and Computer Engineering from the University of California Santa Barbara for work on the perception of visual quality in relation to image and video compression. He received his B.Sc. degree in Electrical Engineering from Washington State University. As Senior Solutions Architect at Nevion USA, he has responsibility for the development of system solutions primarily focused on contribution video transport in managed media networks. Prior to his position as Solutions Architect, Dr. Koh was Product Manager for the Ventura line of modular video transport solutions and before that, Member of Technical Staff responsible for algorithm research and development for video compression and transport solutions. His post-graduate work included positions at Intel Corporation in Arizona and Philips Research in The Netherlands. Dr. Koh was also a research and development engineer at Pepperl+Fuchs, Singapore, where he developed sensor modules for factory automation.
Chin Chye Koh

Video Compression

Room: Salon 2
Chair: John P Maizels (Entropy Enterprises, Australia)

How do you know when a signal has been successfully compressed? Easy: you take some measurements and conclude that you've packed more information into less space and nobody has noticed. That's the magic. In this session four experts give us different views of what it takes to get the best from a payload, and we learn that it's as much art as it is science.

10:15 Perceptual Video Quality Analysis for HEVC in a Packet Loss Environment
Bhupender Kumar (Interra Systems, USA); Shekhar Madnani (Interra Systems, India); Advait M Mogre (Interra Systems, USA); Shailesh Kumar and Muneesh Sharma (Interra Systems, India)
HEVC provides a significant increase in compression efficiency over legacy standards like MPEG-2 and AVC. As is well known, an increase in compression efficiency typically results in decreased coding redundancy and, subsequently, a vulnerability to error propagation when a coded stream is decoded under packet loss (PL) conditions. Incorporating a perceptual Video Quality (VQ) model and analyzing VQ over a range of bit rates, PL profiles and a variety of content is therefore of interest. Furthermore, selectively managing the newer HEVC data-structure parameters, such as Tiles, to optimize VQ in the aforementioned environment would also be valuable.
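To make the failure mode concrete, here is a minimal, self-contained Python sketch (not the authors' VQ model): it conceals a lost slice by copying the co-located rows of the previous frame, then scores the damage with plain PSNR. A perceptual model of the kind described would replace the PSNR line.

```python
import numpy as np

def psnr(ref, deg):
    # Peak signal-to-noise ratio for 8-bit frames.
    mse = np.mean((ref.astype(np.float64) - deg) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def conceal_lost_slice(prev, cur, row0, rows):
    # Crude error concealment: repeat the co-located slice of the
    # previous decoded frame in place of the lost slice.
    out = cur.copy()
    out[row0:row0 + rows] = prev[row0:row0 + rows]
    return out

# Synthetic illustration: moving content makes the concealed region
# diverge, and the error propagates if later frames predict from it.
prev = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
cur = np.roll(prev, 8, axis=1)                 # simulated motion
damaged = conceal_lost_slice(prev, cur, 512, 64)
print(f"PSNR after one lost slice: {psnr(cur, damaged):.1f} dB")
```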
Presenter bio: Bhupender Kumar is a Senior Engineer at Interra Systems, where he is involved in research and development of advanced techniques for automated assessment of video quality. He has developed algorithms for detection of artifacts due to spatio-temporal discontinuities in video, such as video dropouts, defective pixels, blockiness, field-order issues, etc. His research interests include no-reference video quality estimation and image/video processing techniques using computer vision, pattern recognition and machine learning. He received his B.Tech. in Electronics and Communications Engineering from the National Institute of Technology, Kurukshetra, India.
Bhupender Kumar
10:45 Improving Video Streaming and File Compression Efficiency without Affecting Quality
Yves Faroudja (Faroudja Enterprises, USA)
From archiving and cloud storage to video-on-demand, and from production to distribution, bandwidth can be optimized while preserving video quality in a manner satisfactory to the most demanding content owners and viewers. This paper will provide a snapshot of how the efficiency of digital video compression systems may be improved through the use of a pre-processor (before compression) and a post-processor (after compression decoding). The system may include a support layer in parallel with the conventional compression path. The scheme complements conventional compression standards such as MPEG-2, MPEG-4 Video, and HEVC without requiring modifications to the standard codecs. Readers will learn how available network bandwidth can be increased while preserving image quality. Test results will be provided that demonstrate a reduction of bit rates by 35% to 50% using any existing compression system, via the use of processing technologies.
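The paper's actual processing is not disclosed here, so the following Python sketch shows only the architecture it describes: a pre-filter ahead of an unmodified standard encoder and a restoring post-filter after the decoder. The specific filters are placeholders, not the paper's technology.

```python
import numpy as np

def preprocess(frame):
    # Placeholder pre-filter: a mild horizontal low-pass that tames
    # detail which is expensive to encode. (Illustrative only.)
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    return np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, frame)

def postprocess(decoded):
    # Placeholder post-filter: unsharp masking to restore apparent
    # sharpness after decoding. (Illustrative only.)
    softened = preprocess(decoded)
    return np.clip(decoded + 0.5 * (decoded - softened), 0.0, 255.0)

# Usage around any unmodified codec (hypothetical codec calls):
#   bitstream = standard_encode(preprocess(frame))
#   output    = postprocess(standard_decode(bitstream))
```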
Presenter bio: Yves Faroudja has been a major contributor to advanced video technologies for broadcast, digital cinema, and home theater. He has been granted over 75 patents and received three Emmy® Awards, including the Charles F. Jenkins Lifetime Achievement Emmy Award. A Fellow of the Society of Motion Picture and Television Engineers, Yves was awarded the David Sarnoff Gold Medal and has been the recipient of many other honors. In addition to founding Faroudja Enterprises, he has served on the board of directors for several imaging technology companies.
Yves Faroudja
11:15 State of HEVC Bit Rates in 2014 - Comparing HEVC, H.264 and MPEG-2
John Pallett (Telestream, Inc., USA)
HEVC (also known as H.265) promises to cut bit rates by 50% compared to H.264. But what bit rates are appropriate for different frame sizes, and how does quality compare to H.264 and MPEG-2 in the real world? This paper will provide new and practical data comparing HEVC with MPEG-2 and H.264. It will explore what HEVC bit rates are necessary to achieve quality similar to common H.264 and MPEG-2 distribution formats, measured across sports, news and movies. Measurements from hundreds of test encodes will be combined with an analysis of the features of HEVC, to create recommended bit rates and settings for SD, HD and Ultra HD encoding. This paper continues research presented at NAB 2014, and will present entirely new data for 2014 using the latest encoder technology. This paper will also go beyond SD and HD, with findings and recommendations for Ultra HD encoding.
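As a quick feel for what "half the bit rate" means in practice, the sketch below applies the 50% rule of thumb from the abstract to an illustrative H.264 ladder; the ladder values are round numbers chosen for illustration, not the paper's measured recommendations.

```python
# Illustrative H.264 rates (Mb/s); the 0.5 factor is the rule-of-thumb
# HEVC saving cited in the abstract, not a measured result.
h264_mbps = {"SD (480p)": 2.5, "HD (1080p)": 8.0, "Ultra HD (2160p)": 32.0}

for tier, rate in h264_mbps.items():
    print(f"{tier}: H.264 {rate:5.1f} Mb/s -> HEVC ~{0.5 * rate:.1f} Mb/s")
```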
Presenter bio: John Pallett, Director of Product Marketing at Telestream, has over 14 years' experience developing and managing computer graphics and digital media software applications for entertainment, CAD/CAM, and 3D design applications. John is a frequent speaker at NAB technical presentations and SMPTE meetings. He holds an M.B.A. from the University of California at Berkeley, and a Bachelor's degree in Computer Science from the University of Waterloo, Canada.
John Pallett
11:45 HDR HEVC Encoder
Raul Diaz and Sam Blinstein (Vanguard Video LLC, USA); Sheng Qu (Dolby Laboratories, USA)
This paper describes the challenges and capabilities of an implementation of a high dynamic range layered codec approach, known as "Dolby Vision," for HEVC Main10 4K content for top-tier OTT/VOD movie distributors. While Dolby Vision is already designed to support both AVC (H.264) and HEVC (H.265), the current tools are designed for batch-mode testing with reference-level encoders that generally do not meet the performance and robustness demands of an automated production system. To create a viable production system, particular attention must be paid to optimization and parallelization of the HEVC pipeline to create the final dual-layer streams. The production HDR HEVC encoder must also be integrated into the client encoder system application infrastructure, preserving the client's robustness, testability and quality requirements. The final streams must be tested in a similarly performant decoder platform for manual and automated test and verification. Because Dolby Vision is a novel technology that requires specialized hardware for viewing, careful consideration must be given to the visual verification process, with a solution that offers powerful viewing tools and defect testability. Finally, the automated decoder verification must test both the backwards-compatible base layer and the high dynamic range enhancement layer. This paper will cover these issues and the technical solutions that were implemented to address them.
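For readers new to dual-layer HDR coding, the Python fragment below shows only the general shape of a layered reconstruction: a prediction function (carried as metadata) maps the backward-compatible base layer toward HDR, and the enhancement layer supplies the residual. The polynomial predictor and its coefficients are invented for illustration; Dolby Vision's actual composer metadata and transfer functions are specified by Dolby.

```python
import numpy as np

def reconstruct_hdr(base, residual, poly):
    # base: decoded base-layer pixels in [0, 1]
    # residual: decoded enhancement-layer residual
    # poly: predictor coefficients from metadata, highest order first
    predicted = np.polyval(poly, base.astype(np.float64))
    return np.clip(predicted + residual, 0.0, 1.0)

# Hypothetical metadata: HDR ~ 0.2*b^2 + 0.75*b + 0.05 plus residual.
base = np.linspace(0.0, 1.0, 5)
residual = np.zeros_like(base)
print(reconstruct_hdr(base, residual, [0.2, 0.75, 0.05]))
```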
Presenter bio: Raul Diaz has worked in video compression for over 25 years and has numerous patents in the field. With experience in semiconductors, software and corporate management, he has headed up large R&D teams, helped take a startup public and sold his own company. He heads Vanguard Video developing advanced video codecs used by major companies around the world. Raul has a Bachelor of Science in Electrical Engineering from Yale University.
Raul Diaz

12:30 - 14:00

Industry Luncheon (Ticket Required)

Room: Hollywood Ballroom
12:30 Industry Luncheon
Mark A. Aitken (Sinclair Broadcast Group, USA)
Vice President of Advanced Technology
Presenter bio: Mr. Aitken joined the Sinclair Broadcast Group (SBG), Baltimore, MD, in April of 1999. He is currently responsible for representation of the group's interests in industry technical and standards issues and DTV implementation (HDTV and Mobile), and represents SBG within ATSC, OMVC, Mobile500 and other industry-related organizations. Mr. Aitken is the Chairman of ATSC TSG/S4, the specialist group responsible for Mobile DTV (Mobile/Handheld) standardization, and has been involved in the broadcast industry's migration to advanced services since 1987, when he first became involved with the FCC's ACATS (Advisory Committee on Advanced Television Service) activities. Prior to his involvement with SBG, Mr. Aitken was employed by the COMARK Division of Thomcast (Thomson Broadcast). He held many diversified positions within the organization, including Manager of the Systems Engineering, RF Engineering and Sales Engineering groups, as well as Director of Marketing and Sales Support, which included DTV strategic planning responsibilities. While with COMARK, Mr. Aitken was part of the "Emmy Award Winning Team" that revolutionized the broadcast industry by bringing IOT technology to the marketplace. Mr. Aitken is a member of the AFCCE, IEEE and SMPTE, and serves as a member of the Technical Advisory Group with the Open Mobile Video Coalition (OMVC). He is the author of many papers dealing with innovative RF product developments, advanced digital television systems design and related implementation strategies, holds patents for various RF devices, and was a recipient of the "Broadcasting and Cable" Technology Leadership Award in 2008.
Mark A. Aitken

14:15 - 15:45

Networked Media in the Facility-Part 2

Room: Salon 1
Chair: Al Kovalick (Media Systems Consulting, USA)

This session, presented in 3 parts, focuses on using packetized methods to move media and metadata in real time over networks. Presenters will cover Ethernet and IP methods for building production and broadcast environments and consider techniques for establishing common device clocks, video sync, frame-accurate switching and AV transport over IP. A mix of detailed technology reviews, tutorials, and case studies will also be presented. Don't miss this firsthand look at the media facility of the future.

14:15 The Fundamentals of the Professional Networked Media Ecosystem
Al Kovalick (Media Systems Consulting, USA)
Traditional SDI/AES3 transports have served the media industry well for 20+ years. However, they are not IT- or cloud-friendly. Ethernet/IP is a worldwide standard and able to replace SDI/AES3 with many added benefits. This paper will outline new dimensions of AV streaming and transport using Ethernet/IP. The focus will include: lossless transport methods, compressed essence tradeoffs, physical layer choices, protocol stacks, push versus pull streams, timing and alignment, methods for frame-accurate stream splicing, discovery/identity, virtual media bundles, mixed media networks and more. Current standards including ST 2022 and AVB will be reviewed, along with industry work towards best practices, standards and an interoperable future.
Presenter bio: Al Kovalick has worked in the field of hybrid AV+IT systems for the past 20 years. Previously, he was a digital systems designer and technical strategist for Hewlett-Packard. While at HP, he was a principal researcher and architect for a new product-class of signal synthesizer. He was also the principal architect of HP’s first VOD server. Following HP, from 1999 to 2005, Al was the CTO of Pinnacle Systems. After Avid acquired Pinnacle, Al served as an Enterprise Strategist and Fellow for six years. In 2011, Al founded Media Systems Consulting in Silicon Valley. His work focuses on all aspects of networked media systems, file-based workflows and cloud migration for media facilities. Al is an active speaker, educator, author and participant with industry bodies including SMPTE. He has presented over 50 papers at industry conferences worldwide and holds 18 US and foreign patents. In 2009 Al was awarded the David Sarnoff Medal from SMPTE for engineering achievement. Al has a BSEE degree from San Jose State University and MSEE degree from the University of California at Berkeley. He is a life member of Tau Beta Pi and a SMPTE Fellow. Al writes the Cloudspotter's Journal column for TV Technology magazine.
Al Kovalick
14:45 Can COTS Ethernet Switches Handle Uncompressed Video?
Thomas Edwards (FOX Networks Engineering and Operations, USA); Brian Keane (Aperi Corporation, USA)
The carriage of real-time video over Ethernet networks promises significant benefits to the broadcast industry. But there has been some concern about whether commercial-off-the-shelf (COTS) Ethernet switches can meet broadcast quality of service (QoS) requirements. This paper describes the results of a range of large scale static and dynamic tests of Ethernet switches using packet flows that are representative of uncompressed HD video carried using SMPTE 2022-6, looking specifically at packet loss, packet re-ordering, latency, and packet delay variation (PDV). Flow test generators and analyzers include a new flexible FPGA architecture controlled using a RESTful API.
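Packet delay variation is the least familiar of the four metrics, so here is one standard way to compute it, in Python: the running interarrival-jitter estimator in the style of RFC 3550, fed with send and receive timestamps of the kind the test generators described here would log. This is a generic formula, not the paper's specific analyzer.

```python
def rfc3550_jitter(send_ts, recv_ts):
    # Running interarrival jitter (RFC 3550 style): d is the difference
    # in packet spacing between receiver and sender; the estimate moves
    # 1/16 of the way toward |d| on each packet.
    j = 0.0
    for i in range(1, len(send_ts)):
        d = (recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1])
        j += (abs(d) - j) / 16.0
    return j

# Example: a 20 us spacing disturbance on the third packet (seconds).
send = [0.0000, 0.0001, 0.0002, 0.0003]
recv = [0.0010, 0.0011, 0.00122, 0.0013]
print(f"jitter ~ {rfc3550_jitter(send, recv) * 1e6:.2f} us")
```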
Presenter bio: Thomas Edwards is Vice President, Engineering & Development at FOX Networks Engineering & Operations, where he has worked on advanced technology projects such as mobile DTV, 3D, and the FOX network satellite distribution system. Previous to joining FOX in 2007, he was Senior Manager, Interconnection Engineering for the PBS Interconnection Replacement Office, where he was responsible for the engineering planning of the PBS Next Generation Interconnection System (NGIS). He also has had significant experience with streaming media production and delivery at the Internet service provider DIGEX and the IP-over-satellite company Cidera. Edwards has contributed to the Report of the SMPTE Task Force on 3D to the Home, the NAB Engineering Handbook, and the SMPTE/EBU/VSF Joint Task Force on Networked Media. He holds a Master’s Degree in Electrical Engineering from the University of Maryland, and is a member of IEEE and SMPTE.
Thomas Edwards
15:15 Next Gen Post Production workflows and enabling infrastructure
Brinton Miller (Discovery Communications, USA); Ammar Latif (Cisco Systems, USA); Christian Malone (Discovery Communications, USA)
With the explosive growth in media creation and consumption, traditional post-production environments are being challenged to accommodate the demands of digital video production, where common content targets different markets and various consumption methods. This paper will discuss emerging post-production workflows focused on automated content creation, metadata usage in asset tracking, and automation of content sourcing and editing. We will then discuss approaches to building the underlying infrastructure as the enabler for workflow consolidation, increasing agility while maintaining efficiency and performance. We will describe an architecture based on x86 stateless computing, IP technologies and open standards that enables dynamic allocation of resources and significantly increases workflow efficiency and performance, while providing a strategic path for scalability and cloud deployments.
Presenter bio: Ammar Latif is a systems engineer with the Service Provider Media team at Cisco Systems. His current focus is on IP architectures for digital media workflows in the content-provider space. Ammar has also supported a number of large service provider networks in North America with a focus on advanced IP routing technologies. Ammar is a member of SMPTE. He has a Master of Engineering degree from the University of Toronto and holds CCIE certifications.
Presenter bio: Christian Malone manages strategic and innovative media engineering projects for Discovery’s Global Technology & Operations organization. Supporting Discovery's Media Engineering teams in architecting large-scale media technology systems, Christian also works to improve operations through designing monitoring and systems management solutions. Before Discovery, Christian worked for Apple building their Broadcast & New Media Integrator program and for an integrator designing systems for major broadcast and post production clients. Christian first learned high performance computing and networking while supporting super computers and automated archive tape libraries for NASA's Center for Computational Sciences.
Presenter bio: Brinton Miller is responsible for the media engineering teams at Discovery's domestic production facilities that provide post-production, content distribution and creative technical services to Discovery's worldwide businesses. Brinton's primary focus is the design and deployment of large-scale media technology solutions to support Discovery's linear and non-linear businesses. Brinton manages a number of teams in Discovery's Media Engineering organization. The Frontline Engineering team provides support for Discovery's technical operations in Silver Spring, New York and Los Angeles. The Project Management and Engineering group is a design, integration and project management team that supports projects across all of Discovery's global technical facilities. The Broadcast Network Engineering team is focused on the design and support of Discovery's large-scale media networks, storage environments and media software systems.
Ammar Latif, Christian Malone, Brinton Miller

Developments in Audio Technology, Part 1: Tools for Immersive Audio

Room: Salon 2
Chair: Jerry C Whitaker (Advanced Television Systems Committee, USA)

Immersive audio has emerged as a powerful force in storytelling. The addition of spatial information enables new creative possibilities that can provide a greater sense of immersion and a higher level of reality to the cinema experience—and before long, to the home as well. This session examines some of the fundamental elements that go into immersive audio, including sound scene description, management of complex sound scenes, and manipulation of object-based sound elements. The techniques and developments described will propel the "suspension of disbelief" that is at the center of the cinema experience.

14:15 Cinematic Sound Scene Description and Rendering Control
Charles Robinson and Nicolas Tsingos (Dolby Laboratories, USA)
Surround sound has been making cinematic storytelling more compelling and immersive for over 30 years. Recently, a new format for distributing and rendering surround sound has been deployed that gives the sound mixer and director new ways to express their story through sound. The format carries audio elements as well as parameters (metadata) that embody the artistic intent by specifying the translation from the audio elements to loudspeaker signals. In this paper, we present the underlying model for sound scene description and describe how metadata can be used to control its rendering. We illustrate the practical value and application of the format through analysis of recent movie soundtracks.
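To give a feel for what "metadata controlling rendering" means, here is a deliberately tiny Python example: one object panned between two loudspeakers by a position parameter, using a constant-power law. Real cinema renderers of the kind discussed here handle full 3D positions, many speakers and far richer scene description; everything below is a toy.

```python
import math

def pan_gains(position):
    # Constant-power pan of one audio object between two loudspeakers.
    # position: 0.0 = fully left speaker, 1.0 = fully right speaker.
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)   # (gain_L, gain_R)

# An object whose metadata says "two thirds of the way right":
gl, gr = pan_gains(2 / 3)
print(f"L gain {gl:.3f}, R gain {gr:.3f}, power {gl**2 + gr**2:.3f}")  # power == 1
```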
Presenter bio: Charles Robinson received BSEE and MSEE degrees from the University of Illinois, where he specialized in signal processing and began his professional career just as real-time digital audio signal processing was becoming a practical reality. Since joining Dolby Research in 1995 areas of research have included, acoustics, audio coding, interactive audio and spatial audio with applications to broadcast, gaming and cinema. Mr. Robinson has authored or coauthored over a dozen patents in audio signal processing, contributed to two Emmy-award winning products, and is a member of AES and IEEE.
Charles Robinson
14:45 Immersive Audio Systems and the Management of Consequent Sounds
William Redmann (Technicolor, USA)
Immersive audio is appearing more frequently in modern cinematic storytelling. In traditional sound mixing, scenarios can occur in which a first sound has a tight semantic coupling to a second sound, for example a gunshot and ricochet, or a handclap and its reverberation. In immersive sound systems, such precedent and consequent sounds may be directed to different locations so as to envelop the audience. When consequent sounds are not managed, the psychoacoustic principle known as the "Haas Effect" can result in portions of an audience misunderstanding the placement of precedent sounds, momentarily disrupting their experience. A series of immersive sound examples demonstrates the resulting problem and the effectiveness of a proposed management technique, applicable to object-based, wave field synthesized, and ambisonic reproduction.
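The geometry behind the problem can be checked seat by seat. The sketch below (a toy, not the proposed management technique) computes how much earlier the consequent sound arrives than the precedent sound at a given seat; where that lead exceeds roughly a millisecond, the precedence effect can pull localization toward the wrong source.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def consequent_lead_ms(seat, precedent_src, consequent_src):
    # Positive result: the consequent sound reaches this seat first,
    # risking mislocalization of the precedent sound (precedence/Haas).
    t_precedent = math.dist(seat, precedent_src) / SPEED_OF_SOUND
    t_consequent = math.dist(seat, consequent_src) / SPEED_OF_SOUND
    return (t_precedent - t_consequent) * 1000.0

# Gunshot from the screen, ricochet from a rear surround, rear seat:
print(f"{consequent_lead_ms((0, 18), (0, 0), (0, 20)):.1f} ms lead")
```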
Presenter bio: William Redmann started his career mixing technology and entertainment by pursuing a Master's degree in Engineering at UCLA while building practical electronic props for Battlestar Galactica and Buck Rogers. Twice Director of Technology at Walt Disney Imagineering, he developed and fielded ride systems including Indiana Jones at Disneyland Park, and twenty interactive attractions seen at DisneyQuest, Orlando. Currently a Fellow and Sr. System Architect at Technicolor, he assembled Technicolor's Digital Cinema Interoperability Testing Center (ITC) and worldwide hard drive distribution platform. By now, he's got 30 US patents issued and over 40 pending in fields including: Digital Cinema, Virtual Reality Audio, Online Media Production, Distributed Network Streaming Media and Interactive Systems, 3D Displays, Online Communities, Content Distribution to Mobile Devices, Healthy Play for Children, Travel Planning, and Electric Vehicle Infrastructure. His favorite two: a roller coaster you design and then ride, and a keyboard for dolphins to communicate with people; both really worked.
William Redmann
15:15 Object-Based Audio: Opportunities for Improved Listening Experience and Increased Listener Involvement
Robert Bleidt (Fraunhofer USA, USA); Arne Borsum and Harald Fuchs (Fraunhofer IIS, Germany); S. Merrill Weiss (Merrill Weiss Group LLC, USA)
A new TV audio system based on the MPEG-H Audio standard is being designed and tested to offer interactive and immersive sound, employing the standard's audio objects, height channels, and Higher-Order Ambisonics features. Object-based interactive audio offers users the ability to personalize their listening experience, setting their preferred language and dialogue level, or selecting elements to "hear their home team" or listen to their favorite race driver's radio. A four-stage process is introduced for implementing the complete system in TV networks. Additionally, the plant design, creative, and operational implications of producing content are discussed, based on the design and field testing of the system. Consumer reproduction implications are also presented, such as a "3D Soundbar" prototype, the control of loudness in the system, and rendering for playback on both traditional and new media devices.
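Conceptually, the personalization described here reduces to mixing decoded objects with user-chosen gains under broadcaster-set limits. The Python toy below shows that arithmetic only; MPEG-H's actual object metadata, rendering and loudness handling are far richer.

```python
import numpy as np

def personalized_mix(objects, user_gains_db):
    # objects: {name: mono PCM as float arrays of equal length}
    # user_gains_db: listener preferences, e.g. boosted dialogue
    out = np.zeros_like(next(iter(objects.values())))
    for name, pcm in objects.items():
        out += pcm * 10.0 ** (user_gains_db.get(name, 0.0) / 20.0)
    return out

n = 48000
objects = {
    "dialogue": np.random.randn(n) * 0.1,
    "crowd": np.random.randn(n) * 0.1,
    "commentary_en": np.random.randn(n) * 0.1,
}
# "Hear the dialogue better, lose the crowd a little":
mix = personalized_mix(objects, {"dialogue": +6.0, "crowd": -3.0})
```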
Presenter bio: Robert Bleidt is General Manager of the Audio and Multimedia Division of Fraunhofer USA Digital Media Technologies. Before joining Fraunhofer, he was president of Streamcrest Associates, a product and business strategy consulting firm in new media technologies. Previously, he was Director of Product Management and Business Strategy for the MPEG-4 business of Philips Digital Networks and managed the development of Philips Emmy-winning asset management system for television broadcasting. Prior to joining Philips, Mr. Bleidt served as Director of Marketing and New Business Development for Sarnoff Real Time Corporation, a video-on-demand venture of Sarnoff Labs. Previously, he was Director of Mass Storage Technology and inventor of SRTC's Carousel algorithm. Before joining Sarnoff, Mr. Bleidt was President of Image Circuits, a consulting engineering firm and manufacturer of HDTV research equipment.
Robert Bleidt

15:45 - 16:15

Coffee Break

Room: Exhibit Hall

16:15 - 17:45

Networked Media in the Facility-Part 3

Room: Salon 1
Chair: Al Kovalick (Media Systems Consulting, USA)

This session, presented in 3 parts, focuses on using packetized methods to move media and metadata in real time over networks. Presenters will cover Ethernet and IP methods for building production and broadcast environments and consider techniques for establishing common device clocks, video sync, frame-accurate switching and AV transport over IP. A mix of detailed technology reviews, tutorials, and case studies will also be presented. Don't miss this firsthand look at the media facility of the future.

16:15 Network Delivered References - Under the Hood and Across the System
Paul Briscoe (Consultant & SMPTE, IEEE, Canada)
The SMPTE ST 2059-1 and -2 standards will enable IP network delivery of all of today's reference signals to media equipment. This paper first explores how the IEEE 1588 Precision Time Protocol is used to deterministically generate the same signals as used today. Next, it discusses how the master works in SMPTE-specific ways, and how terminal equipment converts network precision time into real or virtual SMPTE (and other) reference signals. How the new PTP (grand)master behaves with respect to existing discrete-signal master generators is then examined, leading to an overview of the evolution of legacy systems to incorporate network reference. These systems are described in detail in terms of maintaining interoperability as equipment and facilities repurpose and evolve, and all-IP systems using network reference are also described. Finally, the paper looks at a use case of seamless evolution of a facility from HD-SDI and discrete reference signals to all-IP reference and media transport.
Presenter bio: Paul began his career in the broadcasting industry in 1980 at the CBC in Toronto. Specializing in the then-new arena of digital television, he was one of the designers of the Toronto Broadcast Center, with particular focus on the plant routing system, computer graphics facilities and overall systemization and timing. Prior to CBC (and during a brief hiatus), he was involved in technology startups and provided system and product design consultation to various clients. He jumped ship from CBC in 1994 to join Leitch Technology as Product Engineer, defining products for the new digital era. Over his 19 years at Leitch (subsequently Harris Broadcast, now Imagine Communications), he was a Project Leader, Development Group Leader, R&D Manager, Manager of Strategic Engineering and Principal Engineer. He left Harris Broadcast in November 2013, and now provides system, technology, design and standards consultation to the ever-evolving media industry. He has several patents granted and in process, is a member of SMPTE and IEEE, and is an active participant on numerous SMPTE standards committees. A lifelong Radio Amateur, Paul is also an avid curler in the winter and cyclist and gardener in the summer.
Paul Briscoe
16:45 Generating synchronous video signals from just time
J. Patrick Waddell (Harmonic Inc., USA)
The concept that a video generator can create a synchronous ("genlocked") output signal with only the knowledge of the correct current time seems radical to many. Yet SMPTE is preparing to publish a pair of key standards which define exactly how to do just this. This paper will explain the background and the methodologies being used to permit what may become the most important transformation in the history of television - the move from coax to Ethernet.
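The arithmetic that makes this possible is compact enough to show. Given the exact time elapsed since the epoch that ST 2059-1 defines and an exact rational frame rate, any device can compute where in the current frame it sits; two devices with the same time and the same rate land on the same phase, which is what "genlocked" means. A minimal Python sketch, using exact rational arithmetic to avoid drift:

```python
from fractions import Fraction

def frame_phase(ns_since_epoch, rate=Fraction(30000, 1001)):
    # Fraction of the current frame already elapsed (0 = top of frame),
    # computed exactly from time since the epoch and the frame rate.
    frames = Fraction(ns_since_epoch, 10**9) * rate
    return frames - frames.numerator // frames.denominator

# Any two devices agreeing on time agree on phase -> "genlock" with no
# sync cable. Example: one PTP timestamp, 29.97 fps.
phase = frame_phase(1_414_000_000_123_456_789)
print(f"{float(phase):.6f} of a frame into the current frame")
```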
Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in 4 Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
J. Patrick Waddell
17:15 PTP deployment in Large Networks - Traps and Pitfalls
Nikolaus Kerö (Oregano Systems, Austria); Thomas Kernen (Cisco, Switzerland); Tobias S. Müller (University of Applied Sciences Technikum Wien & Oregano Systems, Austria)
The IEEE 1588 Precision Time Protocol (PTP) is a proven method of distributing highly accurate time information over Ethernet networks, allowing devices to synchronize their clocks to a common time source to within less than 100 ns. Deploying and operating PTP in large networks with a high number of PTP nodes with differing demands for synchronization accuracy requires detailed planning of the network topology itself. If the network infrastructure is shared with other applications, especially those with considerable bandwidth demands, PTP performance will be impacted. PTP precision relies on the optimal configuration of all network devices as well as the appropriate settings of all relevant PTP parameters. The principles for designing and configuring large broadcasting networks are described together with PTP performance optimization techniques. The benefits and efficient use of PTP-enabled network devices are covered as well. For typical network topologies, measurement results show the effect of individual optimization tasks.
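As background for the discussion, the core PTP arithmetic is worth having in front of you: from the four timestamps of a sync exchange, a slave estimates its offset and the path delay under the assumption of a symmetric path. Asymmetry, which heavily loaded or poorly planned networks produce, corrupts the offset estimate directly. In Python:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    # t1: master sends Sync        t2: slave receives Sync
    # t3: slave sends Delay_Req    t4: master receives Delay_Req
    # Assumes a symmetric path; asymmetry shows up as offset error.
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Slave clock 400 ns fast, 1 us of one-way path delay (seconds):
print(ptp_offset_and_delay(0.0, 1.4e-6, 10.0e-6, 10.6e-6))
```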
Presenter bio: After receiving a Masters Degree in Communication Engineering with distinction from the Vienna University of Technology, Nikolaus led the ASIC design division at the university's Institute of Industrial Electronics, successfully managing numerous research projects and industry collaborations. His research activities centered on distributed systems design, especially highly accurate and fault- tolerant clock synchronization. In 2001 he co-founded Oregano Systems Design & Consulting Ltd. as a university spin-off. While offering embedded systems design services to customers, Oregano transferred research results into a complete product suite for highly accurate clock synchronization under the brand name syn1588®, for which Nikolaus manages both development and marketing. He is an active member of the IEEE1588 standardization committee and the SMPTE 33TC standard group and holds frequent seminars on clock synchronization for both industry and academia.
Presenter bio: Thomas Kernen is a Consulting Systems Engineer in Cisco's European Enterprise Networking architecture team. His main area of focus is defining architectures for transforming the broadcast industry to an all-IP video infrastructure. Thomas is a member of the IEEE Communications and Broadcast Societies, and the Society of Motion Picture & Television Engineers (SMPTE). He is active within a number of trade and industry organisations including the Digital Video Broadcasting (DVB) Project, the SMPTE Standards Committees and the European Broadcasting Union (EBU) working groups. Prior to joining Cisco, Thomas spent ten years with various telecoms operators, including a FTTH triple-play operator, for whom he developed their video architecture.
Nikolaus Kerö, Thomas Kernen

Developments in Audio Technology, Part 2: Delivering on the Promise

Room: Salon 2
Chair: Jerry C Whitaker (Advanced Television Systems Committee, USA)

New audio services represent new opportunities for content producers, and new ways of enjoying programming for consumers. This new technology, of course, is of little value if it doesn't find its way through the long and sometimes very complex chain that stretches from the microphone on the stage to the speakers in the home. This session examines some important elements that comprise parts of this chain, including interchange, distribution and delivery of immersive audio; a detailed examination of loudness vs. speech normalization; and methods of reducing audio transmission impairment. Designing an advanced audio system and getting it to work efficiently in a wide variety of applications is a major challenge. Join us as we examine ways to address this challenge.

16:15 Immersive & Personalized Audio: A Practical System for Enabling Interchange, Distribution & Delivery of Next Generation Audio Experiences
Jeffrey Riedmiller and Sripal Mehta (Dolby Laboratories, USA); Prinyar Boon (Dolby Europe Limited, United Kingdom); Nicolas Tsingos (Dolby Laboratories, USA)
Recent advancements in cinema audio continue to bring more lifelike experiences to theatre audiences. These advancements are also driving a transformation across the remaining audio ecosystem and are poised to enable a richer experience in the living room and on-the-go. This paper will explore, propose and contrast several practical methods that enable accessible, immersive and personalized experiences from production through playback across broadcast, cable, satellite, IPTV and OTT platforms.
Presenter bio: Jeffrey Riedmiller is currently Senior Director of the Sound Group in the Office of the CTO at Dolby Laboratories in San Francisco. He leads a creative and global team responsible for development and innovation across all of Dolby's technologies in sound. These include Dolby's audio coding systems (Dolby Digital (AC-3), Dolby Digital Plus (E-AC-3) and Dolby E) as well as a unique suite of audio signal processing technologies utilized throughout the professional and consumer media and electronics industries worldwide. Joining Dolby in 1998, he worked intensively on the launch of multichannel audio across numerous digital television (DTV) and HDTV services throughout North America. He is also the visionary creator of several revolutionary product and technology innovations that have defined how television loudness is accurately estimated and controlled worldwide. Two well-known examples are the Dolby LM100 Broadcast Loudness Meter with Dialogue Intelligence and the DP600 Program Optimizer, which received multiple Emmy Awards for Outstanding Achievement in Engineering Development in 2004, 2009 and 2011, respectively. Riedmiller is an active member of the Institute of Electrical and Electronics Engineers (IEEE) and the Society of Cable Telecommunications Engineers (SCTE) and its Standards Committees. He has authored and presented several technical papers over the past 16 years; some of his most recent published works include an AES paper on loudness normalization for portable media devices and a chapter on audio for digital television in the 10th Edition of NAB's Engineering Handbook. Previously he served as co-chairman for the National Cable & Telecommunications Association (NCTA) Engineering Committee – Audio Quality Subcommittee, as well as Associate Editor of Transactions on Broadcasting, the journal of the Institute of Electrical and Electronics Engineers (IEEE) Broadcast Technology Society.
Jeffrey Riedmiller
16:45 Loudness vs. Speech Normalization in Broadcast
Thomas Lund (TC Electronic A/S, Denmark)
This paper presents an empirical study of the differences between level normalization of feature films, TV drama and regular broadcast using the two dominant methods: loudness normalization and speech ("dialog") normalization. Instead of adding to the continuing debate over the subjective merits of one method over the other, technical aspects such as headroom requirements and measurement uncertainties are examined. The paper is an extension of a recent article in the SMPTE Motion Imaging Journal. Listening examples will be provided for the presentation, which will not mention or promote any commercial equipment.
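Stripped of the measurement questions the paper examines, the two rules differ only in which level they anchor to, which the following few lines of Python make explicit (targets and example levels are illustrative, in LUFS):

```python
def loudness_norm_gain(program_lufs, target=-23.0):
    # Gain that drives measured *program* loudness to target.
    return target - program_lufs

def speech_norm_gain(speech_lufs, target=-23.0):
    # Gain that drives measured *speech* ("dialog") loudness to target.
    return target - speech_lufs

# A feature film, quiet dialogue under loud effects (made-up numbers):
program, speech = -27.0, -31.0
print(f"loudness norm: {loudness_norm_gain(program):+.1f} dB, "
      f"speech norm: {speech_norm_gain(speech):+.1f} dB")
# Different anchors, different gains, hence different headroom demands.
```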
Presenter bio: Thomas Lund joined TC Electronic in 1997 and currently holds the position of CTO Broadcast & Production. He was among the first to document the sonic consequences of the 'loudness wars' in music, and has been responsible for developments in areas such as spatialization, localization, true-peak detection and loudness. Drawing on a background in medical science, Thomas has helped broadcast standards break free of proprietary technology and move towards the transparent and facts-based solutions of today. Thomas has contributed to audio standardization in Scandinavia, Europe and Japan, within the AES, ITU, EBU and ATSC.
Thomas Lund
17:15 Digital Audio Transmission Impairment and Link Failure: Test Data, and Recommendations for Improved Industry Standards and Reference Designs
Jon D. Paul (Scientific Conversion, Inc., USA)
Broadcast and cinema applications for digital audio transmission with sample rates (FS) over 48 kHz (e.g. FM digital composite) have experienced signal dropouts and link failures over cables longer than 30 m. Engineers may assume that digital transmission works, regardless of source, cable, destination or sample rate. The cables, interface ICs, EMI filters and components for digital audio transmission are designed according to AES3-4, AES2id-2012 and ANSI/SMPTE 276M-1995 standards. The paper presents field observations and test results for 100 m lengths of many types of balanced and unbalanced cables at FS 48-192 kHz, revealing huge variations in cable transmission and received spectra and eye-patterns. The author includes a review and recommendations to update the standards for sample rates, rise times, bandwidths, eye pattern masks, and circuit designs, to apply to higher transmission rates. The result will be improved and more reliable signal transmission for all sample rates and cable lengths.
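One reason higher sample rates expose marginal cables is simple arithmetic: the AES3 line rate scales linearly with FS, so a link that was comfortable at 48 kHz has a quarter of the unit interval at 192 kHz. A quick Python check of the scaling, assuming the standard 64-bit AES3 frame (two 32-bit subframes) and biphase-mark coding:

```python
# AES3 carries one 64-bit frame (two 32-bit subframes) per sample period,
# biphase-mark coded, so line rate and bandwidth scale with sample rate.
FRAME_BITS = 64

for fs in (48_000, 96_000, 192_000):
    line_rate = FRAME_BITS * fs          # data rate on the interface, b/s
    ui_ns = 1e9 / line_rate              # unit interval per data bit, ns
    print(f"FS {fs/1000:>5.0f} kHz: {line_rate/1e6:6.3f} Mb/s, "
          f"UI {ui_ns:6.1f} ns (biphase-mark doubles transition density)")
```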
Presenter bio: Jon Paul is from Manhattan. He has an MSEE from City College of New York. Starting in 1968, Jon designed real-time spectrum and FFT analyzers. In 1972, he worked as Chief Engineer at Eventide, N.Y., where he designed some of the first analog and digital studio sound processors, such as digital delay lines. In 1983, Jon started Scientific Conversion, Inc. to consult in the fields of power electronics, digital audio and high voltage power supplies. In 1983 Jon designed and manufactured the first 12 kW HMI ballasts for cinema lighting, introduced at the 1984 NAB and Los Angeles Olympic Games. Starting in 1989, Scientific Conversion has focused on research and manufacture of digital audio transformers for the broadcast, studio and cinema markets. Jon has written 3 AES papers about digital audio transformers. Jon is the holder of seven US patents for energy-saving electronic ballasts and telecommunications. His US patent 5,051,799 was for the world's first digital microphone. Starting in 2002, this patent has been successfully litigated and licensed to all major mobile providers and handset manufacturers in the USA. Jon was one of the authors of the AES-42 Standard for Digital Microphones. In 2008, Jon started the non-profit Paul Foundation to fund scholarships and endowments in the fine arts. The foundation now concentrates on funding research into Parkinson's disease, which recently led to a significant new laboratory model and three new lines of research. Starting in 1980, he founded the Crypto-Museum, a collection of vintage posters, WWII technology, cipher and spy equipment. Jon is an internationally recognized expert, writer and speaker on the connections between WWII cipher machines and modern DSP, video and audio technology. Jon travels extensively in Europe and is an avid amateur photographer.
Jon D. Paul

18:00 - 20:00

Welcome Reception

Room: Exhibit Hall

20:00 - 22:00

Student Film Showcase

Room: Salon 1

Wednesday, October 22

07:30 - 08:30

Morning Coffee

Room: Ray Dolby Ballroom Terrace

08:30 - 10:30

File Based Workflows - Part 1: Tools of the Trade - Conversion, Captions and Compression

Room: Salon 1
Chair: Sara Kudrle (Grass Valley, a Belden Brand, & SMPTE Western Region Governor, USA)

File-based workflows are evolving to encompass more and more formats, options, rules and methods for transport. This session will begin with an overview of activity within the Joint Task Force on File Formats and Media Interoperability, and then consider emerging tools that are enriching our workflows, such as automatic frame rate conversion, closed caption management and compression of legacy interlaced formats using HEVC.

08:30 Addressing Issues in File Based Workflows: the Joint Task Force on File Formats and Media Interoperability
Clyde Smith (FOX NE&O, USA); Thomas Bause Mason (NBCUniversal, USA); Harold Geller (Advertising Digital Identification, LLC (Ad-ID), USA); Christopher J Lennon (MediAnswers, USA)
The Joint Task Force on File Formats and Media Interoperability is jointly sponsored by the North America Broadcasters Association (NABA), the Advanced Media Workflow Association (AMWA), the European Broadcasting Union (EBU, as an observer), the Society of Motion Picture and Television Engineers (SMPTE), the International Association of Broadcast Manufacturers (IABM) and Ad-ID, representing the American Association of Advertising Agencies and the Association of National Advertisers (ANA). The vision of the Joint Task Force is that new and more efficient file-based workflows may be enabled through improving the specification and exchange of professional media between organizations. This paper will report on the activities and findings to date of the task force. It will review use case collection, requirements gathering and analysis, as well as the current working group process and progress.
Presenter bio: Clyde Smith is the Senior Vice President of New Technologies for FOX Network Engineering and Operations. In this role he is supporting broadcast and cable networks, production and post-production operating groups in addressing their challenges with new technologies, focusing on standards, regulations and lab proof-of-concept testing and evaluation. Prior to joining FOX he was SVP of global broadcast technology and standards for Turner Broadcasting System, Inc., where he provided technical guidance for the company's domestic and international teams. He previously held positions as SVP of Broadcast Engineering Research and Development at Turner, SVP & CTO at Speer Communications, and Supervisor of Communications Design and Development Engineering for Lockheed Space Operations at the Kennedy Space Center. Smith also supported initiatives for Turner Broadcasting that were recognized by the Computer World Honors program with the 2005 21st Century Achievement Award for Media Arts and Entertainment and a Technology and Engineering Emmy Award for Pioneering Efforts in the Development of Automated, Server-Based Closed Captioning Systems. In 2007 he received the SMPTE Progress Medal and in 2008 he received the Storage Visions Conference Storage Industry Service Award.
Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely-used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he's a Porsche racer, and heads up a high performance driving program.
Presenter bio: Harold S Geller is Chief Growth Officer of Advertising Digital Identification LLC (Ad-ID), a US-based advertising-metadata system (the UPC code for ads across all platforms), which is a joint venture of the American Association of Advertising Agencies (4A's) and the Association of National Advertisers (ANA). Harold speaks and writes extensively regarding interoperability, digital workflow and metadata in advertising and is the co-author of four white papers on the subject. Harold's advertising career spans nearly 30 years in the United States and Canada. He has worked in media buying/planning, account management, financial, and technology roles at MindShare, Ogilvy & Mather, and McCann Erickson, and the now-defunct Ted Bates and Foster Advertising. Harold is a graduate of radio and television broadcasting from Seneca College (Toronto, Ontario, Canada).
Clyde Smith, Christopher J Lennon, Harold Geller
09:00 Re-inventing the wheel or choosing the right one for the job? Frame rate manipulation for the file-age
Bruce Devlin (Dalet, United Kingdom); Simon Adler (Dalet, USA)
"You shouldn't re-invent the wheel" I was told by one of my university professors. "Why not"? I replied. "I want a lightweight wheel optimised for a bicycle, not a skateboard nor a 747". My professor taught me a valuable lesson, no-one likes a smart-arse. In this modern, file based, multi-platform world where more performance is required for a smaller budget, Dalet will present the (AT)3 concept - AmberFin's Advanced Adaptive Temporal Transform Toolbox. Choosing the frame rate or interlace nature of content is a business problem fulfilled by technology; this paper will explore how the right tool selection can minimize costs and maximize quality depending on the requirements of the output media. The paper will consider, OTT deliverables, Internationalisation and Versioning requirements as well as CPU, GPU, cloud and the fault-tolerant technology to make the correct tool choice and deliver those requirements in a cost-sensitive file-based world.
Presenter bio: Based in Dalet's LA office, Simon is responsible for the market development of Dalet and AmberFin solutions across the US. A senior broadcast technology executive with 15 years' experience and a founder member of AmberFin, Simon has been managing and overseeing complex broadcast workflows for customers, partners and system integrators, offering a holistic service both internally and externally. Simon came to AmberFin from Snell and Wilcox, where he was Head of the Project Team, managing the pre- and post-sales support teams for major global projects.
Presenter bio: Bruce Devlin has been working in the media industry for 25 years. In his career he has designed RF antennas, circuit boards, FPGAs, ASICs, hardware systems, video algorithms, compression algorithms, software applications, software systems and media workflows. He has worked for the BBC, Thomson, Snell & Wilcox and AmberFin, and is currently the Chief Scientist at Dalet, where he is responsible for strategy and excellence in all aspects of media. Although a technologist at heart, Bruce never forgets that profitable, successful media companies, which show great content, pay for all the technological toys he invents. Bruce is an alumnus of Queens' College, Cambridge, a member of the IABM, a fellow of SMPTE and a co-designer of MXF; he has won many awards, authored many patents, and written books and standards that you all know, love and sometimes hate.
Simon Adler, Bruce Devlin
09:30 Automating closed caption verification, timing, and language identification
Drew Lanham (Nexidia, USA)
The rapid increase of content due to growth in OTT, foreign distribution, and broadcast channels, combined with recent regulatory requirements related to closed captions, has created scalability and cost-management challenges for both video program owners and distributors worldwide. Caption verification, caption timing, and language identification pose challenges too significant to be addressed using current manual approaches, but they can be automated to ensure that all caption files correctly appear against the right media in the right language at the right time. This paper describes the technology behind automated speech analysis and how it should be implemented within a scalable QC solution.
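As a hedged illustration of the underlying idea (not Nexidia's implementation), the sketch below assumes an upstream speech analyzer has already produced word-level timestamps, then flags caption cues whose text is not heard near the stated cue time. All names, data and thresholds are hypothetical.

    # Hypothetical sketch of automated caption timing verification.
    # Assumes an upstream speech analyzer has produced word-level timestamps;
    # neither the data nor the threshold reflects any specific product.

    from dataclasses import dataclass

    @dataclass
    class Cue:
        start: float   # caption start time, seconds
        text: str      # caption text

    def verify_cues(cues, asr_words, max_offset=2.0):
        """Flag cues whose first word is not heard within max_offset seconds."""
        # asr_words: list of (time_in_seconds, word) from speech analysis
        issues = []
        for cue in cues:
            first = cue.text.split()[0].lower().strip(".,!?")
            hits = [t for t, w in asr_words if w.lower() == first]
            if not hits or min(abs(t - cue.start) for t in hits) > max_offset:
                issues.append(cue)
        return issues

    cues = [Cue(10.0, "Welcome back to the show."), Cue(55.0, "Goodbye.")]
    asr = [(10.4, "welcome"), (10.7, "back"), (61.2, "goodbye")]
    for bad in verify_cues(cues, asr):
        print(f"Cue at {bad.start}s may be mistimed: {bad.text!r}")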
Presenter bio: Drew leads Nexidia’s Media business unit, overseeing the group's product management, strategy, business development and sales. Prior to Nexidia, Drew was Vice President of Business Development for Good Technology. Prior to Good Technology, Drew held various leadership positions at Yahoo, including Vice President & General Manager of Access (now Connected Life), a business unit which underpinned Yahoo’s premium subscription service business and accounted for more than half of Yahoo’s premium subscribers, eventually exceeding $1B in revenue for Yahoo. Drew was one of two principals who negotiated and implemented Yahoo’s strategic alliance with SBC Communications (now AT&T). In the role of Vice President of Business Development at Yahoo, he developed and managed complex alliances ($20M+) and distribution/licensing deals in media, technology/software licensing, wireless, and content components spanning all business units. Prior to Yahoo, Drew was the Co-founder & Vice President, Business Development at Encompass (acquired by Yahoo in 1999). Drew holds degrees in Economics and Finance from Baylor University and is based in Palo Alto, California.
10:00 HEVC efficiency assessment for contribution services of interlaced content
Juan Jose Anaya (SAPEC, Spain); Damian Ruiz (Universitat Politècnica de València, Spain)
In January 2014 a new family of HEVC profiles named "Range Extensions" was approved in order to cover the needs of high-quality production environments, such as primary distribution, contribution services, and editing/post-production. The new "Main 422@10" profile will become the successor of the successful "Hi422P" profile of H.264/AVC, achieving high efficiency for emerging formats beyond HD, such as 4K and 8K. However, while HD interlaced content remains the mainstream for broadcast production, the HEVC profiles do not include specific tools for interlaced content as H.264/AVC did. This paper addresses the issues involved in HD interlaced contribution services under the HEVC "Main 422@10" profile, with the aim of identifying the true HEVC efficiency compared with the Hi422P profile of H.264/AVC. The simulation results report the bandwidth savings and the quality improvements that broadcasters and network operators can achieve for HD interlaced encoding using the new HEVC profiles.
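To give a flavor of the kind of measurement such a study depends on (a sketch, not the authors' actual test setup), per-field quality for interlaced material can be computed by splitting each frame into its two fields before comparison:

    # Illustrative per-field PSNR for interlaced content evaluation.
    # A contribution-codec study would compare encodes at matched bit rates;
    # this sketch only shows the field-split measurement itself.

    import numpy as np

    def psnr(ref, dec, peak=1023.0):
        """PSNR in dB for 10-bit samples."""
        mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    def field_psnr(ref_frame, dec_frame):
        """Split a frame into top/bottom fields (alternate lines) and measure each."""
        return (psnr(ref_frame[0::2], dec_frame[0::2]),   # top field
                psnr(ref_frame[1::2], dec_frame[1::2]))   # bottom field

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 1024, size=(1080, 1920), dtype=np.uint16)
    dec = np.clip(ref + rng.integers(-2, 3, size=ref.shape), 0, 1023)
    print("top/bottom field PSNR (dB): %.1f / %.1f" % field_psnr(ref, dec))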
Presenter bio: Damián Ruiz Coll received the M.S. degree in Telecommunications Engineering from the Polytechnic University of Madrid (UPM), Spain, in 2000. He is a PhD candidate in Computer Science, and completed a doctoral research stay at Florida Atlantic University (FAU), United States, in 2012. He is currently working as a researcher at the Mobile Communication Group (MCG) of the Institute of Telecommunications and Multimedia Applications (iTEAM), and his research focuses on the real-time optimization of HEVC (High Efficiency Video Coding) for broadcasting and mobile networks. He participates as a member of DVB (Digital Video Broadcasting), FOBTV (Future of Broadcast TV), and the "Beyond HD" group of the EBU (European Broadcasting Union). He has more than 15 years of experience as an engineer at the Spanish public broadcaster (RTVE), where he was involved in R&D projects, including collaborations with international committees such as DVB, EBU, DigiTAG, and the "HDTV Spanish Forum" of the Ministry of Industry. He has collaborated on several EBU video coding test plans for video quality assessment of new generations of production and contribution codecs.
Damian Ruiz

Dammit, Gamut, I Love You!

Room: Salon 2
Chairs: Arjun Ramamurthy (20th Century Fox, USA), Kevin J Stec (Dynamic Digital Depth, Inc., USA)

This session looks into the challenges of extending the color gamut and dynamic range for next-generation imaging systems, including UHDTV. Issues addressed include how to maintain creative intent, how to color match between HDTV and wide-gamut UHDTV displays, how to assess the quality of color conversions, and how to view the results of color and dynamic range conversions.

08:30 Color management for wide-color-gamut UHDTV production
Kenichiro Masaoka, Takayuki Yamashita, Yukiko Iwasaki and Yukihiro Nishida (NHK Science & Technology Research Laboratories, Japan); Masayuki Sugawara (NHK, Japan)
UHDTV is a wide-color-gamut system, as standardized in Recommendation ITU-R BT.2020 and SMPTE ST 2036-1, that covers most real object colors and encompasses the gamuts of HDTV, Adobe RGB, and DCI-P3. The development of wide-gamut displays and high-quality gamut mapping are major challenges in the workflow of UHDTV production today. While monochromatic light sources, such as lasers, are ideal for UHDTV wide-gamut displays, wide-gamut LCDs with non-monochromatic backlight sources, such as quantum dot LEDs, may well be used from the viewpoint of both cost and performance. Furthermore, a high-quality gamut mapping algorithm between UHDTV and HDTV for live broadcast production is essential. This paper offers solutions to these challenges.
Presenter bio: Kenichiro Masaoka received his B.S. in electronics engineering and M.S. in energy engineering from the Tokyo Institute of Technology, Japan. He joined NHK (Japan Broadcasting Corporation) in 1996. He is a Principal Research Engineer for the Advanced Television Systems Research Division, NHK Science and Technology Research Laboratories, Tokyo. He received his Ph.D. in Engineering from the Tokyo Institute of Technology in 2009. He worked with Professors Mark Fairchild and Roy Berns for a six-month residency as a Visiting Scientist at the Munsell Color Science Laboratory (MCSL) at the Rochester Institute of Technology (RIT) in 2012. His research interests include color science, human vision, and digital imaging systems. He is a member of IEEE and the Institute of Image Information and Television Engineers of Japan (ITE).
Kenichiro Masaoka
09:00 Quality Assessment Framework for Color Conversions and Perception
François Helt (Highlands Technologies Solutions, France); Valerie La Torre (Highlands Technologies Solutions, France)
Gamut conversion has become an important topic in the audiovisual industry. As larger gamuts and higher dynamic range are introduced, it becomes necessary to provide fast, high-quality gamut conversion methods. The objective of a gamut conversion should be to provide the audience with a perception of the program that carries the same artistic intent as on the initial working display. Perceptual facts, as well as perception variability for a given display, must also be taken into account. Building on preceding research on quality frameworks, a measurement structure for the evaluation of gamut mapping is presented. It results in a measurement vector allowing color comparisons between two rendition channels.
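The measurement-vector idea can be pictured with the simplest color-difference building block, the CIE 1976 Delta E*ab, computed per sampled patch on each rendition channel; a real framework would use a more perceptually uniform metric such as CIEDE2000 and many more dimensions. A minimal sketch with hypothetical patch values:

    # Minimal color-comparison building block: CIE 1976 Delta E between
    # corresponding patches rendered through two conversion channels.
    # A production quality framework would aggregate many such measures.

    import math

    def delta_e_76(lab1, lab2):
        """Euclidean distance in CIELAB; ~2.3 is an often-quoted just-noticeable difference."""
        return math.dist(lab1, lab2)

    # Hypothetical measurements of one memory-color patch (a skin tone) after
    # two different gamut conversions:
    reference = (65.0, 18.0, 17.0)            # L*, a*, b* on the mastering display
    channel_a = (65.4, 18.9, 16.2)
    channel_b = (63.0, 24.5, 12.0)

    for name, lab in (("A", channel_a), ("B", channel_b)):
        print(f"channel {name}: dE*ab = {delta_e_76(reference, lab):.2f}")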
Presenter bio: François Helt has a background in mathematics and filmmaking, with 28 years' experience in professional video and film. He has been designing image processing software since 1981, and from 1991 managed R&D teams dedicated to special effects software, including film scanner and film printer drivers. He was technical manager of the European project "Limelight", aimed at the design of a complete digital film restoration system, from 1994 to 1997, and founder and CEO of DUST, a company specialising in the digital restoration and processing of film, from 1997 to 2002. He is the author of automatic digital film restoration software, designed colour conversion and calibration software for Digital Cinema from 2004 to 2006, and was Technical and Application Manager for Digital Cinema at Doremi from December 2006. He has been Chief Scientific Officer for Highlands Technologies Solutions since 2013. Conferences and lectures: SPIE Conference, San Jose, February 1992, "High definition tape to film transfer"; 134th SMPTE Technical Conference, Toronto, November 1992, "High definition tape to film transfer"; CVPP, Atlanta, November 1997, "Deterioration Detection for Digital Film Restoration"; IBC, Amsterdam, September 1998, workshop on film transfer; IBC, Amsterdam, September 1999, workshop on digital convergence; Association of Moving Image Archivists Conference, Los Angeles, November 2000, "Digital film restoration" (The Reel Thing technical meeting); IEE London, January 2001, "Advances in Digital Restoration for Addressing the Vinegar Syndrome Effects"; Festival Cinema Ritrovato, Bologna, Italy, July 2001, "Digital restoration applied to the vinegar syndrome"; Association of Moving Image Archivists Conference, Portland, November 2001, "Vinegar Syndrome" (The Reel Thing technical meeting); 8th World Multiconference on Systemics, Cybernetics and Informatics (SCI 2004), Orlando, July 2004, "Bayesian framework for digital restoration of film: A real case study and the role of perception"; SPIE Electronic Imaging 2005, San Jose, January 2005, "Image Quality Evaluation in the Field of Digital Film Restoration" (with M. Chambah and C. Saint Jean); SMPTE Fall 2009 Technical Conference, Los Angeles, "Proposal for practical screen luminance uniformity measurement"; SMPTE Fall Conference 2010, Los Angeles, "Method and good estimators for projection uniformity measurement" and "Quality Metrics in long-term preservation and restoration paradigms"; SMPTE Fall Conference 2011, Los Angeles, "Matching The Human Visual System, Balancing Bit Depth, High Dynamic Range And Coding Efficiency"; SMPTE Fall Conference 2012, Los Angeles, "Practical Quality Assessment for Digitized Film Content"; Association of Moving Image Archivists Conference, Seattle, December 2012, "Transmittance Film Scanning"; SMPTE Fall Conference 2013, Los Angeles, "French Cinema goes IMF".
François Helt
09:30 A better color matching between HD and UHD content
Lars Borg (Adobe, USA)
The introduction of a wide-gamut color space in Ultra-HD television creates a need to match colors on wide-gamut UHD displays with colors on conventional HD displays. Many contemporary color conversion methods that apply to conversion from HD to SD, and color conversion methods mandated by current television standards, fail to produce a good color match when converting colors from HD or narrow-gamut UHD color spaces to the wide-gamut UHD color space. This paper illustrates the color errors caused by the application of several of these conversion methods, and recommends one method, using display-referred colorimetry, for better color matching between narrow-gamut and wide-gamut displays.
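The display-referred approach recommended here amounts to converting through linear light with the appropriate primary matrix rather than reusing non-linear code values. A simplified sketch, using the standard BT.709-to-BT.2020 primary conversion matrix (cf. ITU-R BT.2087) and a stand-in power-law transfer function instead of the exact BT.709/BT.1886 curves:

    # Simplified display-referred HD-to-UHD color conversion sketch.
    # The 3x3 matrix is the standard BT.709 -> BT.2020 primary conversion;
    # the gamma here is a stand-in power law, not the exact broadcast curves.

    import numpy as np

    M_709_TO_2020 = np.array([
        [0.6274, 0.3293, 0.0433],
        [0.0691, 0.9195, 0.0114],
        [0.0164, 0.0880, 0.8956],
    ])

    def hd_to_uhd(rgb709, gamma=2.4):
        """Convert non-linear BT.709 RGB (0..1) to BT.2020 RGB via linear light."""
        linear = np.power(rgb709, gamma)          # to display-referred linear light
        linear2020 = M_709_TO_2020 @ linear       # re-express in BT.2020 primaries
        return np.power(np.clip(linear2020, 0, 1), 1 / gamma)

    # A saturated BT.709 red lands well inside the BT.2020 gamut:
    print(hd_to_uhd(np.array([1.0, 0.0, 0.0])))
    # Naively reusing the same code values on a BT.2020 display would instead
    # show a visibly more saturated, incorrect red.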
Presenter bio: Lars Borg is a Principal Scientist at Adobe, with over 20 years of experience in color management. Lars develops solutions, specifications and standards for digital imaging, image processing, digital cinema (ACES), color management, CinemaDNG, high dynamic range, wide color gamut, UHDTV, video compression and metadata. He holds over 30 patents in these areas and is active in SMPTE, ISO and ICC standards development.
Lars Borg
10:00 High Dynamic Range Intermediate
Gary Demos (Image Essence LLC, USA)
There is a need to exchange high dynamic range wide gamut moving images that embody creative intent. Simple definitions using CIE 1931 chromaticity having extended gamut may not suffice. It may be desirable to extend gamut beyond P3, even for material mastered using P3 displays and projectors. It is also clearly useful to master to a dynamic range that may exceed a given mastering device. Reduced gamut and dynamic range used temporarily during mastering provides a means of checking image integrity outside of device gamut and range limits. However, additional information about mastering emission spectra is also useful, yet is difficult to interpret as gamut widens further. This paper seeks to explore these issues, and to propose approaches to defining an HDR Intermediate that can be exchanged with increased confidence with respect to creative mastered intent.
Presenter bio: Gary Demos is the recipient of the 2005 Gordon E. Sawyer Oscar for lifetime technical achievement from the Academy of Motion Picture Arts and Sciences. He has pioneered in the development of computer generated images for use in motion pictures, and in digital film scanning and recording. He was a founder of Digital Productions (1982-1986), Whitney-Demos Productions (1986-1988), and DemoGraFX (1988-2003). He is currently involved in digital motion picture camera technology and digital moving image compression. Gary is CEO and founder of Image Essence LLC, which is developing wide-dynamic-range codec technology based upon a combination of wavelets, optimal filters, and flowfields.
Gary Demos

08:30 - 10:00

Cinema Workflow - A Brief Moment in Time

Room: Theatre (Chinese 6)
Chair: Kevin Wines (Doremi Labs, USA)

Fans of the new TV series Cosmos will agree that the producers have presented the vast intricacies and marvels of our universe in a remarkably engaging and understandable way. Over the last decade, physicists have gained a consistent, high-level understanding of the incredibly complex mechanisms of nature.

Ah, if only cinema workflows were as simple to understand… They remind us of dark matter: we know it's there; we know it affects us in innumerable ways; and yet we can't consistently describe its attributes or how it affects everything else around it. Further, just like our knowledge of the universe, cinema workflows are ever-changing, posing new challenges to our skills and understanding of the process.

In this session, our presenters will help us explore and better understand a few of the "dark" areas of cinema workflow. And hopefully, in the end, we will be one step closer to a "Unified Theory" of Cinema Workflow.

08:30 Camera Raw Workflows - Like Film, but Digital
Ed Reuss (Unaffiliated, USA); Lars Borg (Adobe, USA)
If you liked working with film, the techniques for controlling exposure in camera raw formats will feel very familiar. This paper presents what all camera raw formats have in common, as well as the different approaches used by several vendors to preserve the highest-fidelity image information while managing the large amounts of data required to represent those images. Camera raw workflows provide a variety of techniques to convert the sensor data into RGB and YCbCr image formats suitable for mastering, along with methods for generating a specific "look" for the images. As with film, the tremendous dynamic range afforded by a camera raw format permits a wide range of exposure during acquisition, which must be mapped into the limited dynamic range of the final output formats in the workflow.
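A toy sketch of the common core described here: every raw workflow demosaics the sensor mosaic, applies exposure in linear light, and renders to an output transfer curve. This is illustrative only; vendor pipelines differ substantially and add white balance, color matrices and look management.

    # Toy camera-raw development: demosaic, linear exposure, output gamma.
    # Real pipelines add white balance, color matrices, noise handling, looks, etc.

    import numpy as np

    def demosaic_rggb(raw):
        """Crude 2x2-binned demosaic of an RGGB Bayer mosaic (half resolution)."""
        r = raw[0::2, 0::2]
        g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
        b = raw[1::2, 1::2]
        return np.stack([r, g, b], axis=-1)

    def develop(raw, stops=0.0, gamma=2.2, white=4095.0):
        rgb = demosaic_rggb(raw.astype(np.float64)) / white  # linear 0..1
        rgb *= 2.0 ** stops                                  # exposure, like printer lights
        return np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)       # render to display gamma

    raw = np.random.default_rng(1).integers(0, 4096, size=(8, 8))
    print(develop(raw, stops=+1.0).shape)   # (4, 4, 3) RGB image, pushed one stop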
Presenter bio: Edward Reuss is an independent consultant specializing in video, audio and Wi-Fi networks, particularly for very low latency applications. Earning his MSEE at Colorado State University, Ed started in test and measurement, for Hewlett Packard (Agilent), Tektronix and Wavetek. He worked at General Instrument on the Eurocypher project for British Satellite Broadcasting (BSB). After several years developing scientific instruments at Scripps Institution of Oceanography, he was a Director of Systems Engineering at Tiernan Communications, developing real-time MPEG-2 video encoders for DSNG and network distribution. He switched to consumer products as a Principal Engineer in Plantronics' Advanced Technology Group, where he developed several advanced technology prototype headsets incorporating DSP, Bluetooth and Wi-Fi. Since then, he has consulted for several clients, including GoPro, Clair Global, TiVo and Intel. Ed is active in the SMPTE Standards Community, a senior member of the IEEE and voting member of the IEEE 802.11 Working Group.
Presenter bio: Lars Borg is a Principal Scientist at Adobe, with over 20 years of experience in color management. Lars develops solutions, specifications and standards for digital imaging, image processing, digital cinema (ACES), color management, CinemaDNG, high dynamic range, wide color gamut, UHDTV, video compression and metadata. He holds over 30 patents in these areas and is active in SMPTE, ISO and ICC standards development.
Ed ReussLars Borg
09:00 Advances in fully immersive theatrical sound mixing workflows
Tom Graham and Rich Nevens (Avid, USA); Jonathan Wales (Sonic Magic, USA)
Today's sound supervisors and sound designers face an increasingly challenging production environment, with more complex sessions, a greater number of deliverables, faster project turnarounds and growing expectations for simultaneous audio and video production. This session will explore the latest technologies, techniques and workflows that enable sound teams to overcome these challenges and maximize creativity. Integrated audio and video workflows enable the sound team to be involved much earlier in the process, working directly with the director and film editor immediately after principal photography to maximize flexibility and creativity. Technologies that enable sound teams to continually evolve mixes, perform multiple tasks in a single session, easily conform changes, and quickly offline-render stems (resulting in more creative time) will be explored, as well as techniques for mixing in 7.1 and Dolby Atmos and delivering multiple formats simultaneously.
Presenter bio: Thomas Graham is a 19-year veteran of Avid and specializes in Post Audio and Professional Mixing in the marketing department. He has freelanced on over 40 feature films as an orchestral scoring recordist/editor, such as Collateral, Ice Age, Star Trek Nemesis, Solaris, Serenity, and The Spirit to name a few. He has also worked on several major label album projects with artists such as Ray Charles, Quincy Jones, Brandy and Al Jareau. Graham holds a bachelor of fine arts degree in Sound Recording Technology from the State University College of New York at Fredonia and worked as an ‘old school’ analog recording engineer for several years afterwards in the New York area before making his home in Los Angeles in 1992.
Presenter bio: Rich holds a BSc in Electrical Engineering and worked at Post Logic Studios before joining Euphonix when it came out of a garage in Palo Alto in the early '90s. He became VP of Sales, then left to join Digidesign just prior to the launch of ICON in 2004. He is currently Director of the Worldwide Pro Audio Solutions team at Avid.
Presenter bio: After producing records in London, Jonathan moved to Los Angeles to pursue his dream of working on movies. Having worked at Universal Studios, Jonathan's passion for pushing the limits of technology left him wanting to build his own facility. Thus Sonic Magic Studios was born; it has since become one of the foremost independent sound facilities. Jonathan is an extremely sought-after re-recording mixer and combines this passion with a deep desire to push forward the boundaries of innovation.
Tom Graham, Rich Nevens
09:30 Options for Camera Raw in the Digital Workflow
Keith Hogan (Pixspan, Inc., USA)
Raw camera data is an important component of the digital workflow. But with higher resolutions, the ability to get all of the sensor information out of the camera is challenged by the speed of existing interfaces. Additionally, the cost of storing this data for high-shooting-ratio projects is prohibitive. This paper explores the workflow options available for compressing Bayer/mosaic-pattern data. Topics will include the data layout of the Bayer pattern for specific cameras and the resulting interface speed requirements for different resolutions, the compressibility of Bayer-pattern data vs. debayered DI images (entropy differences between camera raw and DI images and the impact on compression ratio), the dynamics of uploading camera raw from on-set to the cloud, and options for applying compression at various stages of the production and post-production workflows.
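The entropy argument can be demonstrated in a few lines: lossless coders typically do better when the Bayer mosaic is split into its color planes, because interleaving samples with different per-channel statistics breaks spatial correlation. A sketch under simplified assumptions, with zlib on synthetic 8-bit data standing in for a real Bayer-aware codec:

    # Illustration of Bayer-plane separation improving lossless compressibility.
    # zlib on synthetic 8-bit data stands in for a real camera-raw codec here.

    import zlib
    import numpy as np

    rng = np.random.default_rng(2)
    h, w = 256, 256
    # Synthesize a smooth scene, then sample it through an RGGB mosaic
    # with different per-channel gains (as a color filter array would).
    y, x = np.mgrid[0:h, 0:w]
    mosaic = 128 + 60 * np.sin(x / 17.0) * np.cos(y / 23.0)
    mosaic[0::2, 0::2] *= 0.8   # R sites
    mosaic[1::2, 1::2] *= 0.6   # B sites
    mosaic = (mosaic + rng.normal(0, 2, mosaic.shape)).clip(0, 255).astype(np.uint8)

    interleaved = len(zlib.compress(mosaic.tobytes(), 9))
    planes = b"".join(mosaic[dy::2, dx::2].tobytes()
                      for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1)))
    separated = len(zlib.compress(planes, 9))
    print(f"interleaved: {interleaved} bytes, plane-separated: {separated} bytes")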
Presenter bio: Keith Hogan is the CTO for Pixspan, where he develops technology for making high resolution images as small as possible. He is currently working with studios and post production houses to advance the workflow for video storage and transmission. He has a background in Big Data and search as a former VP at Ask.com, as well as a Director of Software Development for networking products with Motorola/Telogy. Keith lives and works in the Washington D.C. area.
Keith Hogan

10:30 - 11:00

Coffee Break

Room: Exhibit Hall

11:00 - 12:30

File Based Workflows - Part 2: Meaningful Media Management - Taking Us to the Second Screen and Beyond!

Room: Salon 1
Chair: Sara Kudrle (Grass Valley, a Belden Brand, & SMPTE Western Region Governor, USA)

As our workflows extend to other devices and second screens, we need to get an even better handle on managing our media. This session will start with a method of facilitating complex workflows by first identifying our media in a meaningful manner. From there, this session will expand and explore the management and migration of media to other devices and second screens.

11:00 Identifying Media in the Modern Era - The Domains of Media Identity
Steven Posick (ESPN Inc., USA)
Identity is the property of an object that distinguishes it from all other objects within a domain, a domain being a physical or logical system by which objects are bounded. Objects may exist within many domains simultaneously; however, the object must possess an Identity for each domain to which it is bound. Media, like any other object, may also be bound to many domains, each domain representing or defining some aspect of that Media. For instance, Media can exist within a physical domain, where each Media instance represents a unique physical asset or file, while simultaneously belonging to a logical domain, in which each Media instance represents a dataset describing a unique set of sights and sounds, without a physical representation. These Domains of Media Identity define the relationships that connect real-world events to physical Media and provide the various groupings required to facilitate complex Media workflows.
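One way to picture the model (a hypothetical sketch, not the SMPTE ST 2071 data model itself): an object carries one identifier per domain it is bound to, and identity comparison is only meaningful within a single domain.

    # Hypothetical sketch of per-domain media identity.
    # Not the SMPTE ST 2071 data model; just the concept described above.

    class MediaObject:
        def __init__(self, **identities):
            # e.g. physical file identity vs. logical "same sights and sounds" identity
            self.identities = identities      # domain name -> identifier

        def identity_in(self, domain):
            return self.identities.get(domain)

        def same_as(self, other, domain):
            """Identity comparison is only defined within a single domain."""
            a, b = self.identity_in(domain), other.identity_in(domain)
            return a is not None and a == b

    mezzanine = MediaObject(physical="file://san/ep101_v2.mxf", logical="urn:ep101")
    proxy     = MediaObject(physical="file://san/ep101_proxy.mp4", logical="urn:ep101")

    print(mezzanine.same_as(proxy, "logical"))    # True: same sights and sounds
    print(mezzanine.same_as(proxy, "physical"))   # False: different assets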
Presenter bio: Steven Posick, associate director, Enterprise Software Development, joined ESPN in 1995. He is a veteran senior systems architect, designer, developer, and security professional, with more than 24 years' experience in Information Technology and a 10-year focus on media identity, management, and control. His responsibilities have included the management of production workflow application development, broadcast control systems, broadcast system security and the development of open standards. Steven has participated in several SMPTE committees as an Ad Hoc Group chair and/or document editor, including those for the recently published SMPTE standard for Media Device Control over Internet Protocol Networks (SMPTE ST 2071), the Archive eXchange Format, and the Study Group on Media Production System Network Architectures.
Steven Posick
11:30 Today's smarter workflow: managing and delivering assets to all devices
Petter Jakobsen (CTO, Vizrt, USA); Isaac Hersly (Vizrt Americas, USA)
The media landscape, with second-screen and online delivery increasingly significant, has changed dramatically. Along with these changes comes the need to fundamentally alter media asset workflows. Managing content from a central repository, where operators can produce files for broadcast, mobile and web in one seamless workflow, is the future. This paper will detail how integrated web-based technologies allow asset delivery to any platform in one workflow, enabling content packaging and optimized delivery speed while wrapping in branding and graphics. We'll show technologies that allow content updating right up to the point of transmission, empowering journalists' creativity. Techniques such as storing the edit decision list (EDL) and graphic information as metadata, sending the video and graphics playlist to the control room, and playing the final piece back in real time on-air, automatically sized and distributed online and to mobile devices, dramatically save storage space and time.
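The EDL-as-metadata technique might be sketched as follows; the structure is hypothetical and is not Vizrt's actual schema. The piece is described once as data, and per-platform renditions are resolved at delivery time rather than stored as separate flattened files.

    # Hypothetical EDL-and-graphics-as-metadata sketch (not Vizrt's schema).
    # The piece is stored once; per-platform renditions resolve from metadata.

    import json

    piece = {
        "edl": [
            {"clip": "goal_cam2", "in": "00:12:03:10", "out": "00:12:08:02"},
            {"clip": "celebration", "in": "00:12:30:00", "out": "00:12:41:15"},
        ],
        "graphics": [{"template": "lower_third", "text": "Match Report", "at": "00:00:01:00"}],
        "renditions": {"broadcast": "1080i50", "web": "720p", "mobile": "360p"},
    }

    def playlist_for(target):
        """Resolve one delivery target from the shared metadata."""
        return {"format": piece["renditions"][target],
                "events": piece["edl"],
                "graphics": piece["graphics"]}

    print(json.dumps(playlist_for("mobile"), indent=2))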
Presenter bio: In 1999, Hersly established what is now the Vizrt Americas organization. Prior to that, he was President and COO of a publicly traded company that developed graphics and similar production tools for the broadcast market. Until 1986, Isaac was VP Engineering of the ABC owned television stations.
Isaac Hersly
12:00 Remote Content Access and the Rise of the Second Screen
James Stellpflug (EVS, VP Sports Products - Americas, USA); Stephane Houet (EVS, Product Manager, Belgium)
Technological advances are allowing operators, editors and producers to remotely access media production servers for content editing and repackaging for distribution and archiving - in real time, anywhere in the world. Advanced remote production gives unprecedented capability to work remotely, enhance content on the fly, and deliver original content to users' second screens. The paper will analyze the technology, challenges, results, and opportunities behind new capabilities that are changing the media landscape. Real-world examples will be explored, including multimedia distribution for the 2014 FIFA World Cup, which delivered live streams, multi-angle clips, stats, and social network feeds to viewers' connected screens through broadcaster apps. The complex World Cup workflow, encompassing live streaming of six HD camera angles and up to 24 multi-angle replays instantly pushed to a central cloud-based platform, access to 3,000 hours of stored content, and on-the-go transcoding and distribution to FIFA's Media Rights Licensees (MRLs), served an estimated 50 million downloaded apps.
Presenter bio: James has more than 20 years of industry experience in facility design and integration, mobile satellite communication, and mobile TV production, with roles that had him overseeing several major technology transitions, including the move from traditional video production to the world of file-based workflows and technologies. James has been with EVS Broadcast Equipment since 2000.
James Stellpflug

Higher Frame Rates

Room: Salon 2
Chair: Jim DeFilippis (TMS Consulting, USA)

This session will address the question, "Is Faster Better?". The papers will walk us through the challenges, benefits and solutions when working at frame rates beyond 60Hz, including both video and "HFR" cinema formats. It will address frame rate conversion from 120fps to 50, 60(59.94), and 100 fps video formats, as well as 60, 30, and 24 fps d-cinema formats.

11:00 120fps as a Universal Production Format for Motion Pictures
David Richards (Moving Image Technologies, USA)
HFR (High Frame Rate) has emerged as a release format for movies like The Hobbit, Avatar 2, and others. This paper presents a method of photography compatible with release in multiple formats, including standard 24 and 30fps in addition to HFR. The paper describes a technique to enable artifact-free output of 60fps, 30fps, and 24fps masters from the same original photography. In addition, the method also allows the selection of the desired effective shutter angle after principal photography is completed. In this way the creative team can trade off different levels of temporal aliasing versus motion blur on a scene-by-scene basis, should any problems show up in post. Some people are attempting to claim intellectual property in connection with similar processes. The author desires to present these methods in an open industry forum to forestall such attempts.
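The arithmetic behind such a method is straightforward: at 120fps, each 24fps output frame spans five captured frames, so averaging k of those five after the fact synthesizes an effective shutter angle of k/5 x 360 degrees. A sketch of that averaging follows (assuming the 120fps capture itself used a near-360-degree shutter; this illustrates the principle, not the author's exact method):

    # Post-hoc shutter-angle selection from 120 fps photography (sketch).
    # Each 24 fps frame spans 5 captured frames; averaging k of them
    # approximates a k/5 * 360-degree shutter. 60 fps uses groups of 2, 30 fps of 4.

    import numpy as np

    def derive_24fps(frames_120, k=3):
        """frames_120: array of shape (n, h, w); returns n//5 frames at 24 fps."""
        assert 1 <= k <= 5
        n = len(frames_120) // 5 * 5
        groups = frames_120[:n].reshape(-1, 5, *frames_120.shape[1:])
        out = groups[:, :k].mean(axis=1)         # average the first k of each 5
        print(f"effective shutter angle: {k / 5 * 360:.0f} degrees")
        return out

    clip = np.random.default_rng(3).random((120, 4, 4))   # 1 second of 120 fps
    masters_24 = derive_24fps(clip, k=3)                  # ~216-degree shutter
    print(masters_24.shape)                               # (24, 4, 4)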
Presenter bio: David Richards has over 20 years of experience in the cinema industry. He spent six years in engineering and engineering management positions at Christie Digital Systems in Cypress, California, prior to being a co-founder of Moving Image Technologies in 2003, where he serves as Vice President of Engineering. Mr. Richards has been active in the Society of Motion Picture and Television Engineers (SMPTE) since 1986, serving on several engineering committees and as an officer in the Hollywood Section. He is Past Chair of the SMPTE Hollywood section ('96-'97), and was Program Chair for the first and second SMPTE Film Conferences, held in 1997 and 1998. He continues to participate in SMPTE engineering work including the D-Cinema 21DC Committee and Film 20F Committee. He chaired the (now obsolete) P3 Projection Technology Committee from 2005 to 2009. He is the author of several papers for SMPTE conferences as well as articles for various trade publications.
David Richards
11:30 High Frame Rate Video Conversion
Paola Hobson (InSync Technology Ltd, United Kingdom)
SMPTE is discussing high frame rates within the UHDTV Study Group, and in particular conversions from high frame rates to today's integer and fractional standards. Although up- and down-conversion between frame rates seems an easy problem when there is a simple multiplier, e.g. between 119.88Hz and 59.94Hz, in this paper we show that adequate quality cannot be obtained by simple frame doubling or frame dropping. We present a high-quality, low-complexity frame rate conversion method, suitable for all high frame rate conversions, e.g. 120Hz to 59.94Hz or 50Hz to 120Hz, thereby obviating the need to include fractional frame rates in the UHDTV standard.
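Why simple frame dropping falls short is easy to see numerically: nearest-frame mapping is clean for an exact 2:1 ratio such as 119.88Hz to 59.94Hz, but a conversion such as 120Hz to 50Hz yields an irregular pick pattern that appears as judder. A small sketch of that arithmetic:

    # Nearest-frame mapping between frame rates (illustrative): exact for
    # integer ratios, irregular otherwise -- which is why real converters
    # use motion-compensated interpolation rather than frame dropping.

    def pick_pattern(src_hz, dst_hz, n_out=12):
        """Source frame index chosen for each output instant by nearest-neighbor."""
        return [round(i * src_hz / dst_hz) for i in range(n_out)]

    print(pick_pattern(119.88, 59.94))    # [0, 2, 4, ...]: a clean 2:1 cadence
    picks = pick_pattern(120, 50)         # 120 Hz source, 50 Hz output
    print([b - a for a, b in zip(picks, picks[1:])])  # steps 2,3,2,3,...: judder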
Presenter bio: Paola Hobson has extensive experience in communications and media industries, including public safety and consumer mobile products, and professional broadcast systems. She has a track record in delivery of innovative new products and services, strategic partnership development, and business growth. Paola joined InSync from Snell where she was Senior Product Manager, and prior to that was Manager of Applications Research Centre at Motorola. Paola’s key skills are in requirements-driven business case development for new products and services with specific focus on financial and market analysis, as well as support of sales and marketing teams in customer demonstrations and exhibitions. Paola holds BSc, Ph.D and MBA degrees.
Paola Hobson
12:00 Quality Advancements and Automation Challenges in file-based Conversion: Noise-Reduction, Deinterlacing, High Frame Rates, and Compression Efficiency
Keith Slavin and Chad Fogg (ISOVIDEO LLC, USA)
The broadcast industry has been gradually transitioning from SDI-based hardware production to much more flexible file-based processing. As IP infrastructures continue to expand, IP-based studio production is on the horizon. Conventional SDI-based processing falls short in terms of processing flexibility, functional capacity, expandability and file workflow integration. Some newer file-based processing also falls short in one or more of the following aspects: processing quality, processing delay, functional capacity, automation and efficient scalability. This paper gives an overview and brief discussion of an innovative clustered server system and its applications, with emphasis on advancements in various functional capacities. It also briefly discusses transitioning to HFR 1080p production while migrating to UHD. Videos demonstrating processing quality are also presented.
Presenter bio: Keith Slavin is a founder and CTO of isovideo LLC, leading the company's effort in developing professional-quality, GPU-accelerated, software-defined video processing technologies. These include Viarte, an industry-first professional-quality stream- and file-based 4K/UHD/HD/SD standards conversion/transcoding/transport server cluster that has performed sophisticated, high-quality, automated and extremely flexible video processing tasks - such as frame rate conversion, deinterlacing and inverse telecine with mixed/broken cadence handling, scaling, denoising and transcoding - for the film, TV broadcast, and digital media industries since 2011. Prior to isovideo, Keith spent over 30 years working as an engineer at several companies, including BBC Engineering Research, and as a principal engineer at Tektronix Inc., Micron Technology, and Nvidia. He won two Emmy awards for outstanding contributions to the VM700 video measurement system and the Profile Disk Recorder at Tektronix, as well as one of five IABM/NAB 2013 Game-Changer Awards for Viarte. Keith holds 50 US patents in fields ranging from digital signal processing, encryption, error correction and non-linear correction to computer architecture, compilers, user interfaces, and control systems. He published the peer-reviewed mathematical paper "Q-Binomials and the Greatest Common Divisor" in Integers: The Electronic Journal of Combinatorial Number Theory, and more recently published or co-authored several technical conference papers on digital cinema and on deinterlacing video prior to H.264 and HEVC compression.
Keith Slavin

12:30 - 14:00

Fellows Luncheon (Fellows Only-Ticket Required)

Room: Solano Canyon
12:30 Fellows Luncheon
Darcy Antonellis (Vubiquity, USA)
Chief Executive Officer
Presenter bio: Darcy serves as Vubiquity's Chief Executive Officer. Vubiquity is the largest global provider of multiplatform video managed services and technical solutions, serving clients in 40 countries that reach more than 100 million households in numerous local languages. Vubiquity works closely with both content owners and service providers to enable anytime, anywhere access to content. With expertise, direct content licensing capability and technology platforms, Vubiquity helps its customers connect to and monetize opportunity in the areas of video-on-demand, subscription services, Electronic Sell-Through, linear content delivery, TV Everywhere, advanced advertising, and data services. Prior to joining Vubiquity, Darcy was President, Technical Operations and CTO at Warner Bros. Entertainment Inc., a post she held from January 2008. In this role, she influenced and directed Warner Bros.' technology strategy and vision to leverage growth and new business opportunities, often associated with advancements in media- and entertainment-related product and service innovations. Before joining Warner Bros., Darcy worked for CBS in a number of lead management roles encompassing responsibility for the Network's operations in New York and for the CBS News Bureau in Washington, D.C. Darcy is a veteran of three Olympics for CBS Sports and a three-time Emmy winner for technical production and engineering; she also worked in news, heading up D.C. operations, with assignments that included being stationed in Saudi Arabia and Kuwait during the Gulf War. A holder of patents in multiple countries, she serves on advisory boards of technology companies and universities. She is also a board member of the Global Advisory Council for the professional Women's Tennis Association (WTA), providing counsel on new media and technology. Her industry awards include Broadcasting & Cable's Technology Leadership Award, Video Business' Women Elite, The Hollywood Reporter's Digital Power 50 List, the inaugural NAB TVNewsCheck Women in Technology Leadership Award and India's NASSCOM Innovator Leader Award. Darcy holds an MBA with a concentration in finance from Fordham University and honors for work in the field of electrical engineering from her undergraduate alma mater, Temple University.
Darcy Antonellis

14:15 - 15:45

Cloud case studies - the reality of virtualisation

Room: Salon 1
Chair: Richard J Welsh (Sundog Media Toolkit, United Kingdom)

Deployment of content services in the cloud has moved rapidly from "if and when" to "how". In this session we will explore a wide range of real world implementations of cloud based services, investigating the important topics of integration, deployment and security. From production to real-time delivery of live content, we will lift the lid on practical roll-out of services, learn about the technical challenges faced and hear how they were overcome. We will cover all elements of cloud architecture including public vs private infrastructure, applications and networks, service monitoring, scaling systems and security controls. As the media industry migrates services and technology to the cloud, we hope to answer the big questions of performance and protection.

14:15 Alternate to Big Cloud Providers: Case Studies on Private/Hybrid Cloud Use
Brian Campanotti (Front Porch Digital, USA)
While our industry moves to embrace the dream of the "cloud," a pragmatic approach must be taken. While public cloud solutions continue to gain traction, purpose-built private cloud services focused on Service Level Agreement (SLA) characteristics are finding a successful niche. This paper will highlight customer stories where purpose-built, private cloud solutions helped content owners and media organizations benefit from the cloud while ensuring protection, accessibility and unmatched security for their most valuable assets. It will also specifically address where, in the overall media lifecycle, the scale and cost benefits of public cloud offerings are in their "sweet spot."
Presenter bio: Brian Campanotti is the Chief Technical Officer for Front Porch Digital, leading industry invention and advancement in cloud-based and on-premises global content storage management (CSM), media asset management (MAM) and content publishing, migration and preservation solutions. He is responsible for innovations in the area of cloud-based solutions for “big data” focused on media-centric content handling, delivery, storage and preservation. He was one of the primary inventors of the Archive eXchange Format (AXF) and has been active in standards body activities helping to promote innovation and openness in the industry for more than two decades. Mr. Campanotti and his team have won Emmy® Awards for their work in content collection preservation technologies and for innovation in serial digital video technology. Mr. Campanotti has founded several start-ups and began his career at the Canadian Broadcasting Corporation (CBC) in Canada and holds a degree in Electrical Engineering from the University of Toronto.
Brian Campanotti
14:35 End-to-End Live Streaming Platform for Second Screen Combining Multi-Camera Capture, High Speed Transport, and Cloud Video Processing
Michelle Munson (Aspera, USA)
This World Cup introduced the first large scale system for high-resolution end-to-end live streaming to second screens. The system (by EVS) is delivering premium live coverage online and on mobile devices. "Second screen" is not new to global sport but this architecture is a first: the live video feeds captured from multiple camera angles are transferred in real-time using high-performance WAN transport (Aspera) from Brazil to the cloud in Europe (AWS) for real time processing into multiple protocols through a scale-out cloud video platform (Elemental). We will describe the architecture and APIs of the WAN transport to ensure timely and reliable delivery of the live video feeds, the auto scaling software supporting the availability requirements for the load, and the challenges posed by cloud storage delays in this real-time environment. Statistics from the event output by a new analytics platform will characterize performance, network conditions, and usage.
Presenter bio: Michelle Munson, President and co-founder of Aspera, Inc., is co-inventor of the core technology and responsible for overseeing the company's direction. Aspera specializes in creating innovative data transport technologies that solve the fundamental problems of network data delivery. Before founding Aspera in 2003 with Romanian colleague Serban Simu, she was a software engineer at several research and start-up companies, including the IBM Almaden Research Center in San Jose, California. A Fulbright Scholar, Ms. Munson holds B.S. degrees in electrical engineering and physics from Kansas State University, as well as a master's in computer science from the University of Cambridge.
Michelle Munson
14:55 Securing Media Content and Applications in the Cloud
Bhavik Vyas (Amazon Web Services & SMPTE Member, USA); Usman Shakeel (Amazon Web Services, USA)
Moving to the cloud? Do you know how to best secure your assets and applications? For media companies, security is paramount. Few things can more directly impact your company's bottom line. As the move to store, process, and distribute digital media via the cloud continues, it is imperative to examine the relevant security implications of a multi-tenant cloud environment. This talk is intended to answer questions around securely storing, processing, distributing, and archiving digital media assets in the AWS environment. The talk also covers the security controls, features, and services that AWS provides its customers. Learn how AWS aligns with the MPAA security best practices and how media companies can leverage that for their media workloads.
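As one concrete, hedged illustration of the kind of control the talk covers, server-side encryption can be requested per object when writing to Amazon S3 (shown here with the boto3 SDK; bucket, key and file names are hypothetical, and credentials are assumed to come from the environment):

    # Requesting server-side encryption for a media asset written to S3.
    # Bucket, key, and file names are hypothetical; AWS credentials are
    # assumed to be available from the environment.

    import boto3

    s3 = boto3.client("s3")
    with open("reel1_proxy.mxf", "rb") as f:          # hypothetical local asset
        s3.put_object(
            Bucket="studio-mezzanine-archive",        # hypothetical bucket name
            Key="features/reel1_proxy.mxf",
            Body=f,
            ServerSideEncryption="aws:kms",           # envelope encryption via AWS KMS
        )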
Presenter bio: Bhavik has worked in the IT and communications field for over 15 years, at leading technology companies like HP, Agilent, Reliance Communications and Aspera. Bhavik started his career in product management at HP in Scotland testing GSM & CDMA cellular networks, and has held a variety of sales engineering, business development and product marketing roles. Bhavik spent ~4 years at Aspera (recently acquired by IBM), as the Director of Cloud Services & Partnerships, where he managed relationships with partners like AWS, EMC, HP, Microsoft, IBM and Adobe and worked with leading M&E companies like Netflix, Amazon Instant Video, Sony, WB, UFC, UEFA and Deluxe in deploying cloud solutions. He joined AWS in mid 2012 where he is now responsible for the AWS M&E ISV Partner Ecosystem. Bhavik has a B.Eng. (EE) degree from Heriot-Watt University, Edinburgh; and an MBA from Golden Gate University, San Francisco.
Presenter bio: Usman Shakeel has over 15 years IT experience working in various development, architecture and sales roles. Usman is currently a Principal Solutions Architect at Amazon WebServices and has been with AWS for over 4.5 years. Usman's key focus is on developing and architecting solutions for use cases common in the Media industry as well as Digital Content Security in the cloud. Usman has been involved in several large scale projects involving lift and shift as well as new implementations of Media workflows on AWS for some of the major Hollywood Studios. Usman has a M.Math. (CS) degree from The University of Waterloo, ON.
Bhavik VyasUsman Shakeel
15:15 Is the future of content protection cloud(y)?
Eric Diehl (Technicolor, France)
As cloud computing continues to evolve and become a critical element in the production and post-production processes of the movie industry, new strategies for protecting content will have to emerge. The lack of sufficient content protection may hinder the adoption of the cloud in the industry, which could deprive the community of the many operational and financial benefits that this new approach to delivering technology services can offer. This presentation analyzes the associated risks. The first part of the presentation will be a brief introduction to the different cloud architectures, such as public cloud, private cloud and hybrid cloud, with a rough estimation of their respective exposure to risk. The second part of the presentation will address the major threats associated with the cloud, such as data breaches, account hijacking, denial of service, and malicious insiders. Most of these threats are generic to any cloud application. Nevertheless, content protection has some particular challenges. While many people believe that the cloud increases exposure to risk, there are elements of the cloud that allow organizations to improve their security posture. We will review these cloud-based security improvements for the audience. The last section will explore a particular type of architecture: hybrid cloud. This type of architecture is suitable for the distribution of content such as screeners. It carries all the benefits of public cloud while ensuring a high level of security for pre-theatrical content. After this talk, the audience may decide whether this future will be cloudy or sunny.
Presenter bio: Eric Diehl received his engineering degree from the École Nationale Supérieure d'Électronique et Radioélectricité de Grenoble (ENSERG) in 1985. In 1987, he joined THOMSON Corporate Research, where he worked in the fields of Pay TV, security, home networks and multimodal user interfaces. In 1998, he took over management of Rennes's security laboratory, and in 2009 he led the Security & Content Protection labs, a team of 30 experts that designed technologies such as advanced key management, audio and video watermarking, video fingerprinting, network security, and secure coding. These technologies, under the brand name ContentArmor™, are applied all along the digital video chain. Since 2012, he has been VP of security business services at the Technicolor Security Office, and VP of security systems & technology at Technicolor Technology & Research. He has filed more than 90 patents in security, Pay TV, and user interfaces. He has published many papers and a book dedicated to DRM (Securing Digital Video, Springer, 2012), and is writing a second (Ten Laws of Security, Springer, to be published in 2015). He blogs regularly at http://www.eric-diehl.com/.
Eric Diehl

Display Technologies: Where Do We Go From Here (And How Do We Measure What We've Already Got?)

Room: Salon 2
Chair: Peter H Putman (ROAM Consulting LLC, USA)

It's all about the display! These are very interesting times for display technology, with UHDTV taking the stage, LCD panel prices plummeting, and consumers watching on everything from smartphones and tablets to ever-larger flat-panel televisions. But plenty of challenges remain. In this session, we'll learn about the myriad display performance measurements and what they represent (such as luminous energy, luminous power, luminous intensity, illuminance, luminance, luminous exposure, and luminous efficacy!). The topic of next-generation display interfaces will also be addressed, covering the latest versions of HDMI, DisplayPort, and the numerous variations on each standard. We'll wrap things up with a discussion of quantum dots, a new backlighting system for achieving the wider color gamuts required for UHDTV.

14:15 A Tutorial on Photometric Dimensions and Units
George Joblove (Consultant, USA)
Numerous units are used to measure light, representing a variety of photometric dimensions. In the study of image capture, reproduction, and display, the differences between these various dimensions are important but often a source of confusion. This tutorial enumerates and describes these dimensions and their corresponding units, and explains their appropriate applicability and usage. These dimensions include luminous energy, luminous power, luminous intensity, illuminance, luminance, luminous exposure, and luminous efficacy. The SI base unit, the candela, is explained; the derivations of the SI units for the other photometric dimensions are described and contrasted with their corresponding radiometric units.
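A worked example helps keep the dimensions straight: a point source of luminous intensity I candela produces illuminance E = I/d^2 lux at distance d meters, and a matte (Lambertian) surface of reflectance rho lit by E lux has luminance L = E*rho/pi cd/m^2. A short sketch with values chosen for round numbers:

    # Worked photometric example: intensity -> illuminance -> luminance.
    # Point source: E = I / d^2 (lux, with I in candela, d in meters).
    # Lambertian surface: L = E * rho / pi (cd/m^2).

    import math

    I = 1000.0      # luminous intensity, candela
    d = 2.0         # distance, meters
    rho = 0.18      # 18% gray reflectance

    E = I / d**2                 # 250 lux falling on the surface
    L = E * rho / math.pi        # ~14.3 cd/m^2 leaving it

    print(f"illuminance: {E:.0f} lx, luminance: {L:.1f} cd/m^2")
    # For contrast, luminous efficacy ties lumens to watts: an ideal
    # monochromatic 555 nm source yields 683 lm/W by definition of the candela.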
Presenter bio: George Joblove has played key and pioneering roles in the development and application of digital technology to the entertainment industry throughout his career, and currently advises technology- and media-oriented clients as an independent consultant. He has three decades’ experience in the strategizing, management, and development of technology in the service of the art, craft, and business of visual entertainment. He served as Executive Vice President of Advanced Technology at Sony Pictures Entertainment, where his focuses included facilitating the application of digital asset management, 4K, and 3-D across production, postproduction, and distribution. Previously he was at Sony Pictures’ visual-effects and animation unit, Imageworks, where he served as Chief Technology Officer. Prior affiliations include Industrial Light & Magic (where he co-founded and led its digital-effects department) and Warner Bros. George received a Scientific and Engineering Academy Award in 1994. He has two patents in the field of 3D photography and cinematography. George is a member of the Academy of Motion Picture Arts and Sciences, and served as Co-Chairman of its Science and Technology Council. He is also a member of SMPTE and VES, and an associate member of the American Society of Cinematographers. He holds a B.S. in computer science, and an M.S. in computer graphics, both from Cornell University.
George Joblove
14:45 Next-Generation Display Interfaces
Peter H Putman (ROAM Consulting LLC, USA)
The increasing popularity of tablets and smartphones - a/k/a "bring your own devices," or BYODs - has led to the development of a new type of display interface - one that can transport not only video and audio, but serial data (USB, PCI Express), Ethernet, control signals, and phantom power. Because space is at a premium on new ultra-thin notebooks, Chromebooks, tablets, and smartphones, these interfaces are very small and connect with as few as five pins. This paper will discuss the trend toward smaller, faster, and denser display interfaces and identify the different formats currently in use on BYODs and notebooks, along with the signal formats they support.
Presenter bio: Pete Putman is a technology consultant to Kramer Electronics USA, engaged in product development and testing, technology training, and educational marketing programs. Pete is also a contributing editor for Sound and Communications magazine, the leading trade publication for commercial AV systems integrators. He publishes HDTVexpert.com, a Web blog focused on HDTV, digital media, wireless, and display technologies. Pete holds a Bachelor of Arts degree in Communications from Seton Hall University, and a Master of Science degree in Television and Film from Syracuse University. He is an InfoComm Senior Academy Instructor for the International Communications Industries Association (ICIA), and was named ICIA's Educator of the Year for 2008. He is a member of both the Society of Motion Picture and Television Engineers (SMPTE) and the Society for Information Display (SID).
Peter H Putman
15:15 Quantum dots and Rec. 2020 - bringing the color of tomorrow closer to reality today
Jimmy Thielen (3M Company, USA); James Hillis (3M, USA); John Van Derlofske, Dave Lamb and Art Lathrop (3M Company, USA)
The International Telecommunication Union (ITU) has recommended a television broadcasting standard (Rec. 2020) for ultra-high definition (UHD) television that is aimed at providing a better visual experience. It recommends a color gamut that exceeds previous broadcasting standards and is currently only achievable by laser-based display technologies. Quantum-dot-enabled LCDs provide one alternative with the potential to meet Rec. 2020's standard color gamut while taking advantage of existing manufacturing capacity. We examined how existing quantum dot and LCD technology could be optimized to meet the Rec. 2020 color standard. Our analysis revealed that up to 94% gamut coverage can be achieved.
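The flavor of such a gamut analysis can be shown from the CIE 1931 xy primaries alone, using shoelace triangle areas; note that published coverage figures such as the 94% above are computed more carefully, typically as an intersection area in a more perceptually uniform chromaticity space. A minimal sketch:

    # Chromaticity triangle areas for BT.709 vs. BT.2020 (CIE 1931 xy).
    # Rigorous "coverage" uses the intersection area, often in u'v' space;
    # this sketch only compares raw triangle areas.

    def triangle_area(p):
        (x1, y1), (x2, y2), (x3, y3) = p
        return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

    REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
    REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

    ratio = triangle_area(REC709) / triangle_area(REC2020)
    print(f"BT.709 triangle is {ratio:.1%} the area of BT.2020's")   # ~52.9%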
Presenter bio: Bachelor's degrees from the University of St. Thomas in Electrical Engineering and Physics. Master's degree from the University of Minnesota in Management of Technology. Nine years' experience in the LED and display industries at 3M. Four published patents. Publications in the American Journal of Physics, the Society for Information Display Symposium, and Photonics West Proceedings.
Jimmy Thielen

15:45 - 16:15

Coffee Break

Room: Exhibit Hall

16:15 - 17:45

Developments in Audio Technology, Part 3: Diving into the Details

Room: Salon 1
Chair: Jerry C Whitaker (Advanced Television Systems Committee, USA)

We've come a very long way in moving digital audio from a new, complex technology to an essential element of everyday life. With all of the progress made so far, many challenges remain. This session will examine leading-edge work on audio data management and analysis, and consider what's next in advanced digital audio consoles. We will wrap up the audio session with a fascinating look at the origins of audio and video compression—technologies that literally reshaped content production and consumption. What does the past say about the future? We'll find out.

16:15 Utilizing Unique Information from File-based Media for Automated File Detection
Michael Babbitt (Dolby Laboratories, USA)
Currently, there are no reliable, automated methods to recognize program boundaries and accurately identify program content for the purposes of loudness measurement, air-checks, competitive analysis, etc., without overlapping or including portions of adjacent programs (primary media content) or other commercials (secondary media content) that would skew the results and render them unreliable. The author proposes a reliable, automated method utilizing accepted broadcast methodologies and approaches, in addition to innovative out-of-band data carriage methods, to accurately detect and identify programs, providing a way to reliably discriminate between primary media content and secondary media content for the purposes of loudness measurement, competitive analysis and other common broadcast tasks. The carriage of data using out-of-band techniques eliminates the risk of reverse engineering or hacking, and can be leveraged to carry other types of data useful for broadcasters in achieving a high quality of service for both viewers and program partners.
Presenter bio: As the Senior Professional Support Manager at Dolby Laboratories, Mike Babbitt works with content providers, operators, networks and local stations to create, deploy, implement and understand the use of Dolby technologies and multichannel audio programs based upon their specific infrastructure and transmission requirements, as well as helping post production professionals understand and navigate network delivery requirements for broadcast content and implement Dolby technologies for both standard and high definition packaged and OTT media. Mike has assisted in the multichannel broadcast of many high-profile live events like the Grammy Awards, Major League Baseball games and NFL broadcasts, and has been recognized for this service several times by the Television Academy of Arts and Sciences, as well as sharing Emmy Awards for the development of Dolby E and the DP600 Program Optimizer. Mike also travels the world speaking on issues facing broadcast professionals like loudness control, audio metadata and Dolby technologies, and has led Dolby's training efforts for multichannel television program production. Mike has been in the audio industry for 30+ years and with Dolby Laboratories for more than 15.
Michael Babbitt
16:45 Have things calmed down?
J. Patrick Waddell (Harmonic Inc., USA)
Engineers are familiar with the Law of Unintended Consequences. This presentation tells the story of several unintended consequences related to audio which are in play today. The transition to digital delivery systems liberated audio from less than 40 dB of dynamic range. The creative community has been exploiting that increased dynamic range. That is the good news, but it is also the bad news. Without standards in place for digital operating points, sound mixers and producers have used levels varying over at least 30 dB. This has created consumer irritation. The industry realized these issues needed to be addressed. Some time ago, both the ATSC and the EBU established groups to provide guidance to the content creators and system operators. This presentation provides an assessment of both the ATSC and EBU documents and similar work done in other regions.
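The scale of that liberation is easy to quantify (my arithmetic, not the presenter's): a 16-bit digital channel offers a theoretical dynamic range of

    $20\log_{10}(2^{16}) \approx 96\ \mathrm{dB}$

versus the sub-40 dB of the analog plant, so program levels that wander over 30 dB fit the channel comfortably even while they irritate viewers.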
Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in 4 Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
J. Patrick Waddell
17:15 The Origins Of Audio and Video Compression: Some Pale Gleams From The Past; A Historical Perspective On Early Speech Synthesis and Scramblers
Jon D. Paul (Crypto-Museum, USA)
The paper explores the history that led to all audio and video compression. The roots of digital compression sprang from Dudley's speech VOCODER and a secret WWII speech scrambler. The paper highlights these key inventions, details their hardware, describes how they functioned, and connects them to modern digital audio and video compression algorithms. The first working speech synthesizer was Homer Dudley's VOCODER. In 1928, he used analysis of speech into components and a bandpass filter bank to achieve a 10:1 speech compression ratio. In 1942, Bell Telephone Laboratories' SIGSALY was the first unbreakable speech scrambler. Dudley, with Bell Labs, invented 11 fundamental techniques that are the foundation of all digital compression today. The paper concludes with block diagrams of audio and video compression algorithms to show their close relationship to the VOCODER and SIGSALY.
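To make Dudley's analysis idea concrete, a minimal channel-vocoder analysis stage might look like the sketch below (illustrative only: the band count, band edges and envelope rate are my assumptions, not the historical VOCODER's parameters).

    import numpy as np
    from scipy import signal

    def vocoder_analyze(x, fs, n_bands=10, f_lo=100.0, f_hi=3200.0, env_rate=50.0):
        """Split speech into bands; keep only the slow amplitude envelopes."""
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)      # log-spaced band edges
        hop = int(fs / env_rate)                           # envelope sample period
        smooth = signal.butter(2, env_rate / 2, fs=fs, output="sos")
        envelopes = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = signal.sosfilt(sos, x)
            env = signal.sosfilt(smooth, np.abs(band))     # rectify + low-pass
            envelopes.append(env[::hop])                   # decimate to env_rate
        return np.array(envelopes)                         # (n_bands, frames)

Transmitting 10 envelopes at 50 Hz instead of, say, 8,000 waveform samples per second is a 16:1 data reduction, the same order as Dudley's 10:1 figure.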
Presenter bio: Jon Paul is from Manhattan. He has an MSEE from City College of New York. Starting in 1968, Jon designed real-time spectrum and FFT analyzers. In 1972, he worked as Chief Engineer at Eventide, N.Y., where he designed some of the first analog and digital studio sound processors, such as digital delay lines. In 1983, Jon started Scientific Conversion, Inc. to consult in the fields of power electronics, digital audio and high-voltage power supplies. That year Jon also designed and manufactured the first 12 kW HMI ballasts for cinema lighting, introduced at the 1984 NAB and the Los Angeles Olympic Games. Since 1989, Scientific Conversion has focused on the research and manufacture of digital audio transformers for the broadcast, studio and cinema markets. Jon has written 3 AES papers about digital audio transformers. Jon holds seven US patents for energy-saving electronic ballasts and telecommunications. His US patent 5,051,799 was for the world's first digital microphone; starting in 2002, this patent was successfully litigated and licensed to all major mobile providers and handset manufacturers in the USA. Jon was an author of the AES-42 Standard for Digital Microphones. In 2008, Jon started the non-profit Paul Foundation to fund scholarships and endowments in the fine arts. The foundation now concentrates on funding research into Parkinson's disease, which recently led to a significant new laboratory model and three new lines of research. In 1980, he founded the Crypto-Museum, a collection of vintage posters, WWII technology, cipher and spy equipment. Jon is an internationally recognized expert, writer and speaker on the connections between WWII cipher machines and modern DSP, video and audio technology. Jon travels extensively in Europe and is an avid amateur photographer.
Jon D. Paul

UHDTV: Building The Plane In Flight

Room: Salon 2
Chair: Peter H Putman (ROAM Consulting LLC, USA)

Advancements in UHDTV continue as we "build the plane in flight." Even though consumers can already buy 4K TVs at reasonable prices and content producers and delivery systems are ramping up, not all of the parts of the 4K ecosystem are in place yet. In this session, we'll learn about a system to produce simultaneous 8K, 4K, and 2K video in real time from a single 4K camera. We'll also hear about the challenges of transporting 4K (12 Gb/s) video over single-link coax and how it could be accomplished. The session will wrap up with a discussion about viewing 4K and UHD content in a largely 2K world as the infrastructure for 4K evolves.

16:15 Development of Super Hi-Vision (8K) Baseband Processor Unit "BPU-8000"
Kenichiro Ichikawa (Japan Broadcasting Corporation, Japan); Seiji Mitsuhashi (NHK (Japan Broadcasting Corporation), Japan); Mayumi Abe (Japan Broadcasting Corporation (NHK), Japan); Akira Hanada (Japan Broadcasting Corporation, Japan); Mitsutoshi Kanetsuka (Content Creation Solution Business Div., Sony Corporation, Japan); Kohji Mitani (Japan Broadcasting Corporation, Japan)
We have developed a system that works in combination with a Sony F65 camera (equipped with an 8K image sensor) to produce simultaneous 8K video, 4K video, and downconverted HD video output signals in real time. We intend to use this device to facilitate the use of commercially available high-spec 4K equipment in 8K live broadcasts and program production.
Presenter bio: Kenichiro Ichikawa received his B.S. degree from Keio University in 2002 and his M.S. degree from Keio University in 2004. Following graduation, he joined NHK (Japan Broadcasting Corp.) and built his career as a video engineer through studio program production and live telecasts. He is currently involved in the development of Super Hi-Vision systems, particularly video and master control systems. He belongs to the Super Hi-Vision System Design & Development Division.
Kenichiro Ichikawa
16:45 Further Developments in 4K (12 GHz) Single-Link Coaxial Cable
Stephen H Lampen (Belden, USA)
4K video (UHD 3840x2160, or full 4096x2160) presents a significant challenge to copper coaxial cable. These signals represent many times the bandwidth of existing HD or even 3G video. This paper will outline the hurdles these 4K signals present, and some of the existing solutions, such as dual-link or quad-link cables. Included will be an analysis of the connectors for these ultra-high-bandwidth signals, and some suggested changes to standards that would be helpful in the development of this product line.
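For orientation (my arithmetic, not the paper's): a 2160p60 10-bit 4:2:2 signal carries four times the payload of a 1080p60 3G-SDI link, so the single-link serial rate is

    $4 \times 2.97\ \mathrm{Gb/s} = 11.88\ \mathrm{Gb/s} \approx 12\ \mathrm{Gb/s}$

With that bit rate the signal's fundamental sits near 5.94 GHz, which is why cable and connector behavior must be characterized far into the gigahertz range (hence the "12 GHz" of the title).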
Presenter bio: Steve Lampen has worked for Belden for twenty-one years and is currently Multimedia Technology Manager and also Product Line Manager for Entertainment Products. Prior to Belden, Steve had an extensive career in radio broadcast engineering and installation, film production, and electronic distribution. Steve holds an FCC Lifetime General License (formerly a First Class FCC License) and is an SBE Certified Broadcast Radio Engineer. On the data side he is a BICSI Registered Communications Distribution Designer (RCDD). In 2010, he was named “Educator of the Year” by the National Systems Contractors Association (NSCA), and in 2011 he was named “Educator of the Year” by the Society of Broadcast Engineers. His book, "The Audio-Video Cable Installer's Pocket Guide," is published by McGraw-Hill. His column "Wired for Sound" appears in Radio World Magazine. He can be reached at steve.lampen@belden.com.
Stephen H Lampen
17:15 Viewing 4K and UHD in an HD World
Josef Marc (Archimedia Technology Inc., USA)
HD infrastructure will not, and should not, convert to UHD/4K/8K overnight. The next few years will be experimental and educational. People will need to see, hear, and understand UHD/4K/8K video on HD screens, and HD video on UHD/4K/8K screens. Drawn from 1 1/2 years of real-world experience in production, postproduction, mastering, quality control, and exhibition, this paper details the practical implementation of SDI, HDMI, DisplayPort, and DVI with UHD/4K/8K video on HD screens, and HD sources on 4K/UHD screens.
Presenter bio: Before Josef Marc co-founded Archimedia Technology with Mark Gray and Chi-Long Tsang, he brought his expertise to bear at many other companies throughout his career. At Front Porch Digital and SAMMA, he designed media archives, asset management, mass digitization, and online video publishing systems. He led the technical aspects of launching Ascent Media's Verizon FiOS TV, and managed the project office for Sony's role in launching DirecTV. As a consultant to Sony Corp., he co-wrote a book on interactive TV and Web media. He designed archives at the United Nations International Criminal Tribunal for Rwanda, and hosted workshops for the Association of Moving Image Archivists. At the Sony Systems Integration Center, he managed installations of host broadcaster origination centers for CBS' Olympics broadcasts, the Game Show Network launch, JumboTron control room installations, etc. He was also the chief technology officer of ConnectOne, a triple-play competitive local exchange carrier offering IP video, telecommunications, and Web services. He is a member of SMPTE and the Association of Moving Image Archivists.
Josef Marc

18:00 - 18:30

Annual Membership Meeting

Room: Salon 1

Thursday, October 23

07:30 - 08:30

Morning Coffee

Room: Ray Dolby Ballroom Terrace

08:30 - 10:30

Asset Management-Part 1

Room: Salon 1
Chair: Paul Chapman (FotoKem Industries Inc., USA)
08:30 2014 Survey of Digital Storage in Professional Media and Entertainment
Thomas Coughlin (Coughlin Associates, USA)
Results from an online survey of SMPTE (and other media and entertainment) professionals, conducted from March to May of 2014, show trends in the use of digital storage in professional content capture, editing and post-production, content delivery, and archiving and digital preservation. The survey is compared with results from four prior surveys over the last six years to reveal the evolution of storage technology for professional video, including the increased use of cloud storage for post-production, distribution and archiving; the continued growth of flash memory in content capture; developing trends in content distribution; and the growing use of active archives.
Presenter bio: Tom Coughlin, President, Coughlin Associates, is a widely respected storage analyst and consultant. He has over 30 years in the data storage industry, with multiple engineering and management positions at companies such as Ampex, Polaroid, Seagate, Maxtor, Micropolis, Syquest, and 3M. Tom has over 60 publications and six patents to his credit. Tom is also the author of Digital Storage in Consumer Electronics: The Essential Guide, which was published by Newnes Press in March 2008. Coughlin Associates provides market and technology analysis (including reports on several digital storage technologies and applications, and a newsletter) as well as data storage technical consulting services. Tom is active with IDEMA, the IEEE Magnetics Society, the IEEE CE Society, and other professional organizations. Tom was Chairman of the 2007 Santa Clara Valley IEEE Section and is currently chair of the IEEE Region 6 Central Area. He was formerly Chairman of the Santa Clara Valley IEEE Consumer Electronics Society and the Magnetics Society. In addition to the IEEE and IDEMA, Tom is a member of SMPTE, ACM, APS, AVS and AAAS. Tom is the founder and organizer of the annual Storage Visions Conference, a partner to the annual Consumer Electronics Show, as well as the Creative Storage Conference that was recently held during the 2008 NAB. Tom is also an organizer for the Flash Memory Summit and the Data Protection Summit. He is also a Leader in the Gerson Lehrman Group Councils of Advisors. For more information go to www.tomcoughlin.com. Tom has a PhD in Electrical Engineering from Shinshu University in Nagano, Japan, and an MSEE and a bachelor's degree in physics from the University of Minnesota in Minneapolis.
Thomas Coughlin
09:00 LTFS Transforms LTO tape into Nearline Storage: Accelerating 4K Media Workflows
Tridib Chakravarty (StorageDNA, USA)
With increasing amounts of 4K content in digital file-based workflows, LTO tape has emerged as a cost-effective way to store raw, 4K content. However, LTO tape is typically an offline, non-accessible archive medium, and data on LTO must be restored prior to use. Restoring immense amounts of high-resolution data (UHD, 4K+) from LTO causes a bottleneck in file-based workflows. With LTFS technology, it is possible for applications to obtain free, open and direct access to content on LTO tape. This "disk-like", direct access nature of LTFS can transform LTO tape into nearline storage for new and faster high-resolution media workflows.
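The "disk-like" access is literal: once a cartridge is mounted with an LTFS driver, the tape presents as an ordinary file system, so an application can read just the bytes it needs rather than restoring whole files first. A minimal sketch, assuming a volume already mounted at /mnt/ltfs and a hypothetical file layout:

    import os

    MOUNT = "/mnt/ltfs"                                # assumed LTFS mount point
    CLIP = os.path.join(MOUNT, "show01", "reel2.mxf")  # hypothetical file on tape

    def read_range(path, offset, length):
        """Read one byte range straight off the mounted tape volume.

        LTFS positions the tape and streams only the requested span;
        there is no separate restore step before the data is usable.
        """
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(length)

    # e.g. pull a 64 MB span from the middle of a large camera master
    chunk = read_range(CLIP, offset=50 * 2**20, length=64 * 2**20)
    print(f"read {len(chunk)} bytes without a full-file restore")

The seek itself is still a mechanical tape wind, so the win is eliminating the restore copy, not making tape as fast as disk for random access.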
Presenter bio: Tridib Chakravarty ("tC") possesses deep knowledge of the storage and archiving industry and he is responsible for the overall strategy and vision of StorageDNA's product line. Prior to founding StorageDNA, he helped map new strategic initiatives for Quantum’s Advanced Technology Group. tC also participated in the development of Quantum's de-duplication and CDP (continuous data protection) product line. While at Panasas, he was part of the initial team that developed PanFS, one of the world's most advanced clustered file systems. tC earned his BS and MS degrees in computer science from Carnegie Mellon University.
Tridib Chakravarty
09:30 Concept for a File Based Content Exchange Ecosystem using Scalable Media
Heiko Sparenberg and Siegfried Foessel (Fraunhofer IIS, Germany)
We introduce a concept for a file delivery ecosystem that exploits the features of scalable (hierarchical) media such as JPEG 2000. The innovation of this concept is that the recipient may start to work with the transmitted content even before the transfer completes, thanks to the scalability feature. In the first phase, a reduced sub-variant of the scalable sources is transmitted, so the recipient quickly obtains a preview; subsequent phases add more and more information to that sub-variant until the whole sequence reaches its destination. Due to a concept called the Substitution Strategy — which was presented at ATC 2012 — the software running at the destination is able to rebuild the file structure of each file and to simulate the remaining data, so that the images can be used for further processing before the overall transmission is completed.
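The scalability idea can be pictured with a crude resolution pyramid (a toy illustration under my own assumptions, not the authors' Substitution Strategy or JPEG 2000 itself):

    import numpy as np

    def split_layers(img, n_layers=3):
        """Split an image into a small base layer plus residual detail layers."""
        layers, cur = [], img.astype(np.float32)
        for _ in range(n_layers - 1):
            small = cur[::2, ::2]                       # crude 2x downsample
            up = np.repeat(np.repeat(small, 2, 0), 2, 1)[:cur.shape[0], :cur.shape[1]]
            layers.append(cur - up)                     # detail = residual
            cur = small
        layers.append(cur)                              # lowest-resolution base
        return layers[::-1]                             # base first, then details

    def rebuild(base, details):
        """Apply whichever detail layers have arrived so far."""
        cur = base
        for d in details:
            cur = np.repeat(np.repeat(cur, 2, 0), 2, 1)[:d.shape[0], :d.shape[1]] + d
        return cur

    base, *details = split_layers(np.random.rand(512, 512) * 255)
    preview = rebuild(base, [])        # usable immediately after phase 1
    full = rebuild(base, details)      # exact once all phases have arrived

Phase 1 ships only the base layer; each later phase ships one detail layer, which is what lets the recipient start work on a preview while the bulk of the data is still in flight.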
Presenter bio: Heiko Sparenberg, born in 1977, received his Diploma degree in Computer Science in 2004 and a Master's degree in 2006. He joined Fraunhofer IIS in Erlangen as a software engineer in 2006. Today, Heiko is head of the Digital Cinema group and responsible for several software developments, e.g., the easyDCP software suite for digital cinema. His research topics are scalable media-file management, post-production software in the field of digital cinema, and image-compression algorithms, with a focus on scalable codecs including JPEG 2000 and H.264/SVC.
Heiko Sparenberg
10:00 A Holistic Approach to Digital Preservation
Bjørn H Brudeli (Piql AS, Norway)
Migration-based preservation strategies do not address the underlying need for securing data integrity and future access to valuable data. The EU and the Norwegian Research Council have funded three pan-European R&D projects developing a turn-key solution with all components needed for writing, storing and retrieving digital data. The result turns photosensitive film into a digital storage medium, just as was once done with magnetic tape. Binary codes are written on a medium with proven long-term qualities, and digital data can be stored alongside images and/or readable text. The materials have undergone longevity testing by IPI, and the technology's source code is open.
Presenter bio: Bjørn H. Brudeli holds a B.Eng. in Data/Telecommunications and a B.Eng. in Media Technology. He has been working in the moving picture industry for more than 15 years, mostly as a CTO in Norwegian and European companies. He has been with Piql since 2013, where he holds the position of Product Manager for Piql Preservation Services.
Bjørn H Brudeli

IP Streams: Control, Monitoring and Production

Room: Salon 2
Chair: Thomas Edwards (FOX Networks Engineering and Operations, USA)

This session presents a spectrum of IP use in broadcast: from video streams, to control and monitoring, to applied real-world use in remote production.

08:30 Off-the-shelf IP Routing Switchers in the Hybrid IP/SDI Television Broadcast Environment
John Shike (Snell Inc, USA); Martin Holmes (Snell, USA)
With the trend toward IP streams as a key part of the broadcast environment, IP routing becomes an essential component. Off-the-shelf IP routing switchers are agnostic and transparent to the transported media, allowing any transport streams, codec types, and uncompressed media, and they protect the router investment as standards and formats change. The challenges of an asynchronous system, where timing is distributed but signals are not co-timed, can be met with the required media processing, clean switching, and synchronization broadcast services all at the system edge. This type of infrastructure can be operated alongside existing SDI systems and controlled with traditional broadcast control services in a seamless operational environment.
Presenter bio: Martin Holmes is the Vice President of Technology at Snell. He is involved in the design, integration and implementation of customers' digital facilities arising from the transition to file-based and IP operations. In this role, he has been involved in the successful build and launch of the world's largest digital broadcast facility, as well as many other pioneering projects. He brings a multi-disciplinary engineering approach, with strong understanding and technical expertise in systems design, project management, integration and control for large-scale systems, working closely with system integrators and other vendors to deliver connectivity and full integration of a Snell system. His areas of focus are the automation of playout and master control environments, optimized broadcast workflow, and the integration of mixed SDI- and IP-based operations.
Martin Holmes
09:00 The Control of Media within an Internet of Things using SMPTE ST2071
Steven Posick (ESPN Inc., USA)
The "Internet of Things" (IoT) refers to an Internet like structure consisting of uniquely identified objects that expose services. These services are typically designed using traditional Object Oriented methodologies that encourage the coalescence of features into a single consolidated view. This may work well for homogeneous environments but can be problematic for heterogeneous environments, such as media control systems, where objects may be modular and change their behavior dynamically at runtime. To better represent objects within these environments, and the IoT, the SMPTE ST2071 standard allows objects to be described using sets of uniquely identified features, known as Capabilities. Capabilities can be used in much the same way as building blocks to construct object behaviors and the objects can change their behavior dynamically by changing the set of Capabilities exposed. In addition, the use of Capabilities also allows objects to be discovered within the IoT by the features they support.
Presenter bio: Steven Posick, associate director, Enterprise Software Development, joined ESPN in 1995. He is a veteran senior systems architect, designer, developer, and security professional, with more than 24 years' experience in Information Technology and a 10-year focus on media identity, management, and control. His responsibilities have included the management of production workflow application development, broadcast control systems, broadcast system security and the development of open standards. Steven has participated in several SMPTE committees as an Ad hoc Group chair and/or document editor, including the recently published SMPTE standard for Media Device Control over Internet Protocol Networks (SMPTE ST2071), the Archive eXchange Format, and the Study Group on Media Production System Network Architectures.
Steven Posick
09:30 Monitoring Video Services in an IP Connected World
Chuck Wester and Joseph Badro (Comcast, USA)
Applying traditional methods of video service monitoring to highly evolved and changing delivery methods, serving customers on all manner of viewing devices anytime and anywhere, just doesn't work. Video service monitoring must evolve with video service delivery methods in a reasonable, cost-effective, scalable manner. This paper examines how to approach monitoring in next-generation IP-enabled services such as CloudTV, cDVR, and cVOD, and how to take advantage of the new architectures being deployed. It examines what issues will be encountered using a traditional monitoring and data collection approach. It then looks at alternative approaches using new paradigms for what to monitor, how to monitor it, and what data not to store. We propose that doing less checking of individual faults, and shifting toward system-health monitoring of the data network through automated testing strategies, is a way to proactively handle multiple potential faults at the same time.
Presenter bio: Chuck has had a varied career in many aspects of service delivery, from operations and maintenance to systems engineering and installation to his current role in Applied Research at Comcast. Applied Research has been dedicated to improving service quality, focusing on video quality by working closely with vendors to improve encoder performance. Chuck's particular focus has been on service monitoring and measurement of delivered video quality of service. He has been actively involved with the Comcast VM-12 architecture team, developing company-wide standards and practices for video service monitoring and metrics reporting. Chuck has contributed to Comcast's Video Summit, discussing the monitoring of services delivered over new cable delivery platforms, as well as the advantages of measuring and reporting on delivery and quality performance.
Presenter bio: Joe has had a varied career in the broadcasting field. He started as a software engineer developing applications for playing content in various formats, gaining hands-on experience building products for vendors. Joe then worked as a systems engineer implementing video compression systems in complex workflows, designed for post-production with archiving of mezzanine files as well as for on-air/DTH delivery. He moved on to implementing end-to-end solutions with an emphasis on video compression, delivery and monitoring, gathering experience in both satellite and cable industry infrastructures. The head-ends differed, but the end goal was the same: deliver content to traditional home displays or mobile devices. Joe's experience developing applications, together with his involvement in various head-end architectures, video delivery and video monitoring at different levels, led him to his current role at Comcast in the Applied Research group. Besides working on linear and VOD systems, Joe works on new technologies such as HEVC and IMF standards, and on finding better and more efficient ways to monitor IP video.
Chuck Wester
Joseph Badro
10:00 Taking Remote Production to the Next Level - CBC's Coverage of the 2014 Sochi Olympic Games
Brian Johnston and Jeffrey Vella (Canadian Broadcasting Corporation, Canada)
CBC's coverage of the Sochi Olympic Games included over 3,000 hours of programming that was made available in English and French on television, radio, the web, and many digital platforms. Faced with increased budget pressures, CBC developed an innovative remote production model that provided the best possible audience experience at the lowest possible cost.
- All switching, mixing, cutting and editing was done in Canada, with only minimal technical staff and infrastructure deployed in Sochi.
- Video and audio signals were carried in compressed form, using IP technology, from the Olympic venues all the way to Canada. This improved picture quality, reduced latency (allowing for seamless double-enders) and reduced cost.
- Lightweight, browser-based desktop client applications allowed content to be searched, browsed, shot-listed and edited from desktops at any location.
This paper will provide an overview of the technology used to create a superior Olympic content experience for Canadians.
Presenter bio: Brian Johnston is a Supervisor in Media Engineering for English Services at the CBC and is responsible for Engineering of CBC Sports Properties and Remote Production Facilities. Over the last 4 years Brian has led the Engineering team responsible for the design and implementation of CBC’s new remote production facilities and integration with large Sporting events such as The FIFA World Cup of Soccer and the Olympics. Brian holds an Honors BSc Degree in Computer Science from The University of Western Ontario. He joined the engineering team at CBC as a student in 2002 and has progressively assumed more senior roles within the department. His previous accomplishments include work on CBC’s centralized television, radio and digital presentation facility, digital tape archive system, post and news production shared editing platform, file based workflow development, consolidated storage platforms for media and transcoding infrastructure.
Brian Johnston

08:30 - 10:00

Advancements in Theatrical Display

Room: Theatre (Chinese 6)
Chair: Peter Ludé (RealD, USA)

Cinema projector technology is currently undergoing a period of dramatic innovation. Rather than using traditional xenon short-arc lamps, new laser illuminated projector systems are being developed to provide enhanced images, including an expanded image dynamic range and wider color gamut. This session will explore how laser illumination holds the potential for blacker blacks and brighter highlights, as well as practical limitations of the technology. You will learn how laser light can be used to expand the color gamut, and the impact of metamerism in the perception of color. Laser speckle is an undesired attribute of this new technology, but measuring speckle is challenging. You will learn the latest advancements in speckle measurement techniques necessary for image quality assurance. In addition, this session will include a presentation on the findings of an important new study on viewer preference, which explores the perceptual impact of expanded image dynamic range in the cinema.

08:30 Viewer Preferences for Cinema Luminance Dynamic Range
Suzanne Farrell and Scott Daly (Dolby Laboratories, USA); Timo Kunkel (Dolby Labs, Inc., USA)
Recent studies on viewer preferences conclude that viewers are looking for increased display capability, with several groups finding preferences toward higher dynamic ranges on small screens. One study found that a luminance range of 0.005 to 20,000 cd/m2 (22 stops) just met the preferences of 90% of viewers. In translating results from small screens to larger cinema screens, it is often assumed that the preferred cinema brightness is much dimmer. This assumption, however, involves multiple confounding factors (e.g., illumination of the audience, field of view). We have studied viewer preferences for the cinema, isolating luminance range from other factors. Using a 6 kW cinema projector and a 13-foot, 2.8-gain screen, we produced a maximum screen luminance of 2,500 cd/m2, and concluded that 22 stops of dynamic range (twice the dynamic range of existing cinema) would meet the preferences of all but the most critical 10% of viewers.
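The stop figure follows directly from the quoted luminance range, since each stop is a doubling:

    $\log_2(20000 / 0.005) = \log_2(4 \times 10^{6}) \approx 21.9 \approx 22\ \mathrm{stops}$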
Presenter bio: Suzanne is a software engineer for Dolby where she has focused on carrying out viewer preference studies on image quality. She graduated from Rochester Institute of Technology with a degree in Motion Picture Science.
Suzanne Farrell
09:00 Development of an Accurate and Repeatable Measurement Method for Speckle in Laser Illuminated Projectors
Rick Posch (CR Media Technologies, USA); Peter Ludé (RealD, USA)
With the emergence of lasers as a replacement for xenon arc lamps in ultra-high-brightness digital projection applications, it is desirable to preserve the best possible image. Speckle is one aspect of image quality that has been the subject of recent attention. It is fundamentally difficult to measure, due to the unique physical characteristics of coherent light and the absence of a single focal plane for the speckle image. This paper describes the development of a speckle measurement method, with reference to work recently completed by the Laser Illuminated Projector Association (LIPA). The identification of obstacles to measurement, along with explanations of how each was managed, will be of interest to those who will measure speckle, and to the science of image quality metrology in general. Ultimately, the addition of speckle to the industry's current suite of image quality measurements will promote the successful deployment of lasers for high-quality theatrical projection.
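The conventional starting point for such a measurement (the standard metric in the speckle literature, though not necessarily the whole of LIPA's method) is speckle contrast,

    $C = \sigma_I / \langle I \rangle$

the ratio of the standard deviation to the mean of the measured intensity over a nominally uniform patch. Fully developed speckle gives C = 1, and lower is better; the difficulty the paper addresses is making that measurement repeatable when the speckle pattern has no single focal plane.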
Presenter bio: Rick Posch is owner and CTO at CR Media Technologies. As Director of Product Marketing for Laser Light Engines, Inc., Posch was responsible for introducing laser illuminated projection into the cinema and themed entertainment markets. Previously, he was at Bose Corporation, where he led the design and deployment of advanced demonstration technology that incorporated digital video projection, multichannel surround audio, and theater automation. Posch is also a veteran of the Electronic Design Automation (EDA) industry. He was Director of Applications Consulting for EDA pioneer Synopsys, Inc., bringing to market new methods for the design and verification of custom integrated circuits. When Posch was at AT&T Bell Laboratories, he developed electronics and semiconductors for telecommunications systems that were among the first to use lasers and optical fiber.
Rick Posch
09:30 Design Considerations for Cinema Exhibition using Laser Illumination
Jim Houston (Starwatcher Digital, USA); William Beck (Barco, Inc., USA)
Laser light sources used in digital projectors have the potential to increase quality for several image parameters including high brightness, dynamic range, and color space. This paper discusses advantages of laser illumination for digital projection as well as considerations for design of future cinema parameters such as luminance standards, high dynamic range projection and a wider color space, including the practicality of Rec. 2020 as a projection gamut. Practical concerns and limits of lasers are examined in the context of near term implementations, and suggestions for future theater and projector design are discussed.
Presenter bio: Jim Houston, principal of Starwatcher Digital, consults on a broad range of issues in digital production for motion pictures and television. Previously, he was Vice President of Technology and Engineering for Sony Pictures where he designed and built the 4K post production facility, Colorworks, and the Sony Production Backbone asset system. He has received two Academy Science and Engineering awards and previously developed digital production facilities for animation, visual effects, and post-production at Walt Disney Feature Animation, Sony Pictures Imageworks, Pacific Ocean Post, Mainframe Entertainment, Pacific Title & Art Studio, and Postworks L.A. Early in his career, he developed computer-aided design products for Gould Computer Systems, and user interface systems for RIACS/NASA Ames Research Center. Jim Houston is a member of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences, the ASC Technology Committee, and the Society of Motion Picture and Television Engineers.
Presenter bio: Bill is a photonics visionary with diverse start-up, general management and technology marketing experience. He has been central to the development and application of laser illumination systems for digital cinema and other performance projection applications. Bill Beck supports Barco as “The Laser Guy,” further developing and promoting laser projection in cinema and other applications in key Barco markets. Bill co-founded and represents Barco in the Laser Illuminated Projection Association (LIPA), where he served as past chairman. He frequently writes and presents on the subjects of lasers, laser projection and fiber optics. Before joining Barco, Bill founded BTM Consulting, LLC, providing expertise in the laser projection space. Before this, he was founding CEO of Laser Light Engines, a pioneer in the early development of RGB laser illumination. Bill earned a BA from Dartmouth College and an MBA from Rensselaer Polytechnic Institute. He is a member of SMPTE, LIPA, ISDCF, EDCF and NEFC.
Jim Houston
William Beck

10:30 - 11:00

Coffee Break

Room: Exhibit Hall

11:00 - 12:30

Asset Management-Part 2: Standards for Archives & Production Workflows

Room: Salon 1
Chair: S. Merrill Weiss (Merrill Weiss Group LLC, USA)
11:00 Open Standards Approach for Video and Film Archiving and Preservation
Brian Campanotti (Front Porch Digital, USA)
This paper explores significant advances in film and video digitization and open, standards-based solutions for long-term archiving and preservation. Rather than relying on generic IT-centric technologies, these application-specific advancements focus on this demanding area while ensuring long-term accessibility without functional limitations. Attendees will learn about recent advancements in technology, with a specific focus on real-world workflows and the usability of content storage management solutions in demanding file-based environments, including a deep dive into emerging, disruptive long-term storage technologies. Finally, we will discuss leading-edge open-storage standardization work currently under way for long-term content preservation and accessibility.
Presenter bio: Brian Campanotti is the Chief Technical Officer for Front Porch Digital, leading industry invention and advancement in cloud-based and on-premises global content storage management (CSM), media asset management (MAM) and content publishing, migration and preservation solutions. He is responsible for innovations in the area of cloud-based solutions for “big data” focused on media-centric content handling, delivery, storage and preservation. He was one of the primary inventors of the Archive eXchange Format (AXF) and has been active in standards body activities helping to promote innovation and openness in the industry for more than two decades. Mr. Campanotti and his team have won Emmy® Awards for their work in content collection preservation technologies and for innovation in serial digital video technology. Mr. Campanotti has founded several start-ups and began his career at the Canadian Broadcasting Corporation (CBC) in Canada and holds a degree in Electrical Engineering from the University of Toronto.
Brian Campanotti
11:30 Media Archiving, Standards & the Library of Congress
James Snyder (Library of Congress, USA)
The Library of Congress is one of the world's largest media collections and is digitizing its entire collection, including over a million video recordings, three million audio recordings, 255 million feet of film, and hundreds of thousands of video games. Nearly every known format, both common and rare, must be accommodated. Over 100,000 new items are added each year via copyright registration. This paper will describe the challenges the Library faced in planning and building its media migration plant; how it chose its preservation and access file formats; how it uses international standards and is dealing with issues such as metadata and long-term sustainability; the lessons learned after 5 years in production; and how the Library is working with content producers and other archives to deal with the many media file types used today and those yet to come.
Presenter bio: James Snyder is a digital media engineering, data & media archiving, preservation, production and project management specialist. His 34 years' experience includes television, film, radio, internet & data technologies and covers the gamut from traditional analog to cutting edge digital data, audio and video technologies. His career spans work in the commercial, non-commercial and government sectors, and has a lifelong fascination with media, film, technology, engineering & history. Mr. Snyder currently serves as the Senior Systems Administrator for the Library of Congress' National Audio-Visual Conservation Center (NAVCC) located on the Packard Campus for Audio Visual Conservation in Culpeper, Virginia (http://www.loc.gov/avconservation/packard/). He is responsible for all the audio, video and film preservation and digitization technologies, including long-term planning & implementation, long-term data preservation planning & implementation, technology services to the United States Congress and organizations on Capitol Hill, as well as standards participation and technology liaison with media content producers.
James Snyder
12:00 Applying AXF Tools from the Set through Production Workflows
S. Merrill Weiss (Merrill Weiss Group LLC, USA)
The Archive eXchange Format (AXF) was developed to enable multiple files to be treated as a unit for operational storage and longer term preservation purposes. To support such applications, methods were developed to establish hierarchical relationships between files through use of folders established in the AXF structure. There also is an extensive set of metadata that is useful in managing and documenting the files stored in AXF Objects. As the tools defined by AXF became understood by end users, it became apparent that those tools offered solutions to file management and documentation needs throughout the production workflow, starting on the set. Through a simple, straightforward modification of the XML schema that underlies AXF, it is possible to create manifests of files that carry file metadata needed throughout the workflow all the way to the archive. In essence, use of AXF tools upstream of archives permits application of AXF in "unwrapped" form, with the files being "wrapped" into AXF Objects for archival purposes when desired. The paper will describe upstream applications of AXF and the modifications of the AXF schema necessary to enable those applications. Work on creation of a standard for upstream use of AXF tools is under way in SMPTE.
Presenter bio: S. Merrill Weiss is a consultant in electronic media technology and technology management. In a 46+ year career, he has spent over 36 years involved in work on SMPTE standards. He participated in the earliest work on digital television and has been responsible for organizing or chairing many SMPTE technology-development and standards efforts since. Among other duties, he served four years as Engineering Director for Television; he co-chaired the joint SMPTE/EBU Task Force; and he currently chairs the Working Group on the Archive eXchange Format. Merrill is a SMPTE Fellow and has received the SMPTE David Sarnoff Gold Medal and the Progress Medal. He also was a recipient of the NAB Television Engineering Achievement Award, the ATSC Bernard Lechner Outstanding Contributor Award, and the IEEE Matti S. Siukola Award. Merrill holds four U.S. and two foreign patents. He is a graduate of the Wharton School of the University of Pennsylvania.
S. Merrill Weiss

Content Accountability, Tracking and Protection

Room: Salon 2
Chair: Arjun Ramamurthy (20th Century Fox, USA)

The focus of our supply chain is geared toward producing content and distributing that content to consumers. Along the way, we need to ensure that the content is secure, especially when considering highly collaborative workflows, geographically dispersed workgroups, and production in the cloud. Additionally, when the content is consumed, it is vital to have accurate measurement of where, how and when it is consumed.

This session brings together these vital aspects in three papers. The authors of the first paper will discuss the modalities of media management and measurement. The second paper will discuss how piracy can be curtailed using forensic watermarking. Finally, the last paper will bring us solace by showing that while cyberattacks will occur, we can survive them by making our production pipelines cyber resilient.

11:00 How Do We Measure Up?
Christopher J Lennon (MediAnswers, USA); Harold Geller (Advertising Digital Identification, LLC (Ad-ID), USA); Clyde Smith (FOX NE&O, USA)
When it comes to media, measurement is what it's all about. We are all in the business of monetizing our media assets, but without accurate measurement of their consumption, we can't do a good job of it. 2014 has seen some very important advances in technologies that enable not only the management, but also the identification and measurement, of media consumption. SMPTE is standardizing the representation of Ad-ID and EIDR identifiers, as well as how to handle them in MXF files. It is also working on embedding these identifiers in content in such a way that they survive all means of transformation and distribution to viewers. Couple this with work that has already been done, and we are on the cusp of having a very powerful toolset for media management and measurement.
Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely-used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he's a Porsche racer, and heads up a high performance driving program.
Presenter bio: Harold S Geller is Chief Growth Officer of Advertising Digital Identification LLC (Ad-ID), a US-based advertising-metadata system (the UPC code for ads across all platforms), which is a joint venture of the American Association of Advertising Agencies (4A's) and the Association of National Advertisers (ANA). Harold speaks and writes extensively regarding interoperability, digital workflow and metadata in advertising, and is the co-author of four white papers on the subject. Harold's advertising career spans nearly 30 years in the United States and Canada. He has worked in media buying/planning, account management, financial, and technology roles at MindShare, Ogilvy & Mather, McCann Erickson, and the now-defunct Ted Bates and Foster Advertising. Harold is a graduate of radio and television broadcasting from Seneca College (Toronto, Ontario, Canada).
Presenter bio: Clyde Smith is the Senior Vice President of New Technologies for FOX Network Engineering and Operations. In this role he supports the broadcast and cable networks, production, and post-production operating groups in addressing their challenges with new technologies, focusing on standards, regulations, and lab proof-of-concept testing and evaluation. Prior to joining FOX he was SVP of global broadcast technology and standards for Turner Broadcasting System, Inc., where he provided technical guidance for the company's domestic and international teams. He previously held positions as SVP of Broadcast Engineering Research and Development at Turner, SVP & CTO at Speer Communications, and Supervisor of Communications Design and Development Engineering for Lockheed Space Operations at the Kennedy Space Center. Smith also supported initiatives for Turner Broadcasting that were recognized by the Computer World Honors program with the 2005 21st Century Achievement Award for Media Arts and Entertainment, and a Technology and Engineering Emmy Award for pioneering efforts in the development of automated, server-based closed captioning systems. In 2007 he received the SMPTE Progress Medal, and in 2008 he received the Storage Visions Conference Storage Industry Service Award.
Christopher J Lennon
Harold Geller
Clyde Smith
11:30 Toward Real-Time Detection of Forensic Watermarks to Combat Piracy by Live Streaming
Ken Rudman (Civolution, USA); Mathieu Bonenfant (Civolution, France); Mehmet Celik (Civolution, ? ); Joe Daniel (Civolution, USA); Jaap Haitsma (Civolution, The Netherlands); Jean-Paul Panis (Civolution, France)
Over the past several years, anti-piracy analysts have documented the transition of casual content piracy away from torrent networks to streaming sites that provide immediate access not only to live TV broadcasts, but also to file-based pirated content such as movies and TV episodes. Since live content such as professional sports or pay-per-view events has only one release window, it loses market value immediately if it can be streamed live by pirates; there is therefore a recognized need to act quickly and to shorten the time needed to extract the forensic watermark payload to as close to real time as possible. In working toward inline detection of forensic watermarks from streaming content, we seek to enable a new tool designed to identify the pirate source in minutes. With the ability to detect a session-based forensic watermark directly from a video stream, it is possible for an operator to disable a set-top box or streaming client while the transmission is still in progress. The paper also covers, for pay TV and online content distribution, the latest applications of session-based watermarking in the head-end, on the CDN edge, and in the end consumer's device.
Presenter bio: Ken Rudman has over 15 years’ experience in product development, product management and product marketing, primarily in video content distribution, search and advertising. At Technicolor, Ken led product management for the Prisma Content Delivery platform which powered LOVEFiLM’s streaming video service until they were acquired by Amazon. In his role at Civolution, Ken oversees product marketing for the NexGuard line of forensic watermarking tools and works directly with customers in the Digital Cinema, Film and Television Production, PayTV and OTT VOD industries to derive maximum value from Civolution’s industry-leading forensic watermarking suite. Ken has been an active leader in the Lean Startup and Agile Product Management world for many years and has spoken at numerous industry events, including Digital Hollywood, AT&T Developer Conference and Founder Labs, where he also served as a mentor.
Ken Rudman
12:00 Not every Cyber attack can be stopped, but they can be survived
Chris Morales (NSS Labs, Inc., USA)
The entertainment supply chain is evolving, and will utilize more public "cloud" infrastructure as content developers find more seamless collaboration and time condensing solutions. Divergence from air-gapped/isolated networks will indeed provide this efficiency, but with a potentially hidden cost. With critical assets moved into a more publicly accessible network, breaches of security resulting in theft or disruption of production processes become a more likely scenario. At this critical point between architectures, organizations must identify where attacks can occur, how information may be exfiltrated, and how to minimize damages. Organizations must be operationally prepared for rapid response. This is cyber resilience.
Presenter bio: Chris Morales, Practice Manager, Architecture and Infrastructure, has over 17 years of IT and information security experience and joined NSS from 451 Research, where he was Senior Analyst, Enterprise Security. At NSS, his areas of research include mobile security, data security, vulnerability management, malware detection and host protection. Prior to 451, Morales was the Technical Partner Manager at Accuvant, where he developed positioning strategies for new offering areas such as mobile device security, data security and malware threat analysis. He developed integration strategies for security products in key client accounts in his role as a Security Architect at McAfee, and he also served as a Security Architect with IBM Internet Security Systems. Earlier in his career, Morales held the role of Senior Systems Administrator with Delta Technology, and he also co-founded a company that developed business finance software and small-business networks.
Chris Morales

12:30 - 14:00

Boxed Lunch (Ticket Required)

Room: Exhibit Hall

14:00 - 15:30

Image Processing Part 1: Methods for creating high quality images beyond HD

Room: Salon 1
Chair: Siegfried Foessel (Fraunhofer IIS, Germany)

Displays today are able to reproduce higher resolutions, higher frame rates and higher dynamic range than ever before. The question is how we can generate, up-convert or preserve high-quality images from image acquisition all the way to the display. In our first presentation, a new algorithm for improving the edges in images will be presented; it can be used for up-converting or sharpening images. Our second presentation addresses the fact that motion during capture can destroy high resolution, and gives recommendations on how to preserve detail. The third presentation covers which imager technologies are available today and how they can be used in different production and delivery workflows.

14:00 Real Time Super Resolution for 4K/8K with Non-linear Signal Processing
Seiichi Gohshi (Kogakuin University, Japan)
8K and 4K systems may be the ultimate in high-resolution video, but imaging, editing and transmission equipment is insufficient in resolution to fully exploit the vast capabilities of 4K/8K displays. Although many image enhancement technologies exist, they merely emphasize the edges already in the original image. Our novel super-resolution technology uses non-linear signal processing to create naturally appearing thin edges that do not exist in the original image, and frequency elements exceeding even the Nyquist frequency, in real time. Besides its use in up-converting existing content, it can be used during production to solve the problem of defocusing. A demonstration will be held in the Keisoku Giken booth at SMPTE 2014.
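The abstract does not disclose the algorithm, but the general non-linear trick can be sketched: isolate the highest band, pass it through a non-linearity (which necessarily generates frequency components above the band's original limit), and add the result back. A generic illustration under my own assumptions, not Gohshi's method:

    import numpy as np
    from scipy import ndimage

    def nlsp_enhance(img, sigma=1.0, gain=0.4):
        """Generic non-linear enhancement sketch (illustrative only).

        Unlike a linear sharpener, the odd non-linearity below creates
        new harmonics above the high-pass band's original limit, which
        is how this family of methods synthesizes, rather than merely
        amplifies, edge detail. Assumes an 8-bit grayscale array.
        """
        x = img.astype(np.float32)
        high = x - ndimage.gaussian_filter(x, sigma)        # high-pass band
        harmonics = np.sign(high) * high ** 2               # odd non-linearity
        harmonics *= high.std() / (harmonics.std() + 1e-8)  # match band energy
        return np.clip(x + gain * harmonics, 0, 255)

The real-time claim is plausible for this style of processing because it is purely local and feed-forward, unlike iterative reconstruction-based super resolution.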
Presenter bio: Seiichi Gohshi is a professor at Kogakuin University. He received his BS, MS and PhD degrees from Waseda University in 1979, 1981 and 1997. He joined Japan Broadcasting Corporation (NHK) in 1981 and started his research at NHK Science & Technical Research Laboratories (STRL) in 1984. He helped to develop the HDTV broadcasting system, transmission systems, and signal processing systems. He was the project leader of the Super Hi-Vision (8K) transmission system and successfully conducted the first Super Hi-Vision transmission test at IBC 2008. He also developed a watermark system that was used in movie theaters. He joined Sharp Corporation as a division deputy general manager in 2008 and developed high-resolution systems. He is currently a professor at Kogakuin University. His research interests are video and image signal processing, especially super resolution and forensic technologies.
Seiichi Gohshi
14:30 4K: Model for motion control to ensure true 4K detail at capture
Pierre Routhier (Technicolor, USA)
The advent of 4K bears the promise of a resolution four times larger than 2K Digital Cinema. For images to take advantage of this increase, capture systems must be configured and operated to maximize detail. One of the critical factors in maintaining image detail is motion blur, dictated by optical flow (motion, as seen by the sensor) and shutter speed (which also governs smoothness of motion). In this paper, the author defines mathematical models and practical methods to optimize motion during 4K capture, and provides several examples and tips, based on his experience from actual productions, on how to achieve true 4K detail.
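A generic formulation of the relationship (my restatement; the paper's models may differ): the blur extent in pixels is the image-plane speed times the shutter's open time,

    $b = v \, t_s, \qquad t_s = \frac{\theta}{360^\circ} \cdot \frac{1}{f}$

where $v$ is optical flow in pixels per second, $\theta$ the shutter angle and $f$ the frame rate. At 24 fps with a 180° shutter, $t_s = 1/48$ s, so holding blur to 2 pixels caps motion at 96 pixels per second; a full-width pan across a 4096-pixel frame would then take about 43 seconds.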
Presenter bio: Former Aerospace Engineer Pierre (Pete) Routhier, Eng., M.Eng., is a specialist in advanced imaging and Stereoscopic 3D. As Vice-President of 3D strategy for Technicolor, he has developed innovative solutions and workflows to support major studios in the field of native 3D, CG 3D and 2D to 3D conversion and has worked on several Hollywood productions. He is currently focusing his research efforts on the acquisition of high-quality, advanced images, in the fields of High Frame Rate, High Dynamic Range, Wide Color Gamut and Ultra High Definition in partnership with major studios and broadcasters.
Pierre Routhier
15:00 Beyond HD - The status of the image acquisition solutions for the next generation broadcasting formats
Klaus Weber (Grass Valley, a Belden Brand, Germany)
Many discussions inside the broadcasting community are focused on next-generation broadcast formats. For content producers and distributors, the question remains: what will be the best solution for the next-generation broadcast format? Is it just the doubled pixel count in the horizontal and vertical directions? Will a higher frame rate and/or a higher dynamic range and extended color range provide viewers higher value? Or which combination of these improvements needs to be included in a next-generation broadcast format? The answer will likely depend on the type of production and/or content delivery. All these points have a direct influence on imager technology, and the paper explains the different potential solutions for "4K" or UHD image acquisition, including their strengths and limitations, with a focus on live broadcast productions.
Presenter bio: Klaus Weber is responsible for the worldwide product marketing of the imaging products for Grass Valley, a Belden Brand. The imaging products include all of the LDX camera systems used in a wide range of broadcast applications. Klaus's past experience includes customer support, technical and operational training, and regional sales management for broadcast cameras. Klaus has over 30 years of industry experience, with the last 20 focused on various duties around marketing and business development for the Grass Valley camera factory in Breda, The Netherlands. He is the author of several technical articles and white papers addressing different camera-related technologies and topics. In addition, Klaus has presented several technical papers at various industry events, as well as participating in industry round-table discussions in many countries around the world.
Klaus Weber

Evolution of Broadcast Facilities-Part 1

Room: Salon 2
Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
14:00 IP to the Camera: Completing the Broadcast Chain
Jim Jachetta (VidOvation - Moving Video Forward, USA)
Extending studio data networks to cameras located anywhere around the globe is the logical conclusion of current standards work targeting future studio infrastructure. By consolidating all signals onto IP/Ethernet, significant cost savings are achieved by reducing the people and equipment needed at remote venues. Studio-based staff can perform the functions of mobile-truck-based production teams in live sports, broadcast and ENG applications. This paper will provide an in-depth examination of the technologies used in the Stagebox camera-back system developed by BBC R&D, including:
- Transport of two-way HD video, multichannel audio and ancillary services, including timecode, talkback, tally, and camera control, over a single Ethernet connection
- Multi-camera synchronization enabled by IEEE 1588 PTP (see the note following this abstract)
- Support for live workflows, including editing, archiving, and live-to-air production
This paper will also discuss several live events around the globe that have already been successfully produced using this system, including "lessons learned" along the way.
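Multi-camera sync over Ethernet rests on the standard IEEE 1588 two-way exchange: the master stamps the Sync departure $t_1$, the slave stamps its arrival $t_2$ and the Delay_Req departure $t_3$, and the master stamps its arrival $t_4$. Assuming a symmetric path,

    $\mathrm{offset} = \frac{(t_2 - t_1) - (t_4 - t_3)}{2}, \qquad \mathrm{delay} = \frac{(t_2 - t_1) + (t_4 - t_3)}{2}$

A slave clock steered by this offset can then stand in for genlock and timecode at the camera end of the link.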
Presenter bio: Jim Jachetta, CEO of VidOvation Corporation, has over 20 years of experience providing video and data communications systems to government, broadcast television, Fortune 500 and medical clients globally. Former Senior Vice President and Principal at MultiDyne Video and Fiber Optic Systems, Mr. Jachetta has a Master of Science degree in Electrical Engineering from NYU Polytechnic University. Earlier in his career, as a Project Engineer at Micra Corporation, Jachetta designed hybrid microelectronics for military and aerospace video guidance systems. In addition, Jim Jachetta has authored multiple articles and white papers, including a chapter on fiber optic transmission systems in the National Association of Broadcasters Engineering Handbook. Jim rounds out his design and implementation expertise as a co-author on two patents on video fiber optic communications.
Jim Jachetta
14:30 Confidence Monitoring: Any Time, Any Where, Any Way
Steve Farmer (Wohler Technologies, USA)
Audio and video confidence monitoring has progressed beyond analog with the evolution of broadcasting toward networks that use digital baseband, compressed video, or IP distribution technologies. To increase operational efficiency, modern broadcasters are adopting technologies with maximum scalability. With today's sophisticated data networks, Wi-Fi infrastructures, 4G mobile access, and the Internet, it is now possible to decentralize monitoring of these critical signals for delivery to a range of devices such as smartphones, tablets, and desktop PCs. This paper describes the technology enabling this advanced remote monitoring application and discusses the operational and financial benefits it can deliver.
Presenter bio: Farmer joined Wohler in 2014, bringing with him experience gained over a lengthy engineering career. He founded both DSMB Technology, which develops custom video, audio, and communication products, and Claratech Limited, which later acquired both BAL Broadcast and Faraday Technology Corporation. He earlier worked for Drake Electronics (later part of Vitec Group), developing communications and talkback systems for studios and outside broadcast facilities. Eventually becoming director of engineering for both Drake and Clear-Com in the Bay Area, Farmer participated in business development, acquisitions, change management, and product management. His career also includes roles as principal engineer at the GEC-Marconi Future Systems Laboratory and senior design engineer at Northern Telecom's Defence Systems Division. Farmer earned his degree in electronic systems engineering from the University of Essex and holds two patents: one for a digital wireless communication system and another related to the transmission of digital audio.
Steve Farmer
15:00 IPTV in CNN's Newsroom: A Productivity Breakthrough
Bob Baker (Turner Broadcasting, USA); Wes Simpson (Telecom Product Consulting, USA)
CNN has recently installed a 700-channel IPTV system for delivering live video feeds to every desktop/laptop and multiple displays in every part of the Atlanta newsroom. This system has provided numerous user and company benefits:
- Each user can view multiple live streams of their choice on their PC display
- Content can be streamed live to/from the New York and London operations
- Dramatic cost reduction compared to upgrading the coaxial RF video system
- Eliminated need for complex, centralized multiviewer systems to feed user displays
- Reduced time and cost of adding new channels
This paper will describe the system in depth from source to viewer, including the compression system (H.264), container formats (MPEG TS), IP routing and switching (using IP multicasting), the interactive channel guide, and the set-top box/desktop player implementations. Key technical issues and solutions will be discussed, along with example cost/benefit calculations and bandwidth consumption analyses.
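For readers unfamiliar with the receiving side of such a system, the sketch below shows the generic pattern a desktop player would use: join an IP multicast group and pull MPEG transport stream packets off the wire. This is not CNN's implementation; the group address and port are hypothetical placeholders, and it assumes the TS is carried in raw UDP (no RTP header).

```python
# Minimal sketch: receive one UDP datagram of MPEG TS from a multicast channel.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # hypothetical multicast channel address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the OS (and, via IGMP, the network) to deliver this group's traffic.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, _ = sock.recvfrom(1500)  # one datagram, typically 7 x 188-byte TS packets
ts_packets = [packet[i:i + 188] for i in range(0, len(packet), 188)]
assert ts_packets[0][0] == 0x47  # every MPEG TS packet starts with sync byte 0x47
```

Because joins happen at the network layer, adding a viewer costs the network nothing on shared segments; that is the property that lets one headend feed hundreds of desktops.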
Presenter bio: Bob Baker serves as the Director of Transport Engineering at Turner Broadcasting. He is responsible for incoming satellite and IP contribution for CNN and for compression for the eighty national and international outgoing networks. He also manages Fiber Transport Engineering, covering the United States. Bob has traveled the world for CNN, using flyaway uplink transmit systems for transport back to Atlanta, including from Tiananmen Square in Beijing in 1989 and from Saigon for the twenty-year anniversary of the Vietnam War. Bob is a key design contributor for CNN and Turner Entertainment. He and his team designed and managed the integration of twelve multiplex redundant compression systems for eighty networks at two locations, feeding redundant teleports, including a backup in New York. He also designed and managed CNN's 700-channel IPTV system for incoming feeds, outgoing networks, and remote feeds from cities such as New York, London, and Hong Kong. Bob and his engineers now use the public Internet to bring in contribution feeds for CNN and to distribute feeds to Asia. He also works with Turner Entertainment to bring in many major sporting events, utilizing up to four levels of redundancy. In his free time, Bob works with many ministries, engineering live events around the world, including events for Billy and Franklin Graham, and has directed and engineered an international TV ministry program for twenty-two years. He also acts as Engineer in Charge on many TV trucks for live sporting events. Bob is an active SMPTE member, speaks at the Atlanta Chapter several times a year, and recently served on the NATAS Emmy technical committee.
Presenter bio: Wes Simpson is President of Telecom Product Consulting, which he founded in 2000 to provide high quality research, marketing, business development, training and product management services to companies wishing to capitalize on the expanding market for high performance video telecommunication products and services. Wes is a frequent television industry speaker at events such as VidTrans, SMPTE, NAB, and IBC, and he is a regular columnist for TV Technology. Recently, Wes has developed and delivered well-received training seminars for the VSF, the IEEE BTS and SMPTE’s Regional Seminars. He has written two books which have both been released as second editions by Focal Press: “IPTV and Internet Video” in 2009 and “Video Over IP” in 2008. Wes has over 30 years of experience in the design, development, and marketing of products for video and telecommunication applications. He holds a BSEE from Clarkson University and an MBA from the University of Rochester. Wes was recently elected to be the Secretary/Treasurer of the Connecticut Subsection of SMPTE.
Bob Baker, Wes Simpson

15:30 - 16:00

Coffee Break

Room: Ray Dolby Ballroom Terrace

16:00 - 17:00

Image Processing Part 2 - Reducing distortions in captured images

Room: Salon 1
Chair: Siegfried Foessel (Fraunhofer IIS, Germany)

Image capturing is always restricted by physical and technological limitations, whether from the sampling process itself or from the capture and storage technology available at the time. The first presentation investigates the influence of sampling on the quality of the displayed image. The second presentation demonstrates the image quality that today's technology can reconstruct from images captured and stored 40 years ago during the Lunar Orbiter missions.

16:00 A psychophysical study isolating judder using fundamental signals
Scott Daly (Dolby Laboratories, USA); Ning Xu and James Crenshaw (Dolby Laboratories, Inc., USA); Vickrant J Zunjarrao (Microsoft, USA)
There are a number of well-known observations of movie content being displayed at different frame rates. While the terminology is not entirely solidified across the industry, there are four main degradations of the signal as compared to unsampled (i.e., real-world) motion:
1. non-smooth motion (most often referred to as judder, or strobing),
2. the appearance of false multiple edges,
3. flickering (a counterphase spatiotemporal frequency along moving edges), and
4. motion blur.
In natural imagery, all four of these effects generally appear to the viewer. The spatiotemporal window of visibility [Farrell, Watson, Ahumada] has proved successful in describing when motion looks distorted relative to real-world smooth motion. However, that model predicts detection performance; it does not address the appearance or magnitude of motion distortions. In addition, well-known image capture and display parameters are involved in frame-rate questions, such as exposure duty cycle (shutter angle), object speed, and object contrast. There are also known interactions of these capture and display parameters with brightness and contrast, which are generally linked in the display of imagery. For example, the Ferry-Porter law of psychophysics indicates that the temporal frequency bandwidth of the visual system increases with increasing adapting luminance. We aimed to isolate the non-smooth-motion component, judder, in a psychophysical study using fundamental test signals such as the Gabor signal. A two-interval forced-choice methodology was used to generate interval scales of "judderness." Results will be presented for viewers' assessment of the magnitude of judder as a function of these key parameters tested in isolation. Refs: (1) Farrell, Ahumada, and Watson, window of visibility (original paper); (2) Watson, a more recent version presented at SMPTE.
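To make the stimulus concrete: a Gabor signal is a sinusoidal grating under a Gaussian envelope, and sampled motion is produced by stepping its phase once per displayed frame. The sketch below shows that construction under my reading of the abstract; all parameter values are illustrative and are not the study's actual settings.

```python
# Minimal sketch: frames of a drifting Gabor patch as a frame-rate test signal.
import numpy as np

def gabor_frame(size, cycles, phase, sigma, contrast=1.0):
    """One frame of a vertical Gabor patch.
    size: pixels per side; cycles: grating cycles across the patch;
    sigma: Gaussian envelope width in patch units."""
    x = np.linspace(-0.5, 0.5, size)
    xx, yy = np.meshgrid(x, x)
    carrier = np.sin(2 * np.pi * cycles * xx + phase)      # the drifting sinusoid
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))   # Gaussian window
    return 0.5 + 0.5 * contrast * carrier * envelope       # around mean luminance 0.5

# Stepping the phase per frame sets the speed; a lower frame rate at the same
# speed means larger phase jumps per frame -- the sampled motion judged for judder.
frames = [gabor_frame(256, cycles=4, phase=2 * np.pi * step, sigma=0.15)
          for step in np.linspace(0.0, 1.0, 24)]
```

Using such a fundamental signal, rather than natural footage, is what lets speed, contrast, and duty cycle be varied one at a time so judder can be measured in isolation from the other three degradations.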
Presenter bio: Scott Daly received a B.S. EE degree in 1980 from North Carolina State University, and then worked for a number of years with early high-resolution laser scanning systems at Photo Electronic Corporation in West Palm Beach, Florida. Shifting from hardware to wetware, he obtained an M.S. in Bioengineering from the University of Utah in 1984, where he was engaged in retinal neurophysiology, completing a thesis on the temporal information processing of cone photoreceptors. He then worked from 1985 to 1996 in the Imaging Science Division at Eastman Kodak in the fields of image compression, image fidelity models, and image watermarking. The years 1996-2010 were spent at Sharp Laboratories of America in Camas, Washington, where he led a group on display algorithms. Eventually becoming a research fellow and leader of the Center for Displayed Appearance, he had opportunities to apply visual models to digital video and displays, with numerous publications on spatiotemporal and motion imagery, including early work in human interaction with wall-sized displays, audio perception, and stereoscopic displays. These topics led him to join Dolby Laboratories in 2010 to focus on fundamental perceptual issues and on applications that aim to preserve artistic intent throughout the entire video path to the viewer. He is currently a member of IEEE, SPIE, and SID.
Presenter bio: Dr. Ning Xu is a senior member of the IEEE and a Senior Staff Researcher at Dolby Laboratories, Inc. His research interests include image and video processing, computer vision, and machine learning.
Scott Daly
16:30 A Quality Metric For High Dynamic Range
Gary Demos (Image Essence LLC, USA)
The Peak Signal to Noise Ratio (PSNR) metric has long been utilized for codec evaluation and development, among other uses. For High Dynamic Range (HDR) imagery, however, PSNR is not suitable. A more appropriate characterization of coding and image quality is to split image brightness into ranges (such as factors of two) and then determine the standard deviation of pixel differences within each range. Once the standard deviation (sigma) has been determined, the populations of pixel differences beyond two and three sigma are reported as percentages of pixels. This is necessary because codec pixel differences do not typically follow a normal Gaussian error distribution. The value of sigma at each brightness range, together with the percentage proportions of two- and three-sigma outliers, provides an appropriate quality metric system for HDR.
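A minimal sketch of the procedure described above, under my reading of the abstract: bucket pixels by reference brightness in factor-of-two (one-stop) ranges, compute sigma of coded-minus-reference differences per bucket, and report the fraction of pixels beyond two and three sigma. Function and parameter names, the band count, and the brightness floor are all illustrative choices, not the author's.

```python
import numpy as np

def hdr_quality_metric(reference, coded, stops=16, floor=1e-4):
    """reference, coded: float arrays of linear-light pixel values.
    Returns per-brightness-range sigma and outlier percentages."""
    diff = coded.astype(np.float64) - reference.astype(np.float64)
    # Factor-of-two brightness range index for each reference pixel.
    band = np.floor(np.log2(np.maximum(reference, floor) / floor)).astype(int)
    results = {}
    for b in range(stops):  # pixels above the top band are ignored in this sketch
        d = diff[band == b]
        if d.size < 2:
            continue
        sigma = d.std()
        results[b] = {
            "sigma": sigma,
            "pct_beyond_2sigma": 100.0 * np.mean(np.abs(d) > 2 * sigma),
            "pct_beyond_3sigma": 100.0 * np.mean(np.abs(d) > 3 * sigma),
        }
    return results
```

Because sigma is computed per brightness range, a codec that is clean in the highlights but noisy in the shadows shows up directly, where a single global PSNR figure would average the two away; the outlier percentages then capture the heavy, non-Gaussian tails of the error distribution.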
Presenter bio: Gary Demos is the recipient of the 2005 Gordon E. Sawyer Oscar for lifetime technical achievement from the Academy of Motion Picture Arts and Sciences. He pioneered the development of computer-generated images for use in motion pictures, and of digital film scanning and recording. He was a founder of Digital Productions (1982-1986), Whitney-Demos Productions (1986-1988), and DemoGraFX (1988-2003). He is currently involved in digital motion picture camera technology and digital moving image compression. Gary is CEO and founder of Image Essence LLC, which is developing wide-dynamic-range codec technology based upon a combination of wavelets, optimal filters, and flowfields.
Gary Demos

Evolution of Broadcast Facilities-Part 2

Room: Salon 2
Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
16:00 IT-TV-Live - An Integrated Concept for IP-based Distributed Broadcast Production with 'SDI Quality'
Alfred Krug (Scalable Video Systems GmbH, Germany)
Broadcast production has been growing in complexity for several decades. This growth has been incremental, based on a largely unchanged broadcast architecture. Key components necessary for production-centric functions have evolved continuously in quality and functionality, but have brought increased complexity in signal routing and control interconnections. Today, signal routing is joining control in exploiting IP-based solutions; productions with widely-distributed acquisition and control locations must now be supported; new emerging control systems must be user-friendly; SDI-based studio quality must not be compromised in this transition. The paper describes a new, fully-scalable production-centric architecture, with software virtualization of a modular hardware device's network location. GPU-based on-demand video processing with compression-free interconnection via IP routing allows natural execution of creative intent in live productions. The linking network is scalable from studio to intercontinental access level. The enhanced functionality and flexibility now demanded in today's live productions is elegantly achieved in this major rethink.
Presenter bio: Alfred Krug holds a Diplom-Ingenieur degree in electrical engineering from the Fachhochschule der Deutschen Bundespost in Dieburg. He started his career in 1982 in the QA department of BTS in Darmstadt, where he acquired detailed and broad technical knowledge of all major broadcast studio equipment. In 1989 he moved to the software development department, working first on the KCM-125 camera and then in the DD30/35 switcher software development group. From the very beginning he was part of the XtenDD/HD and subsequently KayakDD/HD development teams; for the final 11 years, while the company was owned by Thomson and Grass Valley, he headed this department as software development manager. As part of this 13-person core team, he and his colleagues decided to pursue their own global rethink of broadcast live production, focusing on future essentials while applying the latest technologies. Supporting Scalable Video Systems (SVS) development as a program manager, he has brought the revolutionary 'IT-TV-Live' concept close to release under the name "DYVI".
Alfred Krug
16:30 Next Gen – Broadcast Facilities of the Future
Kevin Gage (One Media LLC, USA)
We have all seen the headlines, and we know it is coming... but how will "Next Gen" affect my facility? With flexibility and optionality come opportunity and variety. The workflows of today will most certainly be impacted, and new workflows will be introduced. Other infrastructure issues come to the forefront:
• High power/high tower: with or without single-frequency networks and/or on-channel repeaters?
• Fixed/portable/mobile services: all, some, or just one? HDTV, UHDTV, or layered coding?
• Broadcast island or networked broadcasting: how does that affect programming, advertising, future opportunities, and distribution of content?
• Traffic, automation, and playout: how are local, hyperlocal, and targeted programming and advertising coordinated?
These and other concerns will be discussed, along with some of the optional solutions on offer.
Kevin Gage

19:00 - 22:00

Honors & Awards Ceremony and Dinner (Ticket Required)

Room: Hollywood Ballroom

22:00 - 23:59

Afterparty and SMPTE Jam

Room: Hollywood Ballroom