Program for 2012 SMPTE Annual Technical Conference

Venues: Exhibit Hall; Hollywood Ballroom (Mezzanine Level of the Loews Hollywood Hotel); Mount Olympus Room (3rd Floor of the Loews Hollywood Hotel); Ray Dolby Ballroom Terrace; Salon 1; Salon 2; Theatre: Chinese 6

Tuesday, October 23

08:00  Continental Breakfast
09:00  Welcome
09:15  Opening Keynote
10:15  Break
10:45  Topics in File-Based Work Flows (Part 1) / Image Processing (Part 1)
12:00  Exhibit Hall Opens
12:30  Industry Luncheon
14:15  Topics in File-Based Work Flows (Part 2) / Image Processing (Part 2)
15:45  Break in Exhibit Hall
16:15  Topics in File-Based Work Flows (Part 3) / Image Processing (Part 3)
18:00  Opening Night Reception in Exhibit Hall

Wednesday, October 24

08:00  Continental Breakfast
09:00  SMPTE Timed Text for Captioning Internet-delivered Content (Part 1) / High Performance Networks / Advances in 3D (Part 1)
10:30  Exhibit Hall Open; Break in Exhibit Hall
11:00  SMPTE Timed Text for Captioning Internet-delivered Content (Part 2) / Cinematography and Post (Part 1)
12:30  Fellows Lunch
14:15  Migrating to the Cloud: Understanding the Opportunities and Challenges (Part 1) / Cinematography and Post (Part 2)
16:15  Migrating to the Cloud (Part 2) / Cinematography and Post (Part 3)
18:30  Annual Membership Meeting

Thursday, October 25

08:00  Continental Breakfast
09:00  Olympics Asset Management and Archive / Advances in 3D (Part 2)
10:30  Exhibit Hall Open; Break in Exhibit Hall
11:00  Ultra-High-Definition Imaging / Evolving Broadcast Infrastructure (Part 1)
14:00  Sound Techniques (Part 1) / Evolving Broadcast Infrastructure (Part 2)
15:30  Break
16:00  Sound Techniques (Part 2) / Evolving Broadcast Infrastructure (Part 3)
19:00  Honors and Awards Dinner & Ceremony
22:15  Afterparty with SMPTE Jam

Tuesday, October 23

08:00 - 09:00

Continental Breakfast

Room: Ray Dolby Ballroom Terrace

09:00 - 09:15

Welcome

Rooms: Salon 1, Salon 2

09:15 - 10:15

Opening Keynote

Room: Salon 1
09:15 Keynote Address
Anthony Wood (Roku, USA)
Founder & CEO
Presenter bio: A pioneer and innovator in TV and digital media, Anthony Wood is the Founder and CEO of Roku, a name that means "six" in Japanese to represent his sixth company. In the early days of Roku, Anthony also served as the vice president of Internet TV at Netflix, where he developed what is known today as the Roku streaming player, originally designed as the video player for Netflix. Prior to Roku, Anthony invented the digital video recorder (DVR) and founded ReplayTV, where he served as President and CEO before the company's acquisition and subsequent sale to DirecTV. Before ReplayTV, Anthony was Founder and CEO of iBand, Inc., an Internet software company sold to Macromedia in 1996. The code base developed by Anthony at iBand became a central part of the original core code of Macromedia Dreamweaver, now known as Adobe Dreamweaver. After selling iBand, Anthony became the vice president of Internet Authoring at Macromedia. Earlier in his career, Anthony was Founder and CEO of SunRize Industries, a supplier of hardware and software tools for non-linear audio recording and editing. Anthony holds a bachelor's degree in electrical engineering from Texas A&M University.

10:15 - 10:45

Break

10:45 - 12:15

Topics in File-Based Work Flows (Part 1)

Applying File-Based Workflows
Room: Salon 1
10:45 The Pipe Dream Becomes Real: Advertising Workflows Come of Age
Christopher J Lennon (MediAnswers, USA); Harold Geller (Advertising Digital Identification, LLC (Ad-ID), USA)
The past year has been incredibly eventful in the development of advertising workflows. We can now embed a digital version of the advertising slate with delivered commercials, using the AMWA's AS-12. BXF can be used to exchange the schedule of commercials, instructions to move them from point A to B, and their metadata. It's also developing the ability to move copy rotation instructions from Agency to Broadcaster, filling the biggest gap existing today in the workflow. Ad-ID bridges all of this, making unique commercial identification simple. With an ever-expanding array of delivery platforms, as well as targeted advertising, maximum efficiency for advertising workflows has gone from a nice idea to a must-have. The good news is that we now have the tools to make it all work. We'll show how the whole thing fits together today, using industry standard approaches, taking the pipe dream of automated advertising workflows to reality.
Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely-used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he's a Porsche racer, and heads up a high performance driving program.
Presenter bio: Harold S Geller is Chief Growth Officer of Advertising Digital Identification LLC (Ad-ID), a US-based advertising-metadata system (the UPC code for ads across all platforms) that is a joint venture of the American Association of Advertising Agencies (4A's) and the Association of National Advertisers (ANA). Harold speaks and writes extensively regarding interoperability, digital workflow and metadata in advertising, and is the co-author of four white papers on the subject. Harold's advertising career spans nearly 30 years in the United States and Canada. He has worked in media buying/planning, account management, financial, and technology roles at MindShare, Ogilvy & Mather, McCann Erickson, and the now-defunct Ted Bates and Foster Advertising. Harold is a graduate of radio and television broadcasting from Seneca College (Toronto, Ontario, Canada).
11:15 Lessons Learned Implementing FIMS 1.0
Ian Hamilton (Signiant, Inc., Canada); Tony Vasile (Signiant, Canada)
This presentation describes practical lessons learned while implementing service interfaces in accordance with the Framework for Interoperable Media Services (FIMS) 1.0. FIMS is a framework of service definitions for implementing media-related operations using a Service-Oriented Architecture (SOA) approach. Experiences gained through building a portable test harness that implements data-driven simulators for both the Service Consumer (orchestration layer) and Service Provider interfaces will be shared.
Presenter bio: Ian Hamilton has been an innovator and entrepreneur in Internetworking infrastructure and applications for more than 20 years. As a founding member of Signiant, he has led the development of innovative software solutions to address the challenges of fast, secure content distribution over the public Internet and private intranets for many of the media and entertainment industries' largest companies. Prior to Signiant, Ian was Chairman and Vice President of Product Development at ISOTRO Network Management. He was responsible for launching ISOTRO's software business unit and created the NetID product suite before the company was successfully acquired by Bay Networks. Ian held senior management positions at Bay Networks and subsequently Nortel Networks, from which Signiant emerged. Previously Ian was a Member of Scientific Staff at Bell Northern Research performing applied research and development in the areas of Internetworking and security.
11:45 Developments in the Realization of Practical File Based Workflow Environments
David A Pease (IBM Almaden Research Center, USA); Andrew G. Setos (BLACKSTAR Engineering Inc., USA); Ed Childers (IBM, USA)
The imperative of file-based content environments has been compelling, but equally challenging. Media-based content environments have matured over virtually all of recorded history, and paradigms and tools have become so ingrained that their presence and utility have become second nature. To successfully make this transition, tools are necessary to facilitate the workflows and other attributes of a true file-based infrastructure. LTO-5, with its high density, throughput and fundamental reliability, coupled with the Linear Tape File System (LTFS), are two such innovations. The fact that both technologies are well documented, standardized and multi-sourced is another essential component and a leading indicator of a positive contribution to file-based environments. This paper will discuss the business and technical necessities of moving to a file-based workflow, the history and attributes of LTO tape, the development and features of LTFS, and how all of these pieces can come together to create a modern environment.
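For readers unfamiliar with LTFS, the practical point is that a mounted LTO/LTFS volume is addressed with ordinary file operations. A minimal Python sketch, assuming a hypothetical mount point and file names (illustrative only, not from the paper):

    import shutil
    from pathlib import Path

    TAPE = Path("/mnt/ltfs")            # hypothetical LTFS mount point
    if TAPE.is_dir():
        # List what is on the tape exactly as if it were a disk volume.
        for f in sorted(TAPE.rglob("*.mxf")):
            print(f, f.stat().st_size)
        # Restore one clip with a plain file copy; no proprietary tape API needed.
        shutil.copy2(TAPE / "reel01" / "clip0001.mxf", "/tmp/clip0001.mxf")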
Presenter bio: David Pease has worked in the computer industry for more than 40 years. After running a successful consulting company for many years, he joined IBM Research in the early 1990s. At IBM he has concentrated on storage-related research; he has contributed to various projects, including Tivoli Storage Manager (TSM), the DVD standard and specifically the UDF file system, and most recently he led the development of LTFS (the Linear Tape File System). He received his Master's and Ph.D. in Computer Engineering from U.C. Santa Cruz.
Presenter bio: Andrew G. Setos has spent his entire career at the cutting edge of audio-visual innovation, from production to distribution and exhibition and in virtually every form of content play. He has collaborated with some of the industry's most prolific creative talents and business executives to help realize their visions. He is currently CEO of BLACKSTAR Engineering Inc., a firm that advises on the intersection of technology and media. Most recently he was President, Engineering for the Fox Group, where he was involved in almost every aspect of content creation and distribution. Prior to Fox, he was the lead engineering and operations executive at the company that launched MTV, VH-1 and Nickelodeon. His role is summarized in the recently published book I Want My MTV. Before that he spent several years at WNET as Chief Engineer, where he was involved in many innovative, award-winning productions, such as Live from Lincoln Center, Dance in America, Bill Moyers Journal and the MacNeil/Lehrer News Hour. He has applied for and been granted a variety of patents. Along the way he has received many distinctions, including being elected a Fellow of the Society of Motion Picture and Television Engineers and accepting three Emmys for Engineering from the Academy of Television Arts & Sciences, the most recent being the Charles F. Jenkins Lifetime Achievement Award. Andrew holds a Bachelor of Science degree from Columbia University School of Engineering and Applied Science.

Image Processing (Part 1)

Real-Time Workflows
Room: Salon 2
10:45 GPU-Based Real-Time 4K RAW Workflows
Thomas True and Andrew Page (NVIDIA Corporation, USA)
Advances in digital imaging technology are fundamentally changing the cinema workflow and the tools artists and engineers traditionally use. Relatively inexpensive 4K resolution digital motion picture cameras capable of capturing and storing RAW sensor data with a wide dynamic range, high color gamut, and high bit depths all at frame rates that have traditionally been the domain of broadcast video are now available. Implementing a RAW workflow that provides real-time interactivity and a production path where all artistic choices are non-destructive requires a great deal of compute as every image displayed needs conversion from RAW sensor data to display oriented imagery and colorimetry. This highly parallel operation is well suited to the capabilities of modern graphics processing units (GPUs). This paper will present best practices for optimal GPU compute core and memory usage as well as efficient data transfer schemes for sensor data processing and display.
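As a rough illustration of why this workload maps well onto GPUs, the sketch below performs a toy bilinear demosaic of an RGGB Bayer mosaic with NumPy/SciPy; every output pixel depends only on a small neighbourhood, which is exactly the kind of embarrassingly parallel per-pixel kernel the paper offloads to GPU compute cores. This is a generic textbook step, not the authors' pipeline:

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic_rggb(raw):
        """Toy bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # interpolate green
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # interpolate red/blue
        r = convolve(raw * r_mask, k_rb)
        g = convolve(raw * g_mask, k_g)
        b = convolve(raw * b_mask, k_rb)
        return np.stack([r, g, b], axis=-1)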
Presenter bio: Tom is a Senior Applied Engineer for Media & Entertainment in NVIDIA's Professional Solutions Group, where he focuses on the use of GPUs in broadcast, video and film applications ranging from pre-visualization to post production and live to air. Prior to joining NVIDIA, Tom was an Applications Engineer at SGI. Thomas has an M.S. degree in Computer Science from the Computer Graphics Lab at Brown University and a B.S. degree from the Rochester Institute of Technology.
11:15 Dynamic Rate Control Technologies enabling Priority Based Bandwidth Allocation for IP News Gathering Networks
Shuhei Oda, Katsunori Aoki and Yosuke Endo (Japan Broadcasting Corporation, Japan)
In this paper, we propose an IP based news gathering network where terminals share bandwidth in accordance with the DiffServ model. Seamless route connection of IP networks and dynamic bandwidth allocation enables speedy and accurate coverage. Therefore, we developed two key technologies: a dynamic rate control for live video transmission and a modified TCP that considers transmission priority. This rate control adjusts each encoding rate of multiple videos that share a common path to avoid video interruption. The developed TCP allocates bandwidths at an appropriate utilization ratio with consideration of their priority while the conventional TCP allocates bandwidths equally among TCPs in the common path, and this protocol maintains backward compatibility with the conventional TCP. We evaluated these technologies by performing transmission experiments and proved that both live flows and file based flows can share network bandwidth appropriately by using the maximum bandwidth of the network.
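The core allocation idea, stated in one line, is that each flow's share of a shared link is proportional to its priority rather than equal. A minimal sketch of that rule (the flow names, weights and rates are invented for illustration, not NHK's algorithm):

    def allocate(total_kbps, priorities):
        """Split a shared path's capacity in proportion to per-flow priority."""
        weight_sum = sum(priorities.values())
        return {flow: total_kbps * p / weight_sum for flow, p in priorities.items()}

    # Example: a live feed (priority 3) and two file transfers (priority 1 each)
    # sharing a 10 Mb/s path get 6, 2 and 2 Mb/s respectively.
    print(allocate(10_000, {"live": 3, "file_a": 1, "file_b": 1}))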
Presenter bio: Shuhei Oda has 7 years' experience in the broadcasting industry, starting with program production and system operation in 1999, before he moved into NHK Science and Technical Research Laboratories. He has been working in the area of video transmission systems for program production using IP networking technologies since 2006. His research interest covers traffic control and management of IP networks for the purpose of program production and exchange, and practical technologies dedicated to reliable and speedy program production.
11:45 Real Time File System for Content Distribution
Heiko Sparenberg and Siegfried Foessel (Fraunhofer IIS, Germany)
This presentation gives a deep view into the development of a file system, especially designed for scalable media files including JPEG 2000 and H.264 SVC. By applying specially developed techniques, including the Substitution Strategy, a real-time capable file-system can be built, even if the mass storage, or the interface to it, is too slow to deliver the data in the desired time. Rather than skipping whole files, new caching strategies will be shown that again, take advantage of the file-inherent scalability. The presented system also comprises an advanced user-rights-management that allows for granting access-rights to certain parts of a scalable file, rather than granting rights to whole files. Users will therefore get a different version of an image or video, dependent on their current access-rights. Due to the Media Repackaging Component, these customized versions will be generated on the fly, if a user requests it.
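One way to picture the substitution idea is that when the storage cannot deliver a whole scalable file in time, the file system returns only as many quality layers as fit the per-frame budget. A toy sketch under that assumption (layer sizes and the budget are invented; the actual Substitution Strategy is described in the presentation, not here):

    def layers_to_read(layer_bytes, byte_budget):
        """Pick the largest prefix of quality layers (base first) that fits the budget."""
        chosen, used = [], 0
        for i, size in enumerate(layer_bytes):          # layers ordered base -> enhancement
            if used + size > byte_budget:
                break
            chosen.append(i)
            used += size
        return chosen

    # JPEG 2000-style frame with a 200 kB base layer and three enhancement layers:
    print(layers_to_read([200_000, 150_000, 150_000, 300_000], byte_budget=500_000))  # -> [0, 1, 2]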
Presenter bio: Heiko Sparenberg, born in 1977, received his Diploma degree in Computer Science in 2004 and a Master's degree in 2006. He joined Fraunhofer IIS in Erlangen as a software engineer in 2006. Today, Heiko is Head of the Digital Cinema group and responsible for several software developments, e.g. the easyDCP software suite for digital cinema. His research topics are scalable media-file management, post-production software in the field of digital cinema, and image-compression algorithms, with a focus on scalable codecs including JPEG 2000 and H.264 SVC.

12:00 - 20:00

Exhibit Hall Opens

Room: Exhibit Hall

12:30 - 14:00

Industry Luncheon

Room: Hollywood Ballroom-Mezzanine Level of the Loews Hollywood Hotel
12:30 Luncheon Keynote: NBC's Innovative Use of Technology at the 2012 Olympics
Darryl Jefferson (NBC Universal, USA)
Please join SMPTE's Luncheon Keynote Speaker, Darryl Jefferson of NBC's Olympics International Broadcast Center, as he presents an intriguing and enlightening view of the behind-the-scenes innovations and advanced technologies that enabled NBC to bring the 2012 London Olympics to virtually every corner of the world. Planned topics include a "Big Picture" look at transmission, inter-continental production (@home efforts), the project's overall size and scope, the MAM, the Highlights and Streaming Factories, and new media deliverables. This is a "don't miss" event!
Presenter bio: With a career that cuts across television, film, and sound, Darryl Jefferson was named Director of Post Production Operations for NBC's Olympic division in 2008. Jefferson oversees and maintains the division's Stamford facility, where he also acts as the Highlights Factory Project Manager, and directs technical operations for NBC Sports Digital Group. In his current position, Jefferson took the Highlights Factory from conception through implementation at the London Olympic Games, having done the same at the Vancouver 2010 Winter Games. In London, the system delivered web, broadband, live stream, and VOD clips during the Olympics, creating 3000 highlight packages in 17 days. During the games, NBCOlympics.com saw upwards of 57 million unique visitors and 1.5 billion page views, shattering the records of previous games and taking content delivery technology to a whole new level. Jefferson is a four-time Emmy Award winner for New Approaches in Broadcasting, Short Form (2008, 2009, 2010), as well as a 2010 Technical Emmy Award winner (Tech Team, Studio).

14:15 - 15:45

Topics in File-Based Work Flows (Part 2)

Performance Issues in File-Based Workflows
Room: Salon 1
14:15 Performance Parameters in File Based Workflows
Karl E Paulsen (Diversified Systems, USA)
Establishing a high system performance value for rich-media file-based workflows is tightly coupled to storage bandwidth. Configuring small-scale storage solutions can be straightforward and simple. However, larger enterprise-class systems that intend to grow, that must bridge other media platforms and peripherals, and that need to support multiple sets of clients and associated workflows require a proper storage solution with few limitations. The hidden issues that become performance killers in a large-scale storage solution are frequently misunderstood. This paper will present some of those hidden parameters; provide examples of how systems can be designed for scalability in both capacity and bandwidth; and show that, by proper planning and implementation, the consequences of a poorly designed, under-rated system can be alleviated.
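As a back-of-the-envelope example of the kind of parameter the paper discusses, aggregate storage bandwidth is driven by concurrent streams times stream bit rate, plus headroom for rebuilds and background transfers. A hedged sketch with invented numbers (the 40% headroom and client counts are illustrative, not recommendations from the paper):

    def required_storage_bandwidth(clients, streams_per_client, stream_mbps, headroom=0.4):
        """Very rough sustained-throughput estimate for a shared media store, in MB/s."""
        payload_mbps = clients * streams_per_client * stream_mbps
        return payload_mbps * (1 + headroom) / 8          # bits -> bytes

    # 20 edit clients, each pulling 2 streams of 220 Mb/s mezzanine material:
    print(f"{required_storage_bandwidth(20, 2, 220):.0f} MB/s sustained")   # ~1540 MB/s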
Presenter bio: As Chief Technologist at Diversified Systems, Karl Paulsen provides technology-driven engineering services for projects related to media asset management, advanced digital video systems, workflow, and media storage technologies. Actively involved in television engineering for over 35 years, Karl has held positions as CTO, VP Engineering and Director of Engineering for leading systems integration companies, broadcast television stations, mobile, CGI and post-production companies. Karl is a SMPTE Fellow, Standards Committee participant, SBE Lifetime Certified Professional Broadcast Engineer, and an IEEE member. He is a recognized author and industry technologist, publishing over 150 articles for TV Technology magazine in his continuing series 'Media Servers', which focuses on servers, storage, file-based workflow and media management. Karl authored the books 'Moving Media Storage Technologies' and 'Video and Media Servers: Applications and Technology', and has held SMPTE manager and chair positions for the Pacific Northwest Section.
14:45 Optimised IP multicast architectures for real-time digital workflows
Thomas Kernen (Cisco, Switzerland); Javed Asghar (Cisco Systems USA, USA)
Real-time digital workflows are now commonly being distributed over Internet Protocol (IP) networks, across the entire broadcast chain from production to distribution. Many deployments are leveraging IP multicast for optimising the delivery of a source to a set of diverse end points such as video servers, quality control units, video monitoring, time & sync slaves, etc. This paper focuses on the evolution of IP multicast delivery, architectural best practices, security considerations and hardware performance requirements.
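For context, the receiver side of such a distribution is simply an IGMP group join; the well-known Python socket recipe below subscribes to a multicast group and reads one datagram. The group address and port are hypothetical examples, not values from the paper:

    import socket
    import struct

    GROUP, PORT = "239.10.10.10", 5004        # hypothetical group/port for a video flow

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Ask the network (via IGMP) to deliver the group's traffic to this host.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, source = sock.recvfrom(2048)        # blocks until one datagram arrives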
Presenter bio: Thomas Kernen is a Consulting Systems Engineer in Cisco's European Enterprise Networking architecture team. His main area of focus is defining architectures for transforming the broadcast industry to an all-IP video infrastructure. Thomas is a member of the IEEE Communications and Broadcast Societies, and the Society of Motion Picture & Television Engineers (SMPTE). He is active within a number of trade and industry organisations including the Digital Video Broadcasting (DVB) Project, the SMPTE Standards Committees and the European Broadcasting Union (EBU) working groups. Prior to joining Cisco, Thomas spent ten years with various telecoms operators, including an FTTH triple-play operator, for whom he developed their video architecture.
15:15 Being The Change You Wish To See: Changing Broadcast Schedules Right Up To Air
Christopher J Lennon (MediAnswers, USA)
The Internet has caused us to think about the words "dynamic" and "media" in new ways. Viewers now have access to whatever they want, whenever they want. Advertising is no exception. Advertisers expect the right ad to be shown to the right person, on the right device. This includes changing their minds about what they want to advertise, when, and where, right up to the time that the viewer sees the ad. Sounds like a nightmare, doesn't it? It used to be. Fortunately, we now have SMPTE's Broadcast Exchange Format (BXF), which is perfect for this task. We'll look at real-world cases in which BXF is enabling dynamically changing delivery of content, right up to the time the viewer sees it. So, don't fear change, embrace it. Oh, and you can expect to not only save money doing this, but also find new revenue.
Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely-used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he's a Porsche racer, and heads up a high performance driving program.

Image Processing (Part 2)

Algorithms and Compression
Room: Salon 2
14:15 HEVC - Enabling commercial opportunities through next generation compression technology
Lukasz Litwic (Ericsson Television, United Kingdom)
High Efficiency Video Coding (HEVC) is near completion by the ITU-T | ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC). The aim of HEVC is to revolutionize the compression world with a potential 50% bitrate saving over AVC (H.264 / MPEG-4 AVC), and even more dramatic bandwidth savings compared to MPEG-2. HEVC is already attracting much interest from acquisition to distribution and delivery to the home over all networks. Forecasts say 90% of IP traffic will be video by 2015, making HEVC an attractive enabler for new types of video consumption, from mobile devices served over unmanaged networks to high-end 4K TV to the home. This paper compares simulation results from the JCT-VC HEVC test model against an industry-leading AVC encoder. The paper also examines the behavior of selected HEVC tools that facilitate compression gains over AVC. Finally, it explores the significance of these efficiency gains for a variety of applications.
Presenter bio: Lukasz joined Ericsson Television in 2007 and has worked on various aspects of video processing and compression research. The Ericsson Television Compression Algorithms R&D group specialises in video compression performance. The group has developed an advanced compression research model to investigate pre-processing, multipass encoding and general coding performance. Knowledge from this work has formed the foundation of Ericsson Television's real-time encoding products. Looking forward, the group is focused on researching and developing the technology to meet the future compression demands of our customers. MEng in Electronics and Telecommunications, Gdansk University of Technology, Poland. PhD candidate, University of Surrey, UK.
14:45 Automatic Interlace or Progressive Video Discrimination
Manish Pindoria and Tim Borer (BBC, United Kingdom)
Video content originates from a wide variety of sources. Even within one programme, several different video technologies may have been used during production. This paper discusses an algorithm that is able to reliably identify progressive and interlace frames. The algorithm is based on calculating a metric based on the degree of "interlacing artefacts" produced when adjacent fields from different frames are re-interleaved to reform a frame. The metrics are analysed over multiple frames to detect whether the material originates from a progressive or interlace source. This process has successfully been adapted to correct film-phase errors found in telecined archive material.
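A minimal NumPy sketch of the measurement the abstract describes: weave fields from adjacent frames back into one frame and score the vertical "combing" energy; a high score suggests the fields belong to different time instants (interlaced origin), a low score suggests progressive origin. The details and any threshold are illustrative, not the BBC algorithm:

    import numpy as np

    def combing_score(frame_a, frame_b):
        """Weave field 0 of frame_a with field 1 of frame_b and measure
        line-to-line differences, which spike when interlacing artefacts appear."""
        woven = frame_a.astype(float).copy()
        woven[1::2, :] = frame_b[1::2, :]                 # re-interleave adjacent fields
        return float(np.abs(np.diff(woven, axis=0)).mean())

    # In practice one would compare this against the score of frame_a woven with
    # itself, to normalise for ordinary vertical detail, and analyse many frames.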
Presenter bio: Manish Pindoria works as an Engineer at BBC Research and Development, currently focusing on digitisation and signal processing for archive applications. Prior to the BBC he designed image processing algorithms and hardware (ASIC, FPGA) for products including broadcast reference monitors, 4k camera systems and medical image processing units for Sony Broadcast and Professional Research Labs (BPRL). Manish holds a masters degree in Engineering Science from Oxford University. He is the co-inventor of 6 patents.
15:15 Spatial Concealment for Damaged Images Using H.264/AVC Intra Prediction and Neighborhood Cliques
Seyfullah Halit Oguz (Qualcomm Incorporated, USA)
Intra prediction methods, introduced by H.264/AVC and furthered by HEVC, beyond enhancing intra coding efficiency also provide a potent tool for spatial error/loss concealment and digital film restoration (e.g., scratches). In this paper, a novel H.264/AVC intra prediction based algorithm for spatial concealment is introduced. The proposed algorithm utilizes reliable intra prediction direction information from available neighboring regions in the same image (or video frame) and synthesizes intra prediction directions most suitable for erroneous/lost/damaged image regions. The synthesized intra prediction directions are used to conceal the underlying artifacts through pixel domain interpolation. Distinguishing features of the current work contributing to its success are its use of (a) an accuracy assessment and consequent weighting of available neighbors' information, and (b) conditional propagation of available neighbors' information based on the concept of 'neighborhood cliques'. Both features significantly improve the reliability of interpolation results. The proposed framework enables using both causal and non-causal information.
Presenter bio: Seyfullah Halit Oguz received his B.Sc. (1987) and M.Sc. (1990) degrees in Electrical and Electronics Engineering from the Middle East Technical University and Bilkent University, respectively, in Ankara, Turkey. He received his Ph.D. degree in Electrical Engineering in 1999 from the Electrical and Computer Engineering Department of the University of Wisconsin – Madison. In his engineering career prior to joining Qualcomm Inc., Dr. Oguz worked for Los Alamos National Laboratory (Group NIS-1), EMC Corporation and Sand Video Incorporated. He joined Qualcomm Inc. in July 2003, initially taking part in the MediaFLO project. Currently, Dr. Oguz is a Senior Staff Engineer member of the Multimedia Team in the Strategic IP Division of Qualcomm Inc. Dr. Oguz is the author/co-author of 24 refereed journal and conference papers and has served as a reviewer for many prominent journals and conferences. He holds 15 granted US patents. Dr. Oguz is a member of SMPTE, IEEE and ACM.

15:45 - 16:15

Break in Exhibit Hall

Room: Exhibit Hall

16:15 - 17:45

Topics in File-Based Work Flows (Part 3)

The Unexpected in File-Based Workflows
Room: Salon 1
16:15 High-speed format converter with intelligent quality checker for file-based system
Kenichiro Ichikawa, Takuji Shimmi, Yasunori Iguchi and Kentaro Higashijima (Japan Broadcasting Corporation, Japan)
NHK broadcasting is shifting to file-based systems for its TV production and playout systems including VTRs and editing machines. A variety of codecs and Material eXchange Format (MXF) formats have been adopted for broadcast equipment. These include MPEG-2/AVC and OP1a/OP-Atom. Video files need to be converted into the selected codec and format to operate efficiently. The quality of video and audio must be checked during this conversion process because degradation and noise may occur. This paper describes equipment that can quickly convert files to multiple formats as well as intelligently check the quality of video and audio during the conversion. The equipment automatically adjusts thresholds to detect errors in the quality check, depending on the type of codec and the spatial frequency of each area, which is divided into 16 sub-areas. Furthermore, this can be done in less time than the actual video length by optimizing the software processing performance.
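The per-area adaptation can be pictured as follows: split each frame into a 4 x 4 grid of sub-areas and scale the error-detection threshold with the local spatial activity, so busy regions tolerate more deviation than flat ones. A toy NumPy sketch under that assumption (the constants and the activity measure are invented; the abstract does not publish NHK's actual rules):

    import numpy as np

    def adaptive_thresholds(frame, base=4.0, gain=0.1, grid=4):
        """Return a grid x grid array of per-sub-area error thresholds that grow
        with local spatial activity (mean absolute horizontal gradient)."""
        h, w = frame.shape
        thr = np.empty((grid, grid))
        for i in range(grid):
            for j in range(grid):
                block = frame[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid].astype(float)
                activity = np.abs(np.diff(block, axis=1)).mean()
                thr[i, j] = base + gain * activity
        return thr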
Presenter bio: Kenichiro Ichikawa received his B.S. degree from the Keio University in 2002 and M.S. degree from the Keio University in 2004. Following graduation, he joined NHK (Japan Broadcasting Corp) and built his career as a video engineer through studio program production and live telecasts. He is currently involved in the development of Super Hi-Vision Systems, particularly video and master control systems. He belongs to Super Hi-Vision System Design & Development Division.
16:45 Corralling the Chaos of Ancillary Data within Multiple File Formats
Sara Kudrle (Grass Valley, a Belden Brand & SMPTE Western Region Governor, USA)
SDI-based workflows and formats were "iron clad" and well defined. These were the good old days when devices interconnected with ease thanks to the rigor and breadth of SMPTE Standards for SDI. Nowadays, with the great flexibility of file-based media workflows and the multitude of formats needed for different applications, we are dealing with incompatible wrappers and inconsistent or non-extendable ANC data carriage. This paper will look at these evolving workflows and the resulting Wild West of files. More specifically, the paper explores the challenges faced with the handling of ANC data such as AFD, captions, ad insertion triggers, Dolby's Dialnorm, etc., within various file formats. The paper then describes the unified and extensible approach offered by SMPTE 436M for the carriage of ANC data within MXF wrapped files. Could SMPTE 436M be the champion we need to restore order to the Wild West and corral some of the chaos?
Presenter bio: Sara Kudrle is currently the Product Marketing Manager for Monitoring and Control within the Strategic Marketing group of Grass Valley, a Belden Brand. Sara received her degree in Computer Science with a minor in Mathematics from California State University, Chico. Sara's 15-plus years as an engineer in the broadcast industry started at Tektronix, where she worked in VideoTele.com. From there, she joined Continental Electronics, working within the TV Transmitter group, where she was responsible for developing exciter control software. From there she joined Miranda/NVISION and was responsible for several projects within the Router Control group. Sara has authored several papers for NAB, PBS and SMPTE conferences and has been published in the SMPTE Motion Imaging Journal and Broadcast Engineering. Sara's paper "Fingerprinting for Solving A/V Synchronization Issues within Broadcast Environments" received the 2012 Journal Award for best article. Sara is active within SMPTE, serving on several committees and within the standards community. Sara is a current SMPTE Secretary/Treasurer and former Section Manager for Sacramento as well as the Western Region Governor for SMPTE. She is also a member of IEEE.
17:15 And the winner is... Workflows for Judging Content Submissions at Siggraph and VES
Ben Roeder (Sohonet, Inc., United Kingdom); Martin Rushworth (Sohonet, Inc., USA)
With the proliferation of formats and tools for media creation, providing a uniform arena in which to judge creative submissions for peer group recognition is a difficult and potentially labour intensive problem. This paper discusses a workflow and supporting software developed to support the uniform submission, judging, and display of content for the Visual Effects Society and ACM Siggraph Awards.
Presenter bio: Ben started at Sohonet in 2000 and is now in charge of Sohonet's global technology programme and support engineers. Ben has developed a number of groundbreaking QC and storage solutions for the media and entertainment industries, including the electronic cinema submission process for SIGGRAPH and VES awards process for many years. Ben was previously part of the Oscar and Emmy award-winning development team at Lightworks and has worked on many research projects and written a variety of articles regarding the future of media production.

Image Processing (Part 3)

Perception and the Human Visual System
Room: Salon 2
16:15 Quantitative Evaluation of Human Visual Perception for Multiple Screens and Multiple CODECs
Sean McCarthy (ARRIS, USA)
Great consumer experiences are created by a convergence of sight, sound, and story. This paper is an in-depth quantitative analysis of the neurobiology and optics of sight. More specifically, we examine how principles of vision science can be used to predict the bit rates and video quality needed to make video on everything from smartphones to Ultra HDTV a success. We present the psychophysical concepts of simple acuity, hyperacuity, and Snellen acuity to examine the visibility of compression artifacts for MPEG-2 and MPEG-4/H.264. We also take a look at the newest emerging international compression standard, HEVC. We investigate how the various sizes of the new compression units (CU, PU, and TU) in HEVC would be imaged on the retina, and what that could mean in terms of the HEVC video quality and bit rates we would likely need to deliver quality content to smartphones, tablets, HDTV, 4K TV, and Ultra HDTV.
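The retinal-geometry argument can be made concrete with a small calculation: the visual angle a pixel (or a coding unit) subtends depends only on its physical size and the viewing distance. A hedged worked example, with screen width and distance chosen purely for illustration rather than taken from the paper:

    import math

    def pixels_per_degree(screen_width_m, horizontal_pixels, viewing_distance_m):
        """How many pixels fall inside one degree of visual angle at the screen centre."""
        pixel_pitch = screen_width_m / horizontal_pixels
        pixel_angle_deg = math.degrees(2 * math.atan(pixel_pitch / (2 * viewing_distance_m)))
        return 1.0 / pixel_angle_deg

    # A 1.2 m wide display viewed from 2.7 m: HD (1920) vs Ultra HD (3840) rasters.
    print(pixels_per_degree(1.2, 1920, 2.7))   # ~ 75 px/deg
    print(pixels_per_degree(1.2, 3840, 2.7))   # ~150 px/deg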
16:45 Perceptual Signal Coding for More Efficient Usage of Bit Codes
Scott Miller (Dolby Laboratories, Inc., USA); Mahdi Nezamabadi and Scott Daly (Dolby Laboratories, USA)
As the performance of electronic display systems continues to increase, the limitations of current signal coding methods become more and more apparent. With bit depth limitations set by industry standard interfaces, a more efficient coding system is desired to allow image quality to increase without requiring expansion of legacy infrastructure bandwidth. A good approach to this problem is to let the human visual system determine the quantization curve used to encode video signals. In this way optimal efficiency is maintained across the luminance range of interest, and the visibility of quantization artifacts is kept to a uniformly small level.
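As a simple illustration of letting visual sensitivity set the quantization curve, the sketch below spaces code values so that each step is a fixed relative (Weber-law) change in luminance rather than a fixed absolute change. This is only a generic example of perceptually uniform coding over an assumed luminance range, not the specific curve proposed by the authors:

    import numpy as np

    def weber_code_values(l_min, l_max, n_codes):
        """Luminance levels spaced so each code step is the same *relative* change,
        i.e. logarithmic spacing, approximating constant visibility per step."""
        return np.geomspace(l_min, l_max, n_codes)

    levels = weber_code_values(0.01, 10_000.0, 1024)      # 10-bit code over 0.01..10,000 cd/m^2
    step_ratio = levels[1] / levels[0]                    # identical for every adjacent pair
    print(f"each step changes luminance by {100 * (step_ratio - 1):.2f} %")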
Presenter bio: Scott Miller is a senior member of the technical staff at Dolby Laboratories, where he serves in the Imaging Research group. He specializes in image display technology and video signal processing, most recently working on Dolby's Professional Reference Monitor. He received a B.S. in electrical engineering from Cornell University and has spent nearly 30 years working in the video industry, including several years with Panasonic Research, where he helped develop the Emmy award-winning Universal Video Format Converter.
17:15 Human Perception & Advancements in File-Based Quality Control
Eric Carson and Atul Ravindran (Digimetrics, USA)
When retrieving video pictures from digital video tape or film prints, artifacts are generally introduced that are difficult to detect without use of the human visual system, since many of these artifacts do not have a common, mathematically definable pattern to them. These artifacts can include film tearing, film dirt, analog noise, block-based digital drop outs and others. This paper covers a newly designed metric and the implementation methods used to automatically find these types of artifacts without need of an external reference, functioning and locating artifacts in substantially the same way as the human visual system. The paper also shows the viability of this metric in a system, and how this metric is useful and cost-saving for file-based content preparers compared to existing, manual processes for content review.
Presenter bio: Eric Carson is responsible for the business management of Digimetrics for DCA, Inc. He is the primary architect of the underlying technology of Digimetrics' Aurora quality control software, and has designed several metrics and methods that are used by broadcasters, cable operators and VOD distributors around the world.
Presenter bio: Atul Ravindran is a software engineer at DCA Inc. and is the co-author of some of the quality algorithms used in the automated QC software 'Aurora'.

18:00 - 20:00

Opening Night Reception in Exhibit Hall

Room: Exhibit Hall

Wednesday, October 24

08:00 - 09:00

Continental Breakfast

Room: Ray Dolby Ballroom Terrace

09:00 - 10:30

SMPTE Timed Text for Captioning Internet-delivered Content (Part 1)

New Regulations and Implementations
Room: Salon 1
09:00 Compliance with FCC Rules for IP Distribution of Video Programming
Alison Neplokh (Federal Communications Commission, USA)
In January, 2012, the Federal Communications Commission adopted rules requiring closed captioning of IP-delivered video programming that has aired on television. The rules apply to video programming owners (i.e., copyright holders), video programming distributors (i.e., websites), and manufacturers of apparatus designed to receive or play back video programming. These rules begin to take effect on September 30, 2012. This presentation will describe the rules, compliance, and the status of SMPTE Timed Text as a "safe harbor" for compliance. It will also cover the status of ongoing accessibility initiatives at the FCC.
Presenter bio: Ms. Neplokh is the Chief Engineer of the Media Bureau at the Federal Communications Commission where she advises the Bureau Chief on a variety of technology issues related to cable television, broadcast television, and cable broadband service. She also serves as the FCC Co-Chair of the Video Programming Accessibility Advisory Committee, which is charged with providing recommendations on closed captioning and video description of television programming. Prior to joining the FCC, she worked as a software engineer at a telecommunications equipment manufacturer, designing the internals of a high-speed IP router. Before that, she worked for Carnegie Mellon University in the systems development group, writing software to monitor the campus network. Ms. Neplokh has a B.S. in Electrical and Computer Engineering from Carnegie Mellon University and a J.D. from the Georgetown University Law Center.
09:30 Closed Captioning Challenges for IP Video Delivery
Jason Livingston (CPC Closed Captioning, USA)
New FCC regulations require closed captions from TV broadcasts to be available when these videos are delivered by IP. This presents a number of challenges in content authoring, asset management, and delivery. To address these challenges, SMPTE created a new specification called SMPTE 2052 (SMPTE Timed Text). This paper will discuss the new regulations and best practices for the different workflows involved, such as: file-based authoring of closed captions for broadcast and IP compatibility, translating existing CEA-608 and CEA-708 broadcast closed captions data into SMPTE 2052, common pitfalls and workarounds, and current SMPTE activities to help address these challenges.
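For orientation, SMPTE 2052 is profiled on W3C TTML, so a converted caption ultimately ends up as timed paragraph elements in an XML document. The helper below emits a minimal generic TTML skeleton for one (start, end, text) cue; it deliberately omits the SMPTE-TT styling, regions and CEA-608/708 tunneling features a real conversion would add, and the cue content is invented:

    def cue_to_ttml(begin, end, text):
        """Wrap a single caption cue in a minimal TTML document (illustrative only)."""
        return f"""<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
      <body>
        <div>
          <p begin="{begin}" end="{end}">{text}</p>
        </div>
      </body>
    </tt>"""

    print(cue_to_ttml("00:00:01.000", "00:00:03.500", "Captions follow the programme to the web."))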
Presenter bio: Jason Livingston is a developer and product manager with CPC Closed Captioning. He is well known for providing closed captioning software solutions to the industry. His recent projects include development of captioning software with speech recognition capabilities, and implementation of the latest SMPTE and CEA closed captioning standards.
10:00 Post-Deployment Considerations for use of SMPTE Timed Text
Craig Cuttner (HBO, USA)
The US Federal Communications Commission has selected SMPTE Timed Text (SMPTE-TT; SMPTE ST 2052-1) as the "safe harbor" format for broadband (IP) captioning of previously televised content. For many content providers, the deadline to begin captioning IP-delivered content has passed and implementation is underway. This presentation provides a content provider's deployment story.
Presenter bio: Craig Cuttner is senior vice president, Advanced Technology, for Home Box Office, responsible for all projects related to advanced technology architecture in the Technology Operations area. He oversees the planning of distribution technology architecture used to serve HBO's core and new business platforms, and the establishment of technical standards for new technologies of interest to the company. He was named to this position in November 2003. Previously, he was vice president, Technology. Cuttner joined HBO in 1982 as a system engineer. Cuttner has been active in HDTV since the late 1980s, contributing to many aspects of HDTV industry-wide. He has also been involved in strategic work since the mid 1990's on video on demand. He was named a Fellow in the Society of Motion Picture and Television Engineers in 2000, is a member of the Society of Cable Telecommunications Engineers Engineering Committee and is chair of SCTE Digital Video Subcommittee Working Group 1 on Encoding and also the National Academy of Television Arts and Sciences Technical Emmy Committee. Cuttner has over one dozen patents and patents pending. Cuttner holds a BS degree in Industrial Management from Georgia Tech.

High Performance Networks

Room: Salon 2
09:00 Leveraging Fiber Properties to Our Advantage
John Beatty, Kimberly Allen, Richard Zahm and John Bradford (Fiber Core Networks, USA)
A strand of optical fiber is inherently thin, flexible, and lightweight. How can we leverage these properties to improve the fiber installation process and make it easier to adapt to changing facility needs? A new infrastructure/installation technology called "Air Blown Fiber Infrastructure" facilitates this approach. Using a point-to-point network of high-density tubes as a highway, 3000 ft. of 24-strand fiber can be blown (installed) from source to destination across a facility in just 30 minutes. Once the tube network is in place, changes can be made at a fraction of the time and cost of conventional fiber networks, without disruption to the network or the facility. Technical discussions include: what ABF looks like and the science behind it; design considerations (intra-building, campus); tube bundle specifications and limitations; fiber bundle installation considerations and options; jetting specifications, limitations and testing; and termination options.
Presenter bio: John brings a wealth of knowledge to Fiber Core Networks as Director of Operations. His unique understanding of broadcast, cable, and communication systems was gained over the last 30 years working with such industry leaders as Comcast, CNN, and Turner Entertainment Networks. During his career he has held positions as CATV Technical Engineer, Broadcast Design Engineer, Manager of Headends, Director of Engineering, and now Director of Operations. It is this experience that allows him to work with clients to develop the ideal solution for both their enterprise infrastructure strategies as well as individual systems design.
09:30 Trends in wireless high-bandwidth display technology
Peter H Putman (ROAM Consulting LLC, USA)
The newest generation of high-resolution digital display interfaces now has a new face: Wireless connectivity. Several systems, including generic WiFi (802.11)-based products, have already come to market. Two of them - 6 GHz wireless high-definition interface (WHDI) and 60 GHz wireless HD (WiHD) - are competing head to head in the consumer electronics space, while a wide-channel, short range implementation (ultra wideband, or UWB) is also making headway. All of these systems support full bandwidth HDMI and DisplayPort signals (10 Gb/s) with low latency, making them attractive as well for 3G camera-to-monitor links for field video production. This paper will describe each system and explain their advantages and disadvantages, as well as the differences between them. (A WHDI link can also be used to run the presentation at the conference.)
Presenter bio: Pete Putman is a technology consultant to Kramer Electronics USA; engaged in product development and testing, technology training, and educational marketing programs. Pete is also a contributing editor for Sound and Communications magazine, the leading trade publication for commercial AV systems integrators. He publishes HDTVexpert.com, a Web blog focused on HDTV, digital media, wireless, and display technologies. Pete holds a Bachelor of Arts degree in Communications from Seton Hall University, and a Master of Science degree in Television and Film from Syracuse University. He is an InfoComm Senior Academy Instructor for the International Communications Industries Association (ICIA), and was named ICIA's Educator of the Year for 2008. He is a member of both The Society of Motion Picture and Television Engineers (SMPTE) and Society for Information Display (SID).
10:00 Next-generation techniques for the protection and security of IP transport
Chin Chye Koh (Nevion USA, USA)
Few in the professional video community foresaw IP's rapid ascent to its position as a, if not the, dominant video transport protocol. To many, IP lacks the control and protection so critical to video networking. While today's IP network infrastructure, driven by the speed and capacity requirements of data centers and cloud-based services, is now capable of carrying professional video in a controlled, usable manner, significant concerns remain for the best way to control, monitor and protect services in wide area routed networks. This paper will focus on recently-developed techniques for real-time data flow protection now undergoing trials and initial deployment, including: delay offset launch network stream feeding for dual-path protection (enabling simultaneous network hits on dual-path connectivity); single path protection using control techniques for dynamic end-to-end movement of data buffering (bandwidth savings and effective dual-path protection); and RTP source coherence across multiple sources for seamless source and destination protection.
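The dual-path idea reduces, at the receiver, to merging two copies of the same RTP flow by sequence number: forward the first copy of each packet to arrive on either path and discard the duplicate. A toy sketch of that merge, not any particular vendor's implementation, and ignoring sequence-number wrap-around:

    from collections import deque

    class DualPathMerger:
        """Forward each RTP sequence number once, whichever path delivers it first."""
        def __init__(self, history=2048):
            self.seen = set()
            self.order = deque()
            self.history = history

        def accept(self, seq):
            if seq in self.seen:
                return False                   # duplicate from the other path: drop
            self.seen.add(seq)
            self.order.append(seq)
            if len(self.order) > self.history: # bound memory for a long-running flow
                self.seen.discard(self.order.popleft())
            return True                        # first arrival: forward downstream

    merger = DualPathMerger()
    print([merger.accept(s) for s in (1, 2, 2, 3, 1, 4)])   # [True, True, False, True, False, True]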
Presenter bio: Chin Chye Koh holds a Ph.D. and M.Sc. in Electrical and Computer Engineering from the University of California Santa Barbara for work on the perception of visual quality in relation to image and video compression. He received his B.Sc. degree in Electrical Engineering from Washington State University. As Senior Solutions Architect at Nevion USA, he has responsibility for the development of system solutions primarily focused on contribution video transport in managed media networks. Prior to his position as Solutions Architect, Dr. Koh was Product Manager for the Ventura line of modular video transport solutions and before that, Member of Technical Staff responsible for algorithm research and development for video compression and transport solutions. His post-graduate work included positions at Intel Corporation in Arizona and Philips Research in The Netherlands. Dr. Koh was also a research and development engineer at Pepperl+Fuchs, Singapore, where he developed sensor modules for factory automation.

Advances in 3D (Part 1)

3D Production
Room: Theatre: Chinese 6
09:00 Unconstrained 2D to Stereoscopic 3D Image and Video Conversion using Semi-Automatic Energy Minimization Techniques
Raymond Phan, Richard J Rzeszutek and Dimitri Androutsos (Ryerson University, Canada)
We present a method for semi-automatically converting unconstrained 2D images and videos into stereoscopic 3D. User-defined strokes, corresponding to a rough estimate of the scene depths, are drawn on the image or over several keyframes. The remaining depths are then solved for, producing depth maps used to create stereoscopic 3D content. For video, to minimize effort, only the first frame is labelled, and the labels are propagated over all frames by a robust tracking algorithm. Our work combines the merits of two energy minimization techniques: Graph Cuts and Random Walks. Current efforts rely on automatic conversion or on manual conversion by rotoscopers. The former prohibits user intervention or error correction, while the latter is time consuming and prohibits use in smaller studios. Semi-automatic conversion is a compromise that allows faster and more accurate conversion, decreasing the time for studios to release 3D content. Results demonstrate good quality stereoscopic image and video creation with minimal effort.
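Once a dense depth map exists, generating the second eye's view is conceptually simple: shift each pixel horizontally by a disparity proportional to its depth. The NumPy sketch below shows only that final rendering step, with crude hole handling; the paper's actual contribution, solving the depth map from sparse strokes with Graph Cuts and Random Walks, is not reproduced here:

    import numpy as np

    def render_right_view(left, depth, max_disparity=16):
        """left: H x W x 3 image; depth: H x W in [0, 1] (1 = nearest).
        Nearer pixels are shifted further; unfilled pixels remain 0 (holes)."""
        h, w = depth.shape
        right = np.zeros_like(left)
        disparity = np.round(depth * max_disparity).astype(int)
        for y in range(h):
            for x in range(w):
                xr = x - disparity[y, x]
                if 0 <= xr < w:
                    right[y, xr] = left[y, x]
        return right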
Presenter bio: Raymond Phan is a Ph.D. candidate with the Department of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, Canada. He obtained his Bachelor of Engineering in Computer Engineering (2006) and his Master of Applied Science in Electrical and Computer Engineering (2008) from Ryerson as well. Ray's research interests include computer vision, image processing, stereo vision, 2D to 3D conversion and 3DTV. In 2008, Raymond received the Ryerson University Gold Medal - the highest accolade a graduating student from Ryerson can receive, signifying significant volunteer contributions made to the university, their department and program. In 2010, Raymond was awarded the Natural Sciences and Engineering Research Council of Canada (NSERC) Vanier Canada Graduate Scholarship - the most prestigious award for Ph.D. study in Canada. Ray is also a part-time instructor with Ryerson, as well as serving as a volunteer, chair and co-chair on many academic and university committees.
09:30 Image Enhancement Using Similarity-based Color Matching for High-quality Stereoscopic 3D Image Acquisition
Young hoon Lim and Eunjung Chae (Chung-Ang University, Korea); Eunsung Lee (Image Processing and Intelligent System Laboratory, Chung-Ang University, Korea); Wonseok Kang and Joonki Paik (Chung-Ang University, Korea)
Stereoscopic three-dimensional (S3D) movies often suffer from the inconsistency problem between left and right images acquired by a stereo camera due to unstable filming environment. This research introduces a novel image enhancement algorithm using similarity-based color matching of S3D images. The proposed algorithm first partitions both reference and target images into multiple sub-blocks, and decomposes them into reflection and illumination components using retinex theory. Color correction is performed by matching histograms of a corresponding pair of blocks based on the structural similarity index measure (SSIM). The color corrected images are finally enhanced by removing noise using a priori trained dictionary-based patches. We can make high-quality S3D images from imperfect input images acquired under critical conditions including limited dynamic range, unstable calibration of stereo camera pairs, and low signal-to-noise ratio (SNR). The proposed method can be applied to high-quality panorama images, frame difference-based video tracking, and similarity-based image analysis.
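The basic colour-matching operation underneath the proposed method is classical histogram matching: remap the target block's values so their cumulative distribution matches the reference block's. A per-channel NumPy sketch of that building block (the paper's block partitioning, SSIM weighting, retinex decomposition and denoising stages are omitted):

    import numpy as np

    def match_histogram(target, reference):
        """Remap one channel of `target` so its histogram matches `reference`."""
        t_values, t_indices, t_counts = np.unique(target.ravel(),
                                                  return_inverse=True, return_counts=True)
        r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
        t_cdf = np.cumsum(t_counts) / target.size
        r_cdf = np.cumsum(r_counts) / reference.size
        matched = np.interp(t_cdf, r_cdf, r_values)   # map target quantiles onto reference values
        return matched[t_indices].reshape(target.shape)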
Presenter bio: Wonseok Kang was born in Jeju, Korea in 1983. He received the B.S. degree in electronic engineering from Korea Aerospace University, Korea, in 2010. Currently, he is pursuing an M.S. degree in image processing at Chung-Ang University. His research interests include image restoration, computational camera and data fusion.
10:00 3D Production Issues during the London 2012 Olympic Games
Jim DeFilippis (TMS Consulting, USA)
For the first time the Olympics were telecast in 3D. In the past, some 3D coverage was available on a closed-circuit basis for limited events. The London Olympics 3D Channel covered multiple sports, with both live and ENG coverage, and provided a full 3D channel of over 275 hours of 3D programming. The core of the 3D coverage was provided by three OB van remote production units as well as six single-camera EFP production units. A variety of stereoscopic rigs were used in each of four venues alongside the Panasonic ENG/EFP P2 3D camcorder. Some special stereo cameras were also used, including pole cameras, rail cameras, RF cameras and underwater cameras. The paper will present the unique challenges of providing 3D coverage, from organizing the 3D channel to the technical challenge of covering sports in 3D while accommodating the full 2D production, and, finally, what worked and what did not.
Presenter bio: Jim has worked in radio and television broadcasting for over 32 years including the ABC Radio Network, the ABC Television Network, the Advanced Television Test Center, and the Atlanta Olympic Broadcast Organization. Most recently he was EVP, Digital Television Technologies and Standards, for the FOX Technology Group. At FOX he led the development of progressive camera systems to replace film for television, 480p30 video production systems (FOX Widescreen), and the FOX HD splicing system design and deployment for the FOX Network. Previous to FOX, Jim was the Head of Engineering for the 1996 Atlanta Olympic Games where he championed the development of the first all digital, disk based, super slo-motion camera/recording system (Panasonic/EVS). Jim has been involved with the Olympic Host Broadcaster since 1993 and has been involved in the Atlanta (1996), Sydney (2000), Torino (2006) and at the London (2012) games, assisting with the technical production and distribution of the Olympic 3D TV channel. He attended the School of Engineering at Columbia University in the City of New York where he attained his Bachelor of Science in Electrical Engineering in 1980 and his Masters of Science in Electrical Engineering in 1990. Jim is a Fellow of the SMPTE and is involved in standards development at SMPTE, the International Telecommunications Union, and the ATSC including work on RP 85 Audio Loudness Control for DTV. He has received two Technical Emmy awards for his work at the ATTC in the Development of the ATSC standard and for the FOX HD Splicing System. Jim lives in Pacific Palisades, CA with his wife, Maggie and two teenage children, Jake and Juliana.

10:30 - 18:30

Exhibit Hall Open

Room: Exhibit Hall

10:30 - 11:00

Break in Exhibit Hall

Room: Exhibit Hall

11:00 - 12:30

Cinematography and Post (Part 1)

Acquisition
Room: Salon 2
11:00 High Performance Optics for a New 70mm Digital Cine Format
Brian Caldwell (Caldwell Photographic Inc., USA)
This paper details technical features of the first series of high-speed prime lenses specifically designed for a new 70mm digital cine format. These new lenses offer full aperture (f/2.5) performance at or near the diffraction limit from near-UV to near-IR over a 48mm x 20.25mm image area. Additionally, these lenses are designed to work properly with optical filters inserted between the lens and sensor. Twelve focal lengths are under development, ranging from a 27mm ultra-wide to a 300mm telephoto. In addition to traditional externally geared controls, all lenses have internal motors for focus and aperture. High-resolution metadata is continuously transmitted to the camera, including focus distance, aperture, temperature, and individual lens identification. A replaceable internal filter near the aperture stop permits a wide range of creative effects, including soft-focus.
Presenter bio: Dr. Caldwell has been a professional lens designer since 1985, and has completed more than 500 design projects. More than 100 of these designs have been fabricated, ranging from mass-market camera lenses to ultra-high performance zoom and reconnaissance lenses. He founded Caldwell Photographic Inc. in 2001, and is actively involved in optical product development and manufacture in addition to lens design. Areas of particular interest include broadband UV-VIS-IR optics and high-performance large aperture optics. Recently developed products include the 60mm UV-VIS-IR lens currently licensed for manufacture to Coastal Optical Systems, and a new 120mm UV-IR lens manufactured by Caldwell Photographic. Dr. Caldwell has worked as a consultant and contractor to Panavision for more than 12 years.
Brian Caldwell
11:30 Focusing on lens metadata
Jonathan Erland (Composite Components Company & Society of Motion Picture and Television Engineers, USA); Ron Fischer (NBC Universal, USA)
Motion pictures are increasingly created from a combination of real and virtual subjects. This involves the creation of a "virtual" camera that must closely emulate the camera behavior during principal photography. The use of highly dynamic camera moves during a shot has made it nearly impossible to deduce the lens settings and characteristics from the images and written notes. It is now both possible and necessary to record the status of the taking lens on a frame-by-frame basis. Modern post-production workflow, especially for stereoscopic 3D, increasingly demands an accurate record of the lens settings to facilitate compositing. This paper will discuss the objectives and advantages of having these data; the current state of the art of lens metadata in the industry; techniques for acquiring, preserving, disseminating, and using these data; and whether a standard or RP is desirable and achievable.
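To make the kind of record under discussion concrete, here is a minimal sketch in Python of a frame-accurate lens metadata sample; the field names and serialization are illustrative assumptions, not a proposed SMPTE schema or the authors' format.

from dataclasses import dataclass, asdict
import json

@dataclass
class LensFrameMetadata:
    # Illustrative per-frame lens record; field names are assumptions, not a standard.
    timecode: str             # e.g. "01:02:03:04"
    focal_length_mm: float    # zoom position
    focus_distance_m: float   # focus ring setting
    t_stop: float             # iris setting
    entrance_pupil_mm: float  # offset from the image plane, useful for virtual cameras
    lens_serial: str

def record_frame(sample: LensFrameMetadata) -> str:
    """Serialize one frame's lens state, e.g. for a sidecar log."""
    return json.dumps(asdict(sample))

print(record_frame(LensFrameMetadata("01:02:03:04", 35.0, 2.4, 2.8, 101.5, "LENS-0042")))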
Presenter bio: From his student filmmaker days in London through industrial design work to his founding role in industry technical organizations, Visual Effects Society Fellow, Jonathan Erland has been engaged in both the dramatic and technical side of the story-telling process for over 50 years. A member of the Star Wars VFX crew, he has six patents and four Academy Awards for innovative technologies. A Life Fellow of SMPTE, he's authored 20 papers, served as Program Chair, and received the Journal Award and Fuji Gold Medal. At AMPAS, he has served as a Governor, establishing Visual Effects as a branch. He's also a member of the Science and Technology Council, Scientific and Engineering Awards and numerous other committees. He's received an Academy Commendation for "solving High-Speed Emulsion Stress Syndrome in film stock" and the 2012 John A. Bonner Medal for "outstanding service and dedication in upholding the high standards of the Academy."
Presenter bio: Ron Fischer is the Technical Director of Universal Virtual Stage 1, a green screen virtual production facility located on the NBC Universal Studios lot, but working around the world. The facility has hosted a wide variety of film, commercial and television productions including Fast Five, Battleship, Xxit, Toyota "Kingdom", etc. Ron's previous credits include virtual set and motion capture systems for Alice in Wonderland, Beowulf at Sony Imageworks, as well as work at Disney Feature Animation and Silicon Graphics.
Jonathan Erland, Ron Fischer
12:00 Computational photography for dust and scratch detection on transparent photographic material
Giorgio Trumpy and Rudolf Gschwind (University of Basel (CH), Switzerland)
This work pertains to the digital restoration of motion-picture films. A new method for the automatic detection of blemishes on any kind of transparent photographic material (still and moving images, silver-based and dye-based material) is presented. It consists of an innovative combination of different illumination techniques and computational photography. The image layer is a random dispersion of microscopic elements (e.g., silver particles in black-and-white material) and its interaction with light is isotropic. Dust, scratches, and other irregularities of the film surface, by contrast, produce shadows and reflections that depend strongly on the direction of the light. The acquisition of multiple images with different illumination geometries, and the analysis of the differences between them, is found to be an effective method for emphasizing irregularities in the film surface. Moreover, a cross-polarization technique is found to improve blemish detection. We describe the experiments that established the details of the method.
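As a rough illustration of the differencing idea described above, the following Python/NumPy sketch flags pixels whose brightness varies strongly across registered captures taken under different illumination geometries; the threshold and data handling are assumptions, not the authors' implementation.

import numpy as np

def blemish_map(captures, threshold=0.05):
    """Given registered grayscale captures of the same frame under different
    illumination geometries (values in 0..1), flag pixels whose response varies
    strongly with lighting direction, as dust and scratches do, while the
    roughly isotropic image layer does not."""
    stack = np.stack(captures, axis=0)
    variation = stack.max(axis=0) - stack.min(axis=0)  # per-pixel spread across lightings
    return variation > threshold                       # boolean defect mask

# toy usage: two 4x4 captures that disagree at one pixel
a = np.full((4, 4), 0.5)
b = a.copy()
b[2, 1] = 0.9
print(blemish_map([a, b]).astype(int))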
Presenter bio: I received my BSc degree in Technologies for Conservation and Restoration of Cultural Heritage in 2004 and my MSc degree in Science for Cultural Heritage in 2009, both from the University of Florence. In 2006 I started working on various projects related to Conservation Science, Color Science and Digital Imaging, in the framework of the digitization of the Florentine museum heritage, collaborating with public and private institutions. During these years my main affiliation was with the Institute of Applied Physics (IFAC-CNR). Since 2007 I have been a member of the European group CREATE (Colour Research for European Advanced Technology Employment) which promotes and exchanges research and knowledge through a series of conferences and training courses. In 2010 I was selected for one of the two PhD positions in Imaging Science posted by the Imaging & Media Lab (IML) of the University of Basel, in collaboration with the Images and Visual Representation Group (IVRG) of the Ecole Polytechnique Fédérale de Lausanne (EPFL). Since September 2010, under the supervision of Prof. Rudolf Gschwind, I have been working in Basel at the Imaging & Media Lab, on the digital reconstruction and permanence of photographs and motion-picture films by digital image processing.
Giorgio Trumpy

SMPTE Timed Text for Captioning Internet-delivered Content (Part 2)

Interoperability through Standards
Room: Salon 1
11:00 W3C Timed Text Updates
Sean Hayes (Microsoft, United Kingdom)
The paper will detail recent work in the W3C Timed Text Working group including updates in the Second Edition of Timed Text 1.0, as well as work in progress on the next version of Timed Text to accommodate the work of SMPTE-TT. We will also present updates on interoperability profiles and validation.
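For readers unfamiliar with the format, the snippet below prints a minimal W3C TTML caption document of the kind SMPTE-TT builds on; it is a hand-written illustration (SMPTE ST 2052-1 layers additional attributes and metadata on top of this), not an example from the paper.

# Minimal W3C TTML caption document (illustrative only; SMPTE-TT adds extensions to this base).
TTML_EXAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Caption text shown for 2.5 seconds.</p>
      <p begin="00:00:04.000" end="00:00:06.000">A second caption.</p>
    </div>
  </body>
</tt>
"""
print(TTML_EXAMPLE)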
Presenter bio: As part of Microsoft's Accessibility Business unit, Sean Hayes is responsible for fostering innovation and tracking and developing standards in the accessibility area. Since joining Microsoft in 2000, Sean has worked to drive towards truly open standards that allow media and software to be developed universally and available to all. He believes today's solutions only scratch the surface of what the power of technology can make possible. He was an active member of the TEITAC activity, the European M376 accessibility work, and the W3C Web Accessibility Initiative. He participates actively in SMPTE 24B and a number of W3C groups and is currently chair of the W3C Timed Text Working Group. Before joining the Accessibility group, Sean spent his first five years at Microsoft in the Digital Media division, working on Digital Television and HD DVD. Prior to joining Microsoft, Sean spent 11 years at Hewlett Packard Research Labs, and dedicated five years to the digital media department studying advanced video techniques, including 3D video sprites and models for flexible storytelling using fuzzy logic. He eventually became involved in the DVB standards body, which ultimately led to his role at Microsoft. Sean holds a bachelor's degree in computer science from the University of London.
Sean Hayes
11:20 SMPTE Timed Text: Update from the 24TB Captions Ad Hoc Group
Craig Cuttner (HBO, USA)
The US Federal Communications Commission has selected SMPTE-Timed Text (SMPTE-TT; SMPTE ST2052-1) as the "Safe Harbor" format for Broadband (IP) Captioning of previously-televised content. This presentation provides a status report on the work of the SMPTE 24TB Captions Ad Hoc Group as well as the status of items that may be of interest to IP Captioning users.
Presenter bio: Craig Cuttner is senior vice president, Advanced Technology, for Home Box Office, responsible for all projects related to advanced technology architecture in the Technology Operations area. He oversees the planning of distribution technology architecture used to serve HBO's core and new business platforms, and the establishment of technical standards for new technologies of interest to the company. He was named to this position in November 2003. Previously, he was vice president, Technology. Cuttner joined HBO in 1982 as a system engineer. Cuttner has been active in HDTV since the late 1980s, contributing to many aspects of HDTV industry-wide. He has also been involved in strategic work since the mid 1990's on video on demand. He was named a Fellow in the Society of Motion Picture and Television Engineers in 2000, is a member of the Society of Cable Telecommunications Engineers Engineering Committee and is chair of SCTE Digital Video Subcommittee Working Group 1 on Encoding and also the National Academy of Television Arts and Sciences Technical Emmy Committee. Cuttner has over one dozen patents and patents pending. Cuttner holds a BS degree in Industrial Management from Georgia Tech.
Craig Cuttner
11:40 SMPTE Timed Text in the UltraViolet Common File Format
Mike Dolan (TBT, USA)
SMPTE Timed Text has found its way into various electronic media delivery formats. One is the UltraViolet Common File Format (CFF), where it is used for both subtitles and closed captions. This presentation will provide background on the underlying technology, including W3C Timed Text and SMPTE Timed Text, and then focus on the extensions and constraints developed by UltraViolet.
Presenter bio: Michael A Dolan is founder and president of Television Broadcast Technology, providing specialized professional encoders, test tools, and technical consulting in the field of digital television. He holds a BSEE degree from Virginia Tech '79 and has worked for and founded various leading edge computer graphics and real time systems companies since then, including early foundational work in W3C technology and analog data broadcasting. Mr. Dolan has been involved in digital television engineering for many years, including data broadcast system architecture, digital receiver design and compliance. He also currently chairs the ATSC Data Broadcasting Specialist Group (TSG/S13), co-chairs the CEA Working Groups on Digital Closed Captioning (R4SC3WG1) and Internet Captions (R4SC3WG15), co-chairs the SMPTE Committee on File Formats and Systems (31FS), co-chairs the DECE/Ultraviolet Technical Working Group, and is active in SCTE and W3C. Mr. Dolan is an SMPTE Fellow, a former SMPTE Governor for the Hollywood Region, authors the SMPTE Journal Almanac column, and holds several patents in computer web technology.
Mike Dolan
12:00 CE Device Implementation of SMPTE Timed Text: Navigating to the "Safe Harbor"
Mark Eyer (Sony Electronics, USA); Mike Dolan (TBT, USA)
In January, the FCC released a report and order on IP closed captioning to support provisions of the 21st Century Communications and Video Accessibility Act of 2010. The order places requirements on consumer video player devices regarding their ability to decode and present closed captioning in IP-delivered content. Devices that implement a SMPTE Timed Text decoder would be deemed to be in compliance with the new rules. This presentation explores the implications of the FCC's action for the implementation of consumer video players. CE manufacturers have been working within CEA on industry guidelines designed to establish a consistent framework for the implementation of SMPTE-TT. The envisioned end result is industry agreement on the definition of a standard "video player" that can decode and render captioned video from files or streams. The presentation will provide an update on the work and describe the decisions and approaches agreed to date.
Presenter bio: Mark K. Eyer is currently Director of Systems for the Technology Standards Office of Sony Electronics. He graduated Cum Laude with a B.S. degree from the University of Washington in 1973 and received an MSEE degree in 1978 from the same institution. For the past thirty years, Mark has been involved with the development of technologies and products related to secure and digital television. Mr. Eyer is the recipient of a variety of industry awards for excellence in standards. Mr. Eyer represents Sony in various standards committees in the US and contributes systems engineering expertise to development of Sony's digital consumer electronics products.
Mark Eyer

12:30 - 14:00

Fellows Lunch

Room: Mount Olympus Room-3rd Floor of Loews Hollywood Hotel
12:30 Peter Owen
Peter Owen (IBC, United Kingdom)
Chairman, IBC Council
Presenter bio: Since 2002 Peter Owen has chaired IBC Council, a group of senior individuals drawn from broadcast-related disciplines which acts as a sounding board for the Conference and Exhibition. It also assists in formulating the conference agenda and attracting leading industry contributors to the event. Trained as an electronics engineer in the mid-1960s and introduced to analogue television technology at the EMI Broadcast Equipment Division, he converted to digital television technology with a move to the IBA (Independent Broadcasting Authority) R&D labs, where he worked with the team that produced the world's first digital standards converter. In 1974 Peter was one of the founding members of Quantel, where he stayed until his retirement from the supplier side in 2000. During his time at Quantel Peter occupied many roles, ranging from Head of Broadcast to Director of Engineering, whose duties included a close relationship with SMPTE standards groups and SMPTE conferences.
Peter Owen

14:15 - 15:45

Cinematography and Post (Part 2)

Color Management
Room: Salon 2
14:15 Towards Higher Dimensionality in Cinema Color: Multispectral Video Systems
David Long (Rochester Institute of Technology, USA)
The digital transition being experienced by the motion picture industry has afforded an increase in dimensionality in time and space; however, comparatively little effort has been put into expanding color. All practical motion imaging systems continue to rely on metamerism, wherein a 3-channel signal is sufficient to reproduce the color of real objects regardless of higher-order spectral composition. Such treatments restrict cinema color, limiting absolute color accuracy and gamut and exacerbating observer metamerism. Multiprimary reproduction focused on spectral accuracy or metamerism reduction may prove a better answer to enhancing the color experience. It also promises to open new color management paradigms for visual effects compositing of live action and CGI or for virtual cinematography. The proposed talk summarizes past and present research in multispectral video. Preliminary results from the design and construction of an abridged multispectral video capture and display system will also be presented.
Presenter bio: David Long joined the faculty of the School of Film and Animation at Rochester Institute of Technology in 2007, where he is currently Program Chair and Assistant Professor for the BS Motion Picture Science program. His research interests focus on color science and multispectral imaging. Previous to RIT, Long worked as a Development Engineer and Imaging Scientist with Kodak's Entertainment Imaging Division. At Kodak, his primary responsibilities included new product development and image science and systems integration for the motion picture group, focusing on film and hybrid imaging products. Long contributed to the design and commercialization of the Vision2 family of color negative films, as well as several digital and hybrid imaging products for television and feature film post-production. His work has earned him numerous patents and a 2008 Scientific & Technical Academy Award for contributions made to the design of Vision2 films. Long has a BS in Chemical Engineering from the University of Texas at Austin and an MS in Materials Science from the University of Rochester.
David Long
14:45 Issues in Color Matching
Derek Smith, Joel Barsotti and Larry Heberlein (SpectraCal, Inc., USA)
To create a numerical description of color (e.g., X,Y,Z), one applies a Color Matching Function to spectral power distribution data acquired with an instrument such as a spectroradiometer. All the adjustments one makes to a video display or to video data depend on the accuracy of these numbers. Since 1931, the broadcast industry and others for whom color fidelity is crucial have depended on the 1931 CIE Color Matching Function (CMF). Recent (and continuing) advances in display technology, however, have exposed serious deficiencies in this CMF. These deficiencies have long been known to academic researchers, who have in the intervening years proposed several alternative CMFs. This paper reviews the critical flaws that render the 1931 CMF no longer reliable, and surveys the strengths and weaknesses of candidates for its replacement.
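The computation referred to above can be written compactly: the measured spectral power distribution is weighted by the three colour-matching functions and summed across wavelength, so any deficiency in the CMF propagates directly into every X, Y, Z value used downstream. The NumPy sketch below uses made-up sample data purely to show the shapes involved.

import numpy as np

def spd_to_xyz(wavelengths_nm, spd, cmf_xyz, k=1.0):
    """Integrate a spectral power distribution against a colour matching function.
    wavelengths_nm: (N,) sample wavelengths; spd: (N,) measured power;
    cmf_xyz: (N, 3) x-bar, y-bar, z-bar sampled at the same wavelengths."""
    step = np.gradient(wavelengths_nm)  # per-sample wavelength spacing (handles uneven sampling)
    raw = (spd[:, None] * cmf_xyz * step[:, None]).sum(axis=0)
    return k * raw  # (X, Y, Z); k can be chosen to normalize a reference white

# toy example with made-up numbers, only to show the shapes involved
wl = np.arange(380, 781, 5.0)
spd = np.ones_like(wl)                  # flat "equal energy" emitter
cmf = np.random.rand(len(wl), 3)        # stand-in for a real CMF table
print(spd_to_xyz(wl, spd, cmf))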
Presenter bio: A veteran of the software industry, Joel spent the first several years of his career at a graphic design studio managing color-critical workstations. He then began writing his own software to calibrate PCs and home theater computers, and was subsequently hired by SpectraCal. In his current position as SpectraCal's Director of Software Development, he has presided over the development of one of the most sophisticated color management packages available. The research he has done developing this engine makes him uniquely suited to discussing color matching functions and their role in video calibration.
Joel Barsotti
15:15 Accurate ACES Rendering in Systems Using Small 3DLUTs
Yasuharu Iwaki and Mitsuhiro Uchida (FUJIFILM Corporation, Japan); Michael Bulbenko (FUJIFILM US, USA)
The ACES color space has unlimited dynamic range; however, it is difficult to implement an ACES workflow with the grading systems currently in use. To address this, we propose the custom Log ACES and High Saturated Log ACES (HSLA) methods. Custom Log ACES can process negative ACES values and can handle high dynamic range. HSLA expands the ACES color space to reduce unused volume and spread out the region occupied by real color data, so that 3D LUTs are used more effectively. These two methods drastically improve the accuracy of color reproduction even if the post-production system supports only a highly limited dimension or a very small number of lattice points for its 3D LUTs.
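The authors' transfer functions are not reproduced here, but the underlying idea of a log encoding that survives negative ACES values can be sketched as a sign-preserving log curve, so that small negative excursions remain addressable by a grading 3D LUT instead of being clipped. The curve shape and offset below are illustrative assumptions only.

import numpy as np

def signed_log_encode(aces, offset=0.01):
    """Illustrative sign-preserving log encoding (not the authors' Custom Log ACES):
    negative linear values map to negative code values instead of being clipped,
    so a grading 3D LUT built on this axis still sees them."""
    return np.sign(aces) * np.log2(np.abs(aces) / offset + 1.0)

def signed_log_decode(code, offset=0.01):
    return np.sign(code) * (np.exp2(np.abs(code)) - 1.0) * offset

x = np.array([-0.05, 0.0, 0.18, 4.0, 16.0])
print(signed_log_decode(signed_log_encode(x)))  # round-trips, including the negative value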
Presenter bio: After graduating from the Tokyo Institute of Technology, Mitsuhiro Uchida joined Fujifilm and started research and development of color photographic film. He later expanded his work to film cameras, digital photo printers, and digital cameras. He joined the AMPAS IIF committee in 2009; since then, he and his team have contributed to AMPAS-IIF activities, traveling from Japan to attend the IIF meetings in Hollywood every two months. He has contributed to a wide spectrum of ACES work, especially RRT development. Currently, while contributing to the logACES standard, he is working to market Fujifilm's new product, CCBOXX.
Mitsuhiro Uchida

Migrating to the Cloud: Understanding the Opportunities and Challenges (Part 1)

Room: Salon 1
14:15 The Cloud and its Potential Role in the Production and Distribution of Multi-Screen Enabled Content
Jason Williamson and Hanish Patel (Deloitte Consulting, LLP, USA)
Consumer behavior, distribution channel preference, and demographics are shifting toward increased consumption via multiple connected devices. We would like to share the results of a recent digital ecosystem study, and discuss how the Cloud might factor into these trends.
Presenter bio: Jason Williamson is a Specialist Master with Deloitte's Technology Strategy & Architecture consulting practice and brings over 13 years of experience in the Media & Entertainment industry, specifically on digital transformation, content distribution, content management, and production workflows. His work focuses around advising clients with digital broadcast and filmed entertainment processes on new technology and processes in the media space. Jason has also applied his expertise and knowledge of digital media operations alongside relevant experience in data systems architecture to design and develop solutions fit for the growing demands and complexity of content lifecycle workflows.
Presenter bio: Hanish is a Senior Manager in Deloitte's Technology, Strategy and Architecture practice who is focused on the Media & Entertainment industry. He has experience in shaping, leading, and delivering successful, complex technology solutions in the U.S. and internationally. With 11 years of experience, Hanish has consulted nationally and internationally for corporate clients across the Media, Telecommunications, Financial Services, Life Sciences, and High-Tech industries as well as governmental departments. Hanish has experience with large-scale transformation programs. His areas of experience include complex technology programs, IT cost reduction and governance, IT M&A, target operating models for IT organizations, vendor selection, and the planning and delivery of IT-led programs. His most recent experience includes leading a number of digital media projects, in roles ranging from metadata management, test strategy for unicast distribution, and technology operating models to international requirements and regulations, post-production integration, and requirements development.
Jason Williamson, Hanish Patel
14:45 Leveraging the Cloud for File-based Workflows
Ron Quartararo (Verizon, USA); Mark Brown (Verizon Digital Media Services, USA); Brian Campanotti (Front Porch Digital, Canada)
For some time the IT world has embraced the Cloud as a vehicle to help transform the economics of its business infrastructure from a cap-ex to an op-ex model. The M&E industry, however, has been slow to adopt the Cloud, partly because of issues around high-bandwidth content, security, and the still-complex nature of some M&E workflows. With the increasing role of IT in media production, management, and distribution, and with improvements in security and lower bandwidth costs, the Cloud is beginning to make its presence known in M&E. Whether it is public, private, or hybrid, the Cloud can provide economic, efficiency, and collaborative advantages to media organizations from large studios to small broadcast networks. This paper will examine various aspects of the M&E workflow, including digital content and archival storage management, digital asset management, editing, transcoding, DRM, and distribution, and their suitability for the Cloud.
Presenter bio: Ron joined Verizon in 2009 as an Industry Partner in their Media & Entertainment practice before moving into his current role as Managing Principal in New Business Incubation. Prior to joining Verizon, Ron spent 20+ years in management roles with companies such as RKO, Arbitron, Sun Microsystems, and Ascent Media. His areas of expertise include business development, strategic planning, P&L management, sales/sales management, solutions development and consulting. Ron has spent the past 10 years focused on digital media workflows and technologies. He has been published in both the general as well as trade press including NY Times, NY Daily News, Barrons, Broadcasting & Cable and Broadcast Engineering.
Ron Quartararo
15:15 A Cloudspotter's Guide to Migration
Al Kovalick (Media Systems Consulting, USA)
Cloud adoption is growing at a 22% annual rate. SaaS apps revenue will reach $258 billion in 2020 (Forrester Research). This train is unstoppable. The economic and systems benefits are compelling and being leveraged by media companies worldwide. What does facility migration to the cloud involve? What low-hanging fruit can migrate now? What are the tradeoffs? This talk is a short tutorial on cloud basics with tips on migration. Aspects of architecture, application delivery, economics, open systems, reliability, QoS and security are considered. If you are a cloudspotter, this talk is for you.
Presenter bio: Al Kovalick has worked in the field of hybrid AV+IT systems for the past 20 years. Previously, he was a digital systems designer and technical strategist for Hewlett-Packard. While at HP, he was a principal researcher and architect for a new product-class of signal synthesizer. He was also the principal architect of HP’s first VOD server. Following HP, from 1999 to 2005, Al was the CTO of Pinnacle Systems. After Avid acquired Pinnacle, Al served as an Enterprise Strategist and Fellow for six years. In 2011, Al founded Media Systems Consulting in Silicon Valley. His work focuses on all aspects of networked media systems, file-based workflows and cloud migration for media facilities. Al is an active speaker, educator, author and participant with industry bodies including SMPTE. He has presented over 50 papers at industry conferences worldwide and holds 18 US and foreign patents. In 2009 Al was awarded the David Sarnoff Medal from SMPTE for engineering achievement. Al has a BSEE degree from San Jose State University and MSEE degree from the University of California at Berkeley. He is a life member of Tau Beta Pi and a SMPTE Fellow. Al writes the Cloudspotter's Journal column for TV Technology magazine.
Al Kovalick

15:45 - 16:15

Break in Exhibit Hall

Room: Exhibit Hall

16:15 - 17:45

Cinematography & Post (Part 3)

Finishing
Room: Salon 2
16:15 The Unfolding Merger of Television and Movie Technology
Gary Demos (Image Essence LLC, USA)
HDTV utilizes "in-camera" or "in-switcher" rendering for live broadcast, wherein the interpretation of the scene for display happens immediately. Such rendering is typically modeled on an in-camera matrix, a gamma boost, and a highlight "knee". Such shows are usually captured, processed, transmitted, and displayed at 50 or 60 fields or frames/sec. Telecine mastering of shows interprets and renders the scenes from a film negative, but not in real time. The rendering of colors during telecine is often more of a print film emulation than a video-style process. Such shows are usually mastered at 24 frames per second, and may be shows made for television, or cinematic movies being presented on television. The industry move from film-based capture to digital camera capture has begun to bridge the gap between the television image model and the telecine film-input model. The key new ingredient bridging this gap is the Reference Rendering Transform.
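A toy version of the "in-camera" rendering model contrasted here with telecine-style rendering is sketched below: a 3x3 matrix, a highlight knee, then gamma encoding. The matrix, knee point, and exponent are illustrative values, not any broadcaster's settings.

import numpy as np

def camera_render(rgb_linear, matrix=None, knee_point=0.8, knee_slope=0.25, gamma_exponent=0.45):
    """Toy 'in-camera' rendering: matrix, highlight knee, then gamma encoding.
    All parameter values here are illustrative assumptions."""
    if matrix is None:
        matrix = np.eye(3)  # identity stands in for a real camera matrix
    v = rgb_linear @ matrix.T
    # soft knee: compress everything above knee_point
    v = np.where(v > knee_point, knee_point + (v - knee_point) * knee_slope, v)
    return np.clip(v, 0.0, None) ** gamma_exponent  # gamma-encode for display

print(camera_render(np.array([[0.02, 0.18, 0.95]])))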
Presenter bio: Gary Demos is the recipient of the 2005 Gordon E. Sawyer Oscar for lifetime technical achievement from the Academy of Motion Picture Arts and Sciences. He has pioneered in the development of computer generated images for use in motion pictures, and in digital film scanning and recording. He was a founder of Digital Productions (1982-1986), Whitney-Demos Productions (1986-1988), and DemoGraFX (1988-2003). He is currently involved in digital motion picture camera technology and digital moving image compression. Gary is CEO and founder of Image Essence LLC, which is developing wide-dynamic-range codec technology based upon a combination of wavelets, optimal filters, and flowfields.
Gary Demos
16:45 Theatrical versioning in the content pipe - integrating digital cinema into end to end workflow
Richard J Welsh (Sundog Media Toolkit, United Kingdom)
Digital cinema compression, versioning, and packaging have traditionally been a cul-de-sac process within the life of a movie, set apart from the content flowing through the "pipe" to other versions and delivery formats. With a more integrated workflow and appropriate mezzanine files, the creation of digital cinema packages can become part of the flow of content from creation to all downstream deliveries. This paper looks at processes, system architecture, and technology choices to show how this allows content to flow efficiently from capture/creation to the final consumption point, be it the cinema, TV, tablet, or mobile, and at building workflows now that are extensible to the many new formats to come, such as high frame rate, object-oriented audio, and wide colour gamut.
Presenter bio: In his role as Head of Operations for Digital Cinema at Technicolor, Richard is responsible for mastering, distribution, and client services for theatrical releases. Previously he held roles as Digital Cinema Services Director, Sound Consultant and Applications Engineer at Dolby Laboratories, where he developed a number of proprietary tools and patented techniques. Richard serves on the board of the Society of Motion Picture and Television Engineers (SMPTE) as Governor for EMEA, Central and South America. He holds a BSc (Hons) in Media Technology from Southampton Solent University.
Richard J Welsh
17:15 Low-latency transmissions for remote collaboration in post-production
Sven Ubik (CESNET, Czech Republic); Jiri Halak and Petr Zejdl (CESNET / CTU, Czech Republic); Michal Krsek (CESNET, Czech Republic)
Post-production often involves several key parties who want to be in control of the process - the director, the producer, the editor - and several technical experts - the colorist, the stereographer, the sound master, etc. Some decisions are more effectively made in real time, interactively. However, the participants are often very busy, working on multiple projects in parallel, and it is difficult for them to travel to meet for a collaborative session. We believe that future technology for low-latency, high-quality transmission of image and sound will enable remote real-time collaboration in post-production. As the capacity of optical networks increases, uncompressed transmission of original content with minimal latency will be possible. We carried out several experiments with real-time remote collaboration in color grading and stereography over distances of more than 10,000 km between Europe and the US West Coast, using the GLIF (Global Lambda Integrated Facility) network. We describe the key technology aspects and lessons learned.
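Two back-of-envelope figures behind such experiments, computed below with assumed values rather than numbers from the paper: propagation delay alone over 10,000 km of fibre is on the order of 50 ms one way, and an uncompressed 2K 4:4:4 10-bit stream at 24 fps needs roughly 1.6 Gbit/s, both within reach of the research networks mentioned.

# Back-of-envelope numbers for remote-collaboration latency and bandwidth.
# All figures below are illustrative assumptions, not measurements from the paper.
distance_km = 10_000
light_speed_fibre_km_s = 200_000  # roughly 2/3 of c in glass
one_way_latency_ms = distance_km / light_speed_fibre_km_s * 1000
print(f"propagation alone: ~{one_way_latency_ms:.0f} ms one way")  # ~50 ms

# Uncompressed 2K 4:4:4 10-bit at 24 fps
w, h, channels, bits, fps = 2048, 1080, 3, 10, 24
gbit_per_s = w * h * channels * bits * fps / 1e9
print(f"uncompressed 2K 4:4:4 10-bit @ 24 fps: ~{gbit_per_s:.2f} Gbit/s")  # ~1.6 Gbit/s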

Migrating to the Cloud (Part 2)

Room: Salon 1
16:15 The Cloud - What does it mean for media archives?
Doug Wynn and Chris Luther (Software Generation Limited, USA)
Perhaps because of the close proximity IT companies now have to the broadcast industry, use of the term 'Cloud' is becoming more and more prevalent. Is this just another buzzword, or are there real benefits to embracing the Cloud? This paper looks at the conventional media archive and discusses the opportunities and emerging trends for archive technology and the cloud. Some of the discussion covers enabling technologies such as LTFS, as well as the emergence of Archiving as a Service. Additionally, the paper addresses the considerations for building a private 'cloud' archive in terms of users and growth.
Presenter bio: Chris Luther is the Director of Professional Services at SGL, based out of New York. He has been with SGL for 5 years, providing ISV services to Broadcast and M&E companies. Previously he was Director of Technology at The Color Wheel, in charge of Graphics/Pre-Press/Video and Post workflow creation and management systems. As Director of Professional Services at SGL Chris plans and implements some of the largest archive and content management systems for national networks, local broadcasters and other M&E organizations.
Chris Luther
16:45 Cloud Media Collaboration, Enter Stage Right: and Action
Robert Jenkins (CloudSigma, USA)
As digital media files grow in size and complexity, media service providers spend more time and resources than ever before developing, transferring, storing and optimizing them. With disks still being flown across the world between various partners involved in productions, the industry clearly needs a more efficient, pragmatic solution for collaboration and service delivery. It's time the media industry stepped into the future with the cloud. Public clouds offer collaborative ecosystems in which providers can essentially work together "under one roof" to improve the efficiency of their services, including file conversion, media ingest, file transfer acceleration, encoding/transcoding, long-term object storage and content delivery. This presentation provides a technical overview of how public cloud ecosystems offer a media hub with high connectivity and cost-effective access to computing resources. Compelling case studies will provide practical guidance for attendees hoping to move media operations (compute, applications, storage, etc.) to the cloud.
Presenter bio: Robert Jenkins is the co-founder and CEO of CloudSigma and is responsible for leading the technological innovation of the company's pure-cloud IaaS offering. Under Robert's direction, CloudSigma has established an unprecedented open, customer-centric approach to the public cloud.
Robert Jenkins
17:15 Delivering live multi-cam content to smart devices through cloud platforms
Broadcasters must engage a new generation of multitasking viewers who no longer sit passively in front of their televisions but browse the internet and interact with social media while watching TV. Rather than risk losing viewers, broadcasters can provide original premium content — including unseen camera angles and highlights — to viewers via second screens. The large amount of unused content that sits on live TV production servers can be used to enrich the user experience and maximize the value of content. This paper will explore technology challenges in building open and scalable platforms to deliver high quality experiences on second screens, including: - Best practices in building near-live multi-camera replay platforms on top of standard live production environments - Overcoming challenges in cloud-based production and delivery to multi-screens - Integration with social networks, archives, stats and other third-party content
Presenter bio: Werner Ramaekers has worked in software engineering for the past 20 years. He started his career in the Belgian Military as a technical expert on automated testing for telecommunications systems. He also created the first intranet solution for material identification purposes in 1996. In 2000 he left the Military and worked as an internet solutions architect creating highly scalable internet portals for clients in logistics and sports. In 2004 he joined the Belgian Public Service Broadcaster VRT as Head of Development for the team in charge of developing web applications that let television viewers and radio listeners interact with and comment on the topics of the shows. He was also part of the Business Architects team for VRT's transition from tape-based to file-based production. In 2007 Werner was asked by VRT to start the R&D initiative at VRT's medialab that would show the possibilities of the internet with the quality of broadcast. Together with his team he made it possible to pick and select video from different online resources and watch them as your own TV channel using a modified set-top box. The launch of the iPhone made it very clear to Werner that there was a lot of opportunity for media in mobile, so he started building mobile applications to help VRT explore the opportunities. After leaving VRT, Werner joined EVS in 2011, where he serves as the Product Development Manager for the "Consumer Casting" solutions.
Werner Ramaekers

17:45 - 18:30

Exclusive Exhibit Hours

Room: Exhibit Hall

18:30 - 19:30

Annual Membership Meeting

Room: Salon 1

Thursday, October 25

08:00 - 09:00

Continental Breakfast

Room: Ray Dolby Ballroom Terrace

09:00 - 10:30

Olympics

Room: Salon 1
09:00 3D Production Edit Work Flow, London 2012 Olympic Games
Jim DeFilippis (TMS Consulting, USA)
For the first time, the Olympics were telecast in 3D. In the past, some 3D coverage of a limited set of events had been available on a closed-circuit basis. The London Olympics 3D Channel covered multiple sports, with both live and ENG coverage, and provided a full 3D channel of over 275 hours of 3D programming. Part of the Olympic 3D Channel every day was a one-hour Summary program, presenting the best of the live 3D coverage as well as the EFP single-camera coverage captured that day. This was the first time a daily 3D program was attempted, using a hybrid edit workflow. The paper discusses the workflow, including the capture of the ENG footage using the Panasonic P2 3D camera, EVS servers, and Avid Media Composer editing. It also covers the challenge of quick turnaround and the QC process to ensure the materials were 'stereo' correct, and, finally, the specific issues of what worked and what did not.
Presenter bio: Jim has worked in radio and television broadcasting for over 32 years, including the ABC Radio Network, the ABC Television Network, the Advanced Television Test Center, and the Atlanta Olympic Broadcast Organization. Most recently he was EVP, Digital Television Technologies and Standards, for the FOX Technology Group. At FOX he led the development of progressive camera systems to replace film for television, 480p30 video production systems (FOX Widescreen), and the FOX HD splicing system design and deployment for the FOX Network. Prior to FOX, Jim was the Head of Engineering for the 1996 Atlanta Olympic Games, where he championed the development of the first all-digital, disk-based, super slow-motion camera/recording system (Panasonic/EVS). Jim has been involved with the Olympic Host Broadcaster since 1993, working on the Atlanta (1996), Sydney (2000), Torino (2006), and London (2012) games and assisting with the technical production and distribution of the Olympic 3D TV channel. He attended the School of Engineering at Columbia University in the City of New York, where he attained his Bachelor of Science in Electrical Engineering in 1980 and his Master of Science in Electrical Engineering in 1990. Jim is a Fellow of the SMPTE and is involved in standards development at SMPTE, the International Telecommunication Union, and the ATSC, including work on RP 85 Audio Loudness Control for DTV. He has received two Technical Emmy awards for his work at the ATTC in the development of the ATSC standard and for the FOX HD Splicing System. Jim lives in Pacific Palisades, CA with his wife, Maggie, and two teenage children, Jake and Juliana.
Jim DeFilippis
09:30 Challenges, Solutions, and Lessons Learned for Content Protection from 2012 Olympics
Michael Wilkinson (NBC Universal, USA)
In a groundbreaking effort, during the London 2012 Olympic Games, NBCUniversal made available to its U.S. viewers coverage of every Olympic sport on a live basis on either broadcast television, cable television, online, or mobile. NBC paid $1.18B for the exclusive rights to the London Games on every platform, and the rights owner of the Olympic Games, the International Olympic Committee (IOC), recognized that it could play a significant role in ensuring the investment of its exclusive partner in the USA was maximized and NBC's exclusivity protected. This paper covers what NBC and the IOC did to protect Olympic content from being available via illegitimate means on distribution channels such as the Internet. We will cover both the technical and operational aspects of our efforts as well as a view of the evolution of our operations from Beijing through London and show how content protection technologies and operations have improved to better manage online piracy.
Presenter bio: Mike Wilkinson is Director, Content Security Technology for NBC Universal. His responsibilities encompass all aspects of content protection technology, including fingerprinting, watermarking and forensics analysis, as well as internal content security auditing and investigations. Mike was responsible for the enterprise watermarking deployment within NBC Universal as well as the development and implementation of NBCU's internal fingerprinting system. Mike is a member of NBCU's internal Content Security Committee and the MPAA's Site Security Liaisons' Committee and the PreTheft Working Groups. Mike has been a member of NBCU's Anti Piracy team since 2005. Previously, Mike worked with Technicolor/Vidfilm for approx 9 years in various engineering roles, including Systems Engineer for the Digital Media Group. Mike received his Bachelors Degree in Organizational Management from the Masters College and served four years in the United States Navy working with advanced electronics systems.
Michael Wilkinson
10:00 Challenges and Solutions In Production/Post Production for the 2012 Olympics
Rajesh Rajah (Cisco Systems, USA); Harry Ryan (NBC Universal, USA)
Presenter bio: Rajesh Rajah is a Solutions Manager for the Cisco Videoscape/End-to-End IP Video solution and has been with Cisco for over 12 years. He specializes in building architectures and technologies for enabling cloud-based video delivery and video-optimized network transport. He was earlier a Solutions Architect with diverse experience in the end-to-end planning, design, and deployment phases of SP Carrier Ethernet and Video/IPTV engagements. Prior to focusing on IPTV/Video, he was involved in designing IP NGN and MPLS-based networks for service providers worldwide. He is a CCIE and has been a speaker at Cisco Live/Networkers, the National Association of Broadcasters (NAB), and other forums. He also has a few issued patents, and a few pending with the US Patent Office, on video, Carrier Ethernet, and cloud/datacenter technologies.
Presenter bio: Harry Ryan has been in telecommunications for over 25 years. He has been involved with the Olympics through NBC from 2000 through the 2012 London Games, in the role of TCP/IP Network Architect.
Rajesh Rajah, Harry Ryan

Asset Management and Archive

Room: Salon 2
09:00 Towards using Audio for Matching Transcoded Content
Dinkar Bhat (Arris Corp., USA)
With the advent of multiple screens for viewing, transcoding and transformation of content are becoming a mainstay of content delivery systems. But transcoding implies that copies and versions of the same content can proliferate across various storage devices. It also means that keeping track of content becomes a major problem, from both copyright and recording/indexing perspectives. In this context, video-based copy detection has emerged as a major area of research. Audio-based techniques, on the other hand, have received much less attention, yet audio could provide very useful supporting copy-detection cues. In this paper, we present a systematic investigation of how audio signatures are transformed under typical transcoding operations, including bitrate changes, codec transformation, sample rate variation, and standard audio transforms such as downmixing and volume normalization.
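As a toy illustration of the kind of signature such a study examines, the sketch below computes level-normalized per-band spectral energies, which tend to survive bitrate changes, downmixing, and volume normalization, and compares the original against a crude stand-in for a transcode; it is not the fingerprinting method used in the paper.

import numpy as np

def band_energy_signature(audio, frame=4096, bands=16):
    """Toy audio signature: per-frame energy in a few coarse spectral bands,
    normalized so overall level changes (e.g. volume normalization) cancel out."""
    n_frames = len(audio) // frame
    sig = []
    for i in range(n_frames):
        spectrum = np.abs(np.fft.rfft(audio[i * frame:(i + 1) * frame]))
        energies = np.array([c.sum() for c in np.array_split(spectrum, bands)])
        sig.append(energies / (energies.sum() + 1e-12))  # level-invariant
    return np.array(sig)

def signature_distance(a, b):
    n = min(len(a), len(b))
    return float(np.mean(np.abs(a[:n] - b[:n])))

# original vs. a crude stand-in for a transcode (gain change plus mild noise)
t = np.linspace(0, 1, 48_000, endpoint=False)
original = np.sin(2 * np.pi * 440 * t)
transcoded = 0.5 * original + 0.001 * np.random.randn(len(t))
print(signature_distance(band_energy_signature(original), band_energy_signature(transcoded)))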
Presenter bio: Dinkar Bhat received the B.Tech. degree in electrical engineering from the Indian Institute of Technology at Madras (now Chennai), the M.S. degree in computer science from the University of Iowa, Iowa City, and the Ph.D. degree in computer science from Columbia University, New York. He is Systems Engineer at Motorola Mobility where he has made many contributions to advanced set-tops in the area of video and audio, closed caption processing and transcoding. Prior to joining Motorola, he worked as a Principal Engineer at Triveni Digital, an LG Company, in the area of data broadcasting and stream monitoring. He has published in leading journals, such as the IEEE TRANSACTIONS ON PATTERN ANALYSIS, Society of Motion Picture Television Engineers (SMPTE), and National Association of Broadcasters (NAB) conferences. He holds patents in the area of digital television.
Dinkar Bhat
09:30 Using Name Spotting in Audio/Video Media Identification to Improve Media Discovery Service in Digital Object Architecture
Manish Goswami and Lan Yang (California State Polytechnic University, Pomona, USA)
A digital object repository, a component of the digital object architecture, stores a large number of audio/video files (as digital objects) and provides access to and retrieval of them. Sometimes metadata for audio/video files is almost entirely absent. The lack of sufficient metadata prevents the media discovery service from fetching files that contain little metadata; such files are excluded from the result set. Relevant information, such as names, can be extracted from the content of an audio/video file and appended to the metadata of that file to enhance the media discovery service. In this research, we use a name-spotting module based on a Hidden Markov Model and the Viterbi algorithm, known as IdentiFinder, to extract names. The research will help make a large number of audio/video files visible to the media discovery service through name extraction. It will also increase user satisfaction by improving the search result set.
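The decoding step the abstract relies on can be summarized generically; the sketch below runs Viterbi over a two-label set (NAME vs. OTHER) with toy probabilities standing in for IdentiFinder's trained model.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Generic Viterbi decoding: most likely state sequence for an observation
    sequence under an HMM. The probabilities below are toy values, not IdentiFinder's."""
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-9) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                             for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("NAME", "OTHER")
start_p = {"NAME": 0.2, "OTHER": 0.8}
trans_p = {"NAME": {"NAME": 0.5, "OTHER": 0.5}, "OTHER": {"NAME": 0.1, "OTHER": 0.9}}
emit_p = {"NAME": {"john": 0.4, "smith": 0.4}, "OTHER": {"said": 0.3, "the": 0.3, "anchor": 0.2}}
print(viterbi(["the", "anchor", "said", "john", "smith"], states, start_p, trans_p, emit_p))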
Presenter bio: Manish Goswami came to the USA in September 2010 to pursue an MS in Computer Science at California State Polytechnic University, Pomona. While studying, he worked as a research assistant for a program funded by the National Science Foundation (NSF) on Digital Object Architecture (DOA). Presently he is working as a student assistant for the I&IT web development department at Cal Poly Pomona. Before coming to the US he worked as a Software Engineer at BrickRed Technologies Pvt. Ltd., India, for more than two years, where he created and maintained BrickRed's website and other in-house PHP projects with the help of his reporting and project managers. His education includes bachelor's and master's degrees in Computer Applications in India. Outside of his professional interests he reads, cooks, and plays basketball.
Manish Goswami
10:00 Practical Quality Assessment for Digitized Film Content
François Helt (Highlands Technologies Solutions, France); Valerie La Torre (Highlands Technologies Solutions, France)
The CineXPRES project introduces a practical quality assessment framework for digitized film, based on a theoretical approach previously presented by one of the authors at the SMPTE 2010 Fall Conference. A bottom-up workflow provides a subjective quality reference from audience ratings of selected training content shown on given displays. It also computes a permanent objective quality measure from degradation models and visual perception models. Bayesian network inference provides a "conditional subjective quality estimation" that depends on a given display. The same inference mechanism computes a "conditional objective quality estimation" through an expected cost function, taking into account current image processing technologies and contextual information. Bayesian networks are powerful tools that integrate expert knowledge and can evolve with new information. These quality estimations must serve three purposes for long-term preservation: evaluation of the restoration work, comparison of similar content before preservation, and computation of a permanent content quality reference.
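The paper's networks are not reproduced here, but the basic inference-and-decision step (combine a prior over quality states with the likelihood of an observed measurement, then pick the action with the lowest expected cost) can be sketched with made-up numbers as follows.

# Minimal Bayesian decision sketch (made-up numbers, not the CineXPRES model):
# posterior over hidden quality states given one objective measurement,
# then the action with the lowest expected cost.
prior = {"good": 0.6, "degraded": 0.4}              # P(state)
likelihood = {"good": 0.2, "degraded": 0.7}         # P(measurement | state)
evidence = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

cost = {"accept": {"good": 0.0, "degraded": 10.0},  # cost(action, state), illustrative
        "restore": {"good": 2.0, "degraded": 1.0}}
expected = {a: sum(posterior[s] * c for s, c in cs.items()) for a, cs in cost.items()}
print(posterior, min(expected, key=expected.get))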
Presenter bio: Short Bio for Francois Helt Background in Mathematics and Film making, 28 years experience in Professional Video and Film, Designing Image Processing software since 1981, R&D manager of teams dedicated to special effects software including film scanners and film printers drivers since 1991. Technical manager of the European project "Limelight" aimed at the design of a complete digital film restoration system from 1994 to 1997. Founder and CEO of DUST company specialising in digital restoration and processing of film from 1997 to 2002. Author of automatic digital film restoration software. Designer of colour conversion and calibration software for Digital Cinema from 2004 to 2006. Technical and Application manager for Digital Cinema in Doremi since December 2006. Chief SCientific Officer for Highlands Technologies Solutions since 2013 Conferences and lectures: - SPIE Conference San Jose, California, February 1992, "High definition tape to film transfer" - 134th SMPTE technical conference, Toronto, Canada, November 1992, "High definition tape to film transfer" - CVPP, Atlanta, November 1997, "Deterioration Detection for Digital Film Restoration" - IBC September 1998, Amsterdam Workshop on film transfer - IBC September 1999, Amsterdam Workshop on digital convergence - Association of Moving Image Archivists Conference Los Angeles, November 2000, "Digital film restoration" (The Reel Thing - technical meeting) - IEE London, January 2001, "Advances in Digital Restoration for Addressing the Vinegar Syndrome Effects" - Festival Cinema Ritrovato, Bologna Italy, July 2001, "Digital restoration applied to the vinegar syndrome" - Association of Moving Image Archivists Conference, Portland, November 2001, "Vinegar Syndrome" (at "The Reel Thing" technical meeting) - 8th World Multiconference on Systemics, Cybernetics and Informatics (SCI 2004), Orlando, USA, July 18-21, 2004, "Bayesian framework for digital restoration of Film: A real case study and the role of perception" - SPIE Electronic Imaging 2005, San Jose, USA, January 16-20, 2005. "Image Quality Evaluation in the Field of Digital Film Restoration" with M. Chambah, C. Saint Jean. - SMPTE Fall 2009 technical conference, Los Angeles, USA, October 2009 “Proposal for practical screen luminance uniformity measurement” - SMPTE Fall conference 2010, Los Angeles, USA, October 2010 “Method and good estimators for projection uniformity measurement” - SMPTE Fall conference 2010, Los Angeles, USA, October 2010 “Quality Metrics in long-term preservation and restoration paradigms” - SMPTE Fall conference 2011, Los Angeles, USA, October 2011, “Matching The Human Visual System, Balancing Bit Depth, High Dynamic Range And Coding Efficiency” - SMPTE Fall conference 2012, Los Angeles, USA, October 2012 “Practical Quality Assessment for Digitized Film Content” - Association of Moving Image Archivists Conference, Seattle, December 2012, “Transmittance Film Scanning” - SMPTE Fall conference 2013, Los Angeles, USA, October 2013, "French Cinema goes IMF"
François Helt

Advances in 3D (Part 2)

3D Broadcast and Display
Room: Theatre: Chinese 6
09:00 They must be genlocked? - Missing standards in the 3d ecosystem
J. Patrick Waddell (Harmonic Inc., USA)
In 2D video systems users understood the need to genlock equipment. That was never documented in any SMPTE standards or recommended practices. With the advent of digital 3D video production systems, this small oversight has provided considerable room for variation. With a SMPTE documentation project on this topic reaching closure, what has been learned from that effort and is there any additional documentation yet to be written? The presenter is the chair of the 32NF-40 AHG on 3D Production Timing and Sync.
Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in four Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
J. Patrick Waddell
09:30 Effects of viewing conditions on fatigue caused by watching 3DTV
Toshiya Morita and Hiroshi Ando (National Institute of Information and Communications Technology, Japan)
In order to enjoy a pleasant experience watching 3DTV, it is necessary to collect and analyze reliable safety assessment data. Evaluation experiments were conducted consisting of 500 adult participants watching 3D content for approximately one hour on commercially available 46 to 50-inch 3DTVs that require the use of shutter glasses. The degree of fatigue after watching the 3DTV was evaluated under various viewing conditions based on objective and subjective indexes of fatigue. The results of objective indexes showed that there was no statistical difference between watching 3DTV and traditional TV (i.e., watching 2D content without glasses) in degree of decline of visual and cognitive functions due to fatigue. On the other hand, the results of subjective indexes indicated that there were some differences between watching 3DTV and traditional TV in the sensation of fatigue, which may not be attributed to watching 3D content, but to wearing the 3D shutter glasses.
Presenter bio: Toshiya Morita received a diploma in college of information sciences in 1984 from University of Tsukuba, Ibaragi, Japan. He joined NHK (Japan Broadcasting Corporation) in the same year, and has been with NHK Science and Technology Research Laboratories since 1989. He is a senior research engineer and engaged in research on vision psychology, eye movement analysis, stereoscopic display and methods of objectively evaluating TV programs. He is currently on loan to NICT (National Institute of Information and Communications Technology) as research expert and involved in evaluation of 3DTV and 3D programs in terms of comfort and safety.
Toshiya Morita
10:00 High Performance Polarization-Based-3D and 2D Presentation
Systems for delivering 3D content in digital cinemas are inherently lossy. In the absence of careful attention to design, component efficiencies, and maintenance, the result can be an image luminance below 4.5 fL, even on modest sized screens. In addition, aspects of a 3D system can determine the performance of 2D presentation. Given the large installed base, methods for increasing brightness and image quality are sought which leverage existing projector platforms. This paper evaluates loss mechanisms in 3D projection systems and shows that high-brightness 3D is feasible using lamp-based illumination. These solutions can be implemented using single-projector sequential 3D, for conventional sized screens (averaging 40'), as well as premium large format screens in excess of 60'.
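To give a sense of the scale of the losses involved, here is a rough light budget with assumed, illustrative efficiencies (not measurements from the paper) showing how a 14 fL 2D reference level can fall to roughly 4.5 fL per eye in a polarization-based 3D system before any screen gain is counted.

# Rough 3D light budget with assumed, illustrative efficiencies (not measured values).
open_gate_2d_fl = 14.0           # DCI 2D reference luminance
polarization_split = 0.5         # light divided between the two eye states (assumed)
modulator_and_filter_loss = 0.8  # polarization switch, filters, coatings (assumed)
glasses_transmission = 0.8       # passive eyewear (assumed)
fl_3d = open_gate_2d_fl * polarization_split * modulator_and_filter_loss * glasses_transmission
print(f"~{fl_3d:.1f} fL per eye")  # ~4.5 fL before any screen-gain help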
Presenter bio: Dr. Gary Sharp joined RealD as Chief Technology Officer in 2007 after RealD acquired ColorLink. In 2011, Sharp assumed the additional title of Chief Innovation Officer. In 1995, Sharp co-founded ColorLink, where he served as Vice President of Research and Development as well as Chief Technology Officer. Under Sharp's leadership, in 2005, ColorLink played an instrumental role collaborating with RealD to develop RealD's first cinema system. Sharp is the inventor on more than 70 US issued patents relevant to display technology, polarization optics and liquid crystal projection systems, including key patents related to RealD's Cinema System. He is a co-author of Polarization Engineering for LCD Projection (Wiley & Sons, 2005). Sharp earned a B.S. in Electrical and Computer Engineering from UCSD, where he focused on Optics. He later earned a Ph.D. in Electrical and Computer Engineering from the University of Colorado, Boulder.
Gary Sharp

10:30 - 14:00

Exhibit Hall Open

Room: Exhibit Hall

10:30 - 11:00

Break in Exhibit Hall

Room: Exhibit Hall

11:00 - 12:30

Ultra-High-Definition Imaging

Room: Salon 1
11:00 120 Hz-frame-rate Super Hi-Vision Capture and Display Devices
Hiroshi Shimamoto (NHK & Japan Broadcasting Corporation, Japan); Kazuya Kitamura, Toshihisa Watabe, Hiroshi Ootake, Norifumi Egami and Yuichi Kusakabe (NHK, Japan); Yukihiro Nishida (NHK Science & Technology Research Laboratories, Japan); Shoji Kawahito (Research Institute of Electronics, Japan); Tomohiko Kosugi and Takashi Watanabe (Brookman Technology, Inc., Japan); Tadaaki Yanagi and Tetsuo Yoshida (Hitachi Kokusai Electric Inc., Japan); Hideki Kikuchi (Link Laboratory Inc., Japan)
NHK has been researching and developing Super Hi-Vision (SHV), with 33 megapixels (7,680 pixels by 4,320 lines), as a next-generation ultra-high-definition broadcast system. At last year's SMPTE conference, NHK reported that it had decided to double the frame rate of SHV video to 120 Hz to improve its motion portrayal. At this conference, we will report on the 120-Hz SHV devices we have developed. One is a 120-Hz SHV image-capture device using three 120-Hz 33-megapixel CMOS image sensors. The sensor uses 12-bit ADCs and operates at a data rate of 51.2 Gbit/s. We have also developed a 120-Hz SHV projector using three 8-megapixel LCOS chips and e-shift technology. These 120-Hz SHV devices were exhibited at our Open House in May 2012 and demonstrated superb picture quality with less motion blur.
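The quoted sensor data rate is easy to sanity-check: pixel count times bit depth times frame rate gives roughly 48 Gbit/s of active video per sensor, consistent with the stated 51.2 Gbit/s once readout overhead (an assumption here, not a figure from the paper) is added.

# Sanity check of the per-sensor data rate (active pixels only; readout overhead not included).
width, height, bits, fps = 7680, 4320, 12, 120
gbit_per_s = width * height * bits * fps / 1e9
print(f"active video per sensor: ~{gbit_per_s:.1f} Gbit/s")  # ~47.8 Gbit/s vs. the 51.2 Gbit/s quoted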
Presenter bio: Hiroshi Shimamoto received the B.E. degree in electronic engineering from Chiba University and the M.E. and Ph.D. degrees in information processing from the Tokyo Institute of Technology in 1989, 1991, and 2008, respectively. In 1991, he joined NHK (Japan Broadcasting Corporation). Since 1993, he has been working on research and development of UHDTV (ultra-high-definition TV) cameras and 120-fps 8K image sensors at the NHK Science & Technology Research Laboratories. In 2005-2006, he was a visiting scholar at Stanford University. He is a member of the IEEE.
Hiroshi Shimamoto
11:30 Development of a 70mm, 25-Megapixel Electronic Cinematography Camera with Integrated Flash Recorder
John J. Galt (Panavision, USA); Branko Petljanski (Panavision & Florida Atlantic University, USA)
This paper will describe the system design of the world's first 70mm, 25-megapixel, electronic-cinematography camera with an integrated flash memory recorder. Prior to 2004, the only so-called "4K" imaging systems consisted of a single line array of 4096 photosites or, in some instances, three 4096-photosite line arrays. A color "4K" scan of an Academy 35mm cine frame would generate a digital image equivalent to that of a 29-megapixel area-array camera sensor. In 2004, one camera manufacturer introduced a 4096 x 2048 pixel CCD camera for cinema applications and declared it a "4K" camera. Soon many would follow, and today this particular piece of obfuscation is rampant in the motion picture industry. Despite all the technical challenges in creating the first "True 4K" large-format digital cinema camera, one of the greatest challenges we face is how to end the "4K" confusion.
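The scale of the discrepancy the authors describe is easy to quantify from the numbers in the abstract alone; the sketch below is illustrative arithmetic, not material from the paper.

```python
# Simple pixel-count comparison of the "4K" claims discussed above.
single_chip_4k = 4096 * 2048            # one single-sensor camera marketed as "4K"
panavision_sensor = 25_000_000          # the 25-megapixel sensor described here
film_scan_equiv = 29_000_000            # abstract's figure for a color 4K Academy scan

print(f"4096 x 2048 single-chip sensor: {single_chip_4k / 1e6:.1f} Mpixels")
print(f"25-Mpixel 70mm camera sensor:   {panavision_sensor / 1e6:.1f} Mpixels")
print(f"stated 4K film-scan equivalent: {film_scan_equiv / 1e6:.1f} Mpixels")
# ~8.4 Mpixels vs. 25-29 Mpixels: that gap is the "4K confusion" the paper targets.
```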
Presenter bio: B. Petljanski received his B.Sc. and M.Sc. in EE from the University of Novi Sad in 1995. He also received an M.Sc. in EE and a Ph.D. in CE, in 2001 and 2010, both from Florida Atlantic University. His captivation with digital imaging started at the NASA Imaging Technology Space Center, where he was involved in the development of high-resolution cameras and recording systems. He is currently working in Panavision's Advanced Digital Imaging group as a senior engineer. At Panavision, he has been involved in conceiving and designing equipment for image acquisition, processing and storage. His special interest lies in optoelectronics, a magical area which studies the collection of photons and converts them into attractive images.
Branko Petljanski
12:00 1080p50/60, 4K and beyond: Future Proofing the Core Infrastructure to Manage the Bandwidth Explosion
John Hudson (Semtech Corp, Canada)
Traditional broadcast infrastructures only had to support one version each of SDTV and HDTV, plus extensions such as RGB 4:4:4 for better chroma keys. Now we need to support 4:4:4:4 for external keys, high-dynamic-range (HDR) imaging, stereoscopic 3D, a 3D disparity channel, Quad Full HD, higher frame rates, etc., all of which drive real-time streaming media bandwidth requirements. How do we accommodate these new demands and stay future-proof within our core broadcast infrastructure? This paper outlines the latest developments, at the technical and standardization levels, to handle the emergence of new production formats. It examines changes to the studio infrastructure which add the flexibility needed to accommodate new production formats alongside existing formats, with maximum compatibility and minimum confusion. It then suggests methods to greatly increase the observability of studio networks, and to improve functionality and compatibility at the control and monitoring plane.
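The bandwidth pressure described here can be made concrete with a rough calculation of active-picture payload for a few of the formats mentioned; the figures below assume 4:2:2, 10-bit sampling and ignore blanking and ancillary data, so they are illustrative rather than exact interface rates.

```python
# Rough active-picture data rates for several formats (4:2:2 sampling, 10-bit;
# blanking and ancillary overhead ignored).
def active_rate_gbps(width, height, fps, samples_per_pixel=2, bits=10):
    return width * height * fps * samples_per_pixel * bits / 1e9

formats = {
    "1080p30":          (1920, 1080, 30),
    "1080p60":          (1920, 1080, 60),
    "2160p60 (QFHD)":   (3840, 2160, 60),
}
for name, (w, h, fps) in formats.items():
    print(f"{name:16s} ~{active_rate_gbps(w, h, fps):5.2f} Gbit/s")
# Each step roughly doubles or quadruples the payload the core plant must carry.
```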
Presenter bio: John Hudson is Director of Product Definition and Broadcast Technology in the Gennum Products Group of Semtech Corporation. His responsibilities include technology strategy, product definition and international standardization for Semtech GPG's video and datacom business. Hudson has spent 28 years in the broadcast industry, beginning his career as a design engineer at Sony Broadcast and Professional Europe. He joined Gennum in 1999 and has been instrumental in developing the company's video and multimedia semiconductor business. An active member of SMPTE and a SMPTE Fellow, Hudson serves as Co-chair of TC 10E - Essence and Chair of the 32NF40 Working Group on SDI Mapping. He is the author of several SMPTE Standards and actively contributes to the development of real-time streaming media interfaces for video and D-Cinema production. Hudson is actively involved in the formation and development of the HDcctv Alliance™; as chair of its technology committee, his responsibilities include the development of all standards and compliance testing programs. He attained an HND in electronics and communications engineering from Farnborough College of Technology in 1988, is the author of 10 patents on video processing and signal-integrity solutions for multimedia applications, and regularly contributes technical papers and presentations to seminars and technology events in both the broadcast and CCTV industries.
John Hudson

Evolving Broadcast Infrastructure (Part 1)

Room: Salon 2
11:00 Production Media Data Centers: Scalable computing, networking, virtualization, and adaptive bit rate encoding
Tom Ohanian (Cisco, USA)
With "TV Everywhere" offerings driving the consumption of content, significant needs have developed in addressing the requirements of digital media supply chains. For content providers and service providers, architecting and implementing solutions to serve "TV Everywhere" require flexible and agile infrastructures. The concept of a Virtualized Production Media Data Center combines scalable computing, dense networking, and the virtualization of media applications to address the technology and business process change requirements for the Media & Entertainment industry.
Presenter bio: Tom Ohanian is a member of the Digital Media Strategy team at Cisco Systems. He was on the founding team at Avid Technology and is the co-inventor of the Avid Media Composer, Film Composer, and Multicamera Systems. He has extensive broadcast engineering, production, and post-production experience and is an Academy Award and two-time Emmy recipient for scientific and technical invention.
Tom Ohanian
11:30 A study of the optical distribution costs of multichannel baseband digital broadcasts over an FTTH network
Takeshi Kusakabe (Japan Broadcasting Corporation (NHK), Japan); Takuya Kurakake (NHK (Japan Broadcasting Corporation), Japan); Kimiyuki Oyamada (NHK (Japan Broadcasting Corporation), Japan); Yoshihiro Fujita (Ehime University, Japan)
We have previously proposed a baseband time-division multiplexing method for the transmission of digital broadcasts over FTTH. Here, we evaluate the transmission equipment cost of the proposed method based on a simple assumed distribution network. We predict that the cost can be decreased to 11-36% of that of conventional sub-carrier multiplexing (SCM) and FM-conversion transmission methods. By analysing the dominant factors affecting the cost, we show that significant savings are achieved because an optical signal can be received at a lower power using the proposed method than with conventional transmission methods.
Presenter bio: Takeshi Kusakabe received the B.E. and M.E. degrees in science and engineering from Waseda University, Tokyo, Japan, in 1999 and 2001, respectively. He joined Japan Broadcasting Corporation (NHK) in 2001 and worked in the broadcast engineering department. From 2004 to 2010, he was engaged in research and development on the optical transmission of digital broadcasting signals for cable television at NHK Science & Technology Research Laboratories. Since 2011, he has been engaged in research and development at Ehime University. His current interests are the transmission of baseband and modulated radio-frequency signals for HDTV/UHDTV. He is also a member of ITE (Institute of Image Information and Television Engineers).
Takeshi Kusakabe
12:00 Beyond HD - what are the options - 4K or 3D - what will be successful and when?
Hans Hoffmann (European Broadcasting Union, European Union)
Many broadcasters are still rolling out their HD services, but the industry is already looking into immersive media. Starting with an analysis of the market drivers, the paper examines the six most important technology parameters for enhancing the media experience. The presentation will give an update on the most recent standardization efforts in ITU-R, SMPTE, DVB, and MPEG, investigate the ecosystem chain from content creation to the consumer, and describe the standards that will be needed. It will also describe an EBU project that looked into the options for Beyond HD from a broadcaster's point of view: 4K@50p test content was shot at the RAI production studios in order to perform HEVC compression and determine the bit rates required for distribution to 4K consumer displays, and 3D stereoscopic content at 1080p/50 per eye was generated from the same scenes. The presentation will conclude by summarizing the findings.
Presenter bio: Hans Hoffmann was born in Munich, Germany. He holds a diploma in telecommunication engineering from the University of Applied Sciences in Munich and a Ph.D. from Brunel University West London, School of Engineering and Design. From 1993 to 2000, Hoffmann worked at the Institut für Rundfunktechnik in research and development for new television production technologies. In 2000, he joined the European Broadcasting Union (EBU) as a senior engineer in the technical department; he is currently Head of Media Fundamentals and Production Technology. Hoffmann has chaired the EBU project groups P/BRRTV and P/PITV, which were both involved in standardization activities such as SDTI and file formats. Hoffmann is currently SMPTE Engineering Vice President. He chaired the SMPTE technology committee on Networks and File Management and has served as Engineering Director, Television. He has been involved in EBU activities on 3D and high-definition television production and emission and set up the HDTV testing laboratory at the EBU. Hoffmann is a fellow of SMPTE and a member of the Institute of Electrical and Electronics Engineers (IEEE), FKT (Germany), and the Society for Information Display (SID).
Hans Hoffmann

12:30 - 14:00

Boxed Lunch In Exhibit Hall

Room: Exhibit Hall

14:00 - 15:30

Evolving Broadcast Infrastructure (Part 2)

Room: Salon 2
Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
14:00 Towards documenting AVC Proxies in MXF
J. Patrick Waddell (Harmonic Inc., USA)
Inclusion of AVC coding within MXF has been a "hot topic" within SMPTE this year. One aspect of that interest has been documenting an interoperable "proxy" file format using AVC video and AAC audio. A SMPTE RDD is in preparation to do just that and the presentation will focus on the development of that document.
Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in four Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
J. Patrick Waddell
14:30 4K TV Capture: An Early Experience Sharing
Jerome Vieron (ATEME, France); Matthieu Parmentier (FranceTelevisions, France)
As the French public broadcaster, editor of 13 channels, 4 of which are HD, francetélévisions is studying 4K TV broadcasting scenarios for its premium programs. Enhancing the sense of realness and the viewing comfort and creating an immersive experience: such is the quest of any incumbent broadcaster looking to embrace the future of television. francetélévisions has undertaken 4K/60p production experiments to evaluate future workflows and, specifically, to work on adapting filming methods and materials. The quality of experience is evaluated taking into account the impact of compression on both 4K digital content and scanned films, with specific attention to noise levels. After an early report on the available 4K cameras suitable for TV applications, the impact of compression technology on different types of 4K content will be presented, with a particular focus on the HEVC (High-Efficiency Video Coding) codec as the natural compression standard for upcoming 4K TV applications.
Presenter bio: Jérôme Viéron received the Ph.D. degree in signal processing and telecommunications from the University of Rennes 1 in 1999. He joined Thomson R&D France as a research engineer working on advanced video coding. He is an active contributor to standardization efforts led by the ISO and ITU-T groups and was very active in the H.264/MPEG-4 SVC standardization process. In 2007, he joined the Video Processing and Perception Lab of Technicolor R&I as a senior scientist, exploring new technologies for future video coding applications and standards. He joined ATEME in 2011 as Advanced Research Manager. He is in charge of French and European research programs and works on new-generation video coding technologies. He is an active contributor to the standardization process of HEVC (High Efficiency Video Coding), the future video coding standard, and is involved in the 4EVER (Enhanced Video ExpeRience) consortium, which aims at researching, developing and promoting an enhanced television experience.
Presenter bio: Matthieu Parmentier works as an R&D project manager at francetelevisions. He is vice-chairman of the EBU strategic project FAR (Future of Audio and Radio production), which includes the well-known PLOUD group, editor of the R128 loudness recommendation. Matthieu started his audio career recording classical music CDs. He joined francetelevisions in 1999 as a sound engineer for live programs. From 2003 to 2007, as a news reporter, he was in charge of sound recording, video editing and outdoor satellite transmissions. Since 2008, he has been working as manager for multichannel audio and HD video development projects. Matthieu holds two licence degrees, in sound recording and video post-production, and a master's degree in audiovisual research (Toulouse University).
Jerome Vieron, Matthieu Parmentier
15:00 Systemization of Network-Based Genlock
Paul Briscoe (Consultant & SMPTE, IEEE, Canada)
Traditional synchronization systems based on blackburst, tri-level sync, DARS and timecode have little relevance in the networked world, and today represent a cumbersome and dated infrastructure overhead. These legacy systems, while continuing to provide utility, do not map into the future systems which are evolving today. SMPTE is working on a universal method for delivering any reference to a virtually unlimited number of devices over an IP network. This technology spans everything from simple configurations of a few pieces of gear to campuses, regional and global networks. This paper will investigate the opportunity for system designers to migrate to this new infrastructure, discussing the core network technology on which it is based, as well as system-level deployment considerations. In addition to replacement of existing synchronization architectures, it will explore new opportunities not possible using legacy methods.
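The essence of the approach described here is that devices derive signal phase from a shared network time base rather than from a distributed analog reference. A minimal sketch of that derivation follows; the epoch, frame rate, and PTP-style time source are assumptions for illustration, not details taken from the paper.

```python
from fractions import Fraction

# Minimal sketch: given a shared network time (e.g., from a PTP-style clock)
# and an agreed epoch, every device can compute the next video frame boundary
# independently and land in phase -- the essence of network-based genlock.
# The epoch and frame rate below are illustrative assumptions.
EPOCH = 0.0                          # agreed reference instant, seconds
FRAME_RATE = Fraction(30000, 1001)   # 29.97 Hz expressed as an exact ratio

def next_frame_boundary(now: float) -> float:
    """Return the time of the next frame edge after 'now', measured from EPOCH."""
    frames_elapsed = (now - EPOCH) * FRAME_RATE
    next_frame = int(frames_elapsed) + 1
    return EPOCH + float(next_frame / FRAME_RATE)

print(next_frame_boundary(123.456))  # the same answer on every device on the network
```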
Presenter bio: Paul began his career in the broadcasting industry in 1980 at the CBC in Toronto. Specializing in the then-new arena of digital television, he was one of the designers of the Toronto Broadcast Center, with particular focus on the plant routing system, computer graphics facilities and overall systemization and timing. Prior to CBC (and during a brief hiatus), he was involved in technology startups and provided system and product design consultation to various clients. He jumped ship from CBC in 1994 to join Leitch Technology as Product Engineer, defining products for the new digital era. Over his 19 years at Leitch (subsequently Harris Broadcast, now Imagine Communications), he was a Project Leader, Development Group Leader, R&D Manager, Manager of Strategic Engineering and Principal Engineer. He left Harris Broadcast in November 2013 and now provides system, technology, design and standards consultation to the ever-evolving media industry. He has several patents granted and in process, is a member of SMPTE and IEEE, and is an active participant on numerous SMPTE standards committees. A lifelong Radio Amateur, Paul is also an avid curler in the winter and a cyclist and gardener in the summer.
Paul Briscoe

Sound Techniques (Part 1)

Room: Salon 1
14:00 Adventures In Cinema Sound-The Birth Of A New Technical Committee
Brian Vessa (Sony Pictures Entertainment, USA)
The current standards for calibration of the sound produced in cinemas worldwide are based on work started in the 1970s employing acoustical real-time analysis of pink noise injected into the cinema sound system at the normalized output stages, termed the B-Chain. Two years ago, SMPTE convened a Study Group to evaluate these standards in light of the evolution from analog to digital technology for the program material, the theatrical equipment, the test instrumentation, and the methods of acoustical analysis. Dozens of meetings and thousands of man-hours of discussion and testing have produced a wide-ranging report, soon to be published. Brian Vessa, Chair of the Study Group, will summarize the findings in this session.
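For context, the real-time analysis referred to here operates on a standard grid of 1/3-octave bands. The short sketch below simply computes those band centers; it illustrates the measurement grid and is not drawn from the Study Group report.

```python
# The 1/3-octave analysis mentioned above works on a fixed set of band centers.
# Exact base-ten centers from the usual fractional-octave definition (nominal
# labels such as 31.5 Hz or 8 kHz are rounded from these values).
centers = [1000 * 10 ** (n / 10) for n in range(-17, 14)]   # ~20 Hz to ~20 kHz
print(", ".join(f"{f:.0f}" for f in centers))
```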
Presenter bio: Brian Vessa is an audio professional with over 35 years of experience. After attending UCLA Engineering School he became a recording engineer, producing albums and recording orchestras. He was known for hot-rodding studio gear. Brian transitioned into film post as a music editor and sound editor, became a re-recording mixer at Cannon Films and MGM, then handled audio restoration at NT Audio. He was hired by Sony Pictures in 1998, and today is their Executive Director of Digital Audio Mastering and representative to DCI. Brian is a member of the Academy Sound Branch, SMPTE and AES. He chairs the SMPTE B-Chain Study Group as well as D-Cinema and IMF AHG's. He has written many audio specifications, including a white paper on near-field mixing for home theater that has been widely adopted. Brian enjoys recording jazz, rock and mixing live sound. He is a drummer and keyboardist in the LA area, an avid backpacker and an award-winning home wine and beer maker.
Brian Vessa
14:30 Further investigations into the interactions between cinema loudspeakers and screens
Brian Long (Skywalker Sound, Lucasfilm Ltd, USA); Roger W Schwenke (Meyer Sound Laboratories, USA); Glenn Leembruggen (Acoustic Directions and ICE Design Australia & Associate of Sydney University, Australia); Peter Soper (Meyer Sound Laboratories, USA)
Modern-day data acquisition techniques allow the gathering of high-resolution polar data to assess the performance of loudspeakers. While these techniques have become common in laboratory and engineering environments during product development, these same techniques can be applied to aspects of exploration beyond initial product development which affect the in-situ performance of loudspeakers. This paper will use modern high-resolution data acquisition techniques and analysis tools to investigate the complexity of the interactions between loudspeakers in typical locations behind the screen in a typical cinema presentation environment. Discussion will explore the impact on the loudspeaker responses of various types of screen surfaces and the distance between the screen and the loudspeakers. The measured performances will be compared to the standards for system response set out in ST-202, and the impact on the patron listening experience will be assessed.
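One of the interactions in question, the delayed reflection between a loudspeaker and the back of the screen, can be illustrated with a single-reflection comb-filter model. The spacing used below is an assumed example value; the paper itself relies on measured high-resolution data rather than this simplification.

```python
# Crude single-reflection model of loudspeaker-to-screen interaction
# (illustrative only; spacing and reflection assumptions are ours).
c = 343.0        # speed of sound, m/s
d = 0.30         # assumed loudspeaker-to-screen spacing, metres

delay = 2 * d / c                                    # extra path of the reflected sound
notches = [(2 * k + 1) / (2 * delay) for k in range(4)]
print(f"delay = {delay * 1000:.2f} ms, first comb-filter notches (Hz):",
      [round(f) for f in notches])
```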
Presenter bio: With over 15 years in professional audio, Long has a diverse and extensive knowledge of the design and implementation of sound reinforcement and playback systems for all types of installations, ranging from simple single-speaker events to massive show spectaculars and multi-channel media presentation environments. Long holds a Master of Fine Arts from the University of Southern California's School of Cinematic Arts, where he specialized in post-production audio and worked on advanced multi-channel audio concepts.
Presenter bio: A member of the Meyer Sound technical staff since 1986 and holder of several design and technical patents for loudspeaker technology.
Brian Long, Peter Soper
15:00 Frequency Response Versus Time-of-Arrival for Typical Cinemas
Louis Fielder (Dolby Laboratories Inc., USA)
Cinema equalization is typically based on the use of 1/3-octave, minimum-phase filtering to adjust the spatial average of the steady-state magnitude response from multiple microphones to the X-curve. This paper explores one aspect of this process, namely whether the use of the steady-state response is appropriate. To do this, the relationship between early-arrival and steady-state spectral characteristics for typical cinemas was examined. The comparison between early-arrival and steady-state sounds was made via spectral analyses of impulse responses measured at multiple microphone locations within the audience seating area. The cinemas surveyed varied in size between 30 and 1,500 seats, and the time-gating intervals varied from 4 ms to that equivalent to steady state. When this was done, front-loudspeaker measurements showed little upward spectral tilt toward "brightness" for early-arrival compared to steady-state sounds, and a modest upward tilt for surround loudspeaker arrays in the largest cinemas.
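The gating comparison the abstract describes can be sketched in a few lines: window the measured impulse response to an early-arrival interval, transform it, and compare against the spectrum of the full response. The code below uses a synthetic impulse response as a placeholder; it illustrates the method only and reproduces none of the paper's data.

```python
import numpy as np

# Minimal sketch of a time-gated vs. steady-state spectrum comparison.
# 'impulse_response' is a synthetic placeholder (decaying noise), not real data.
fs = 48_000                                    # sample rate, Hz
rng = np.random.default_rng(0)
impulse_response = rng.standard_normal(fs) * np.exp(-np.arange(fs) / (0.3 * fs))

def gated_spectrum(ir, gate_ms):
    """Magnitude spectrum of the impulse response, gated to the first gate_ms."""
    n = int(fs * gate_ms / 1000)
    gated = ir[:n] if gate_ms else ir          # gate_ms = 0 means use the full IR
    return np.abs(np.fft.rfft(gated, n=len(ir)))

early  = gated_spectrum(impulse_response, 4)   # 4 ms early-arrival window
steady = gated_spectrum(impulse_response, 0)   # full IR ~ steady state
diff_db = 20 * np.log10(early.mean() / steady.mean())
print(f"broadband level difference, early vs. steady: {diff_db:.1f} dB")
```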
Presenter bio: Louis Fielder received a B.S. degree in electrical engineering from Caltech in 1974 and an M.S. degree in acoustics from UCLA in 1976. From 1976 to 1978 he worked on electronic design at Paul Veneklasen and Associates. From 1978 to 1984 he was involved in digital-audio and magnetic-recording research at the Ampex Corporation. Since 1984 he has worked at Dolby Laboratories on psychoacoustics for audio design and on audio coders for music distribution, transmission, and storage applications, i.e., AC-1, AC-2, AC-3, Enhanced AC-3, AAC, and Dolby E. Additionally, he has investigated perceptually derived limits on the performance of digital-audio conversion, low-frequency loudspeaker systems, and loudspeaker-room equalization. Currently, he is working on cinema equalization and acoustics. He is a fellow of the AES, a senior member of the IEEE, and a member of SMPTE and the ASA. He was on the AES Board of Governors during 1990-1992, President during 1994-1995, and Treasurer during 2005-2009.
Louis Fielder

15:30 - 16:00

Break

16:00 - 17:30

Evolving Broadcast Infrastructure (Part 3)

Room: Salon 2
Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
16:00 Broadcasting video over the Cellular network and the Internet
Michael Payne (Vislink, USA); John Wood (Vislink, Inc., USA); Nuraj L Pradhan (Vislink, Inc, USA)
Huge chunks of the off-air broadcast and microwave television spectrum have slowly and systematically been lopped off in deference to the burgeoning demand and unquenchable thirst for broadband services. The first wave came with the DTV conversion, and then again with the 2 GHz ENG BAS analog-to-digital spectrum reduction. All the while, the volume of newsgathering content and distribution has dramatically increased to keep pace with the public's insatiable hunger for these ever-evolving, media-rich services. So how can the broadcaster function in the midst of this escalating and diametrically opposed broadcast environment? The answer is through increased efficiencies in modulation and encoding techniques and leveraged augmentation of 3G/4G wireless infrastructures. This paper explores the new use of bandwidth-reducing modulation techniques and the shift from MPEG-2 to MPEG-4 (H.264) video encoding, thereby creating higher value propositions for this encroached-upon television broadcast spectrum.
Presenter bio: Nuraj Lal Pradhan is a Wireless Network Engineer at Vislink, Inc. In this role, Nuraj utilizes his expertise in the design and development of protocols and technologies in core IP, cellular, and wireless sensor/mesh networks. His focus is the development of video-adaptation algorithms over wireless (cellular and mesh) networks. Nuraj earned his Ph.D. in Electrical Engineering from the City University of New York. In addition, Nuraj holds a Master of Science degree in Communication Networks and Services, a Master of Engineering degree in Telecommunications, and a Bachelor of Science degree in Electronic Engineering. His achievements include a patent disclosure application for “Distributed Power Management Algorithm for Mobile Ad-Hoc Wireless Networks”. Nuraj's work and education in the wireless and telecommunications field have provided worldwide exposure to standards and working environments including Europe, Asia and the USA.
Nuraj L Pradhan
16:30 Multiformat Operation - System Implications and Solutions for Routing Switchers
Alan Smith and Kim Francis (Snell, United Kingdom)
This paper discusses the formats and media involved when routing multiple simultaneous video and audio formats over a variety of physical interconnections, examines the implications of operating in such an environment, proposes possible operational practices, and reviews the practical solutions available. The simultaneous use of multiple video and audio formats, on a variety of physical interconnections, coupled with the demand for increased efficiency, has created new challenges for today's systems engineers and planners when specifying a routing switcher. Transitioning to various high-definition video formats and increasingly dense audio formats has increased the overall complexity of these multi-format systems: a 36,864 x 36,864 embedded audio matrix for an 1152 x 1152 video router! At the same time, audio and video processing requirements must be accommodated. Internal processing enables system and operational efficiencies: control is simplified, and a flexible input/output arrangement allows easy reconfiguration between uses. Finally, a glimpse into the future: will it get easier?
Presenter bio: Currently Senior Product Manager for Snell Limited (UK), responsible for Advanced Routing and Processing Systems. Previous employment includes 20 years with Vistek Electronics Limited (UK), where he was Commercial Director prior to the takeover by Probel (UK) and the subsequent merger with Snell and Wilcox. Qualifications: B.Sc. (Hons) in Physics; Chartered Engineer (C.Eng); member of the IEEE.
Kim Francis
17:00 Here Comes Ethernet
Stephen H Lampen (Belden, USA)
Ethernet has been around since 1973, and you're probably aware of many companies that have struggled to make it work for audio and video applications. But those are proprietary systems where, often, Box A can't talk to Box B. So the IEEE, which owns the Ethernet standard, has been working on an extension to the Ethernet standards known as 802.1BA, where AVB stands for Audio Video Bridging. Finally, this standard may herald a new way to design, install and operate audio and video facilities.
Presenter bio: Steve Lampen has worked for Belden for twenty-one years and is currently Multimedia Technology Manager and also Product Line Manager for Entertainment Products. Prior to Belden, Steve had an extensive career in radio broadcast engineering and installation, film production, and electronic distribution. Steve holds an FCC Lifetime General License (formerly a First Class FCC License) and is an SBE Certified Broadcast Radio Engineer. On the data side he is a BICSI Registered Communication Distribution Designer. In 2010, he was named “Educator of the Year” by the National Systems Contractors Association (NSCA), and in 2011 was named “Educator of the Year” by the Society of Broadcast Engineers. His book, "The Audio-Video Cable Installer's Pocket Guide" is published by McGraw-Hill. His column "Wired for Sound" appears in Radio World Magazine. He can be reached at steve.lampen@belden.com
Stephen H Lampen

Sound Techniques (Part 2)

Room: Salon 1
16:00 Tutorial on Critical Listening of Multi-channel Audio Codec Performance
Sunil Bharitkar and Grant Davidson (Dolby Laboratories, USA); Louis Fielder (Dolby Laboratories Inc., USA); Poppy Crum (Dolby Laboratories, USA)
Listening for impairments introduced by multichannel audio codecs is an important task. Classical objective methods are not adequate for assessing audio coding schemes. Accordingly, the ITU-R BS.1116 and BS.1534 recommendations provide guidelines for the subjective evaluation of codecs. This paper provides a tutorial on the proper conditions for reliable codec testing. Several key components covered are proper experimental design, selection of the listening panel and training of listeners, development of the test methodology, selection of balanced program material, loudspeaker/room and sound-field requirements, listening for artifacts, and statistical analysis. This paper addresses these various components, including the sound-field requirements since, as per the ITU: "The characteristics of the reference sound field at the listening area are most important for the subjective perception of, or the quality assessment of, auditory events and their reproducibility at other listening places or rooms. These characteristics result from the interaction of the loudspeaker(s) and the listening room".
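As an illustration of the statistical-analysis step listed above, the sketch below reduces a set of made-up per-listener difference grades to a mean and confidence interval, using a normal approximation rather than the exact procedures in the ITU-R recommendations.

```python
from statistics import mean, stdev
from math import sqrt

# Minimal sketch of the statistical summary a BS.1116-style test ends with:
# per-listener difference grades for one codec/item, reduced to a mean and a
# confidence interval.  The scores below are made-up illustration data.
diff_grades = [-0.4, -0.7, -0.2, -0.5, -0.3, -0.6, -0.4, -0.8, -0.1, -0.5]

m = mean(diff_grades)
ci_half = 1.96 * stdev(diff_grades) / sqrt(len(diff_grades))   # normal approximation
print(f"mean difference grade {m:+.2f}, "
      f"95% CI [{m - ci_half:+.2f}, {m + ci_half:+.2f}]")
# If the interval excludes 0, the impairment is judged statistically significant
# for this panel.
```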
Presenter bio: Sunil Bharitkar received his Ph.D. in Electrical Engineering from the University of Southern California (USC) in 2004. He has published over 50 technical papers, holds 6 patents in the area of signal processing applied to audio and acoustics, and is the author of a textbook (Immersive Audio Signal Processing) from Springer Verlag. He co-founded Audyssey Laboratories and was its VP of Research before joining Dolby Laboratories in the Office of the CTO as Director of Technology Strategy. His room-equalization research, at both USC and Audyssey, has resulted in several patented or patent-pending co-inventions. His area of research is signal processing applied to audio and acoustics, using theory and knowledge from acoustics, signal processing, and auditory perception. Some of his recent research leading to inventions includes room equalization, dynamic noise compensation for automobile/home-theater/airplane environments, bandwidth extension of speech signals affected by telephony channels, psychoacoustic and physical bass extension, surround envelopment, noise compensation and suppression for telephony, and spatial/3-D audio.
Sunil Bharitkar
16:30 Scalable Format and Tools to extend the possibilities of Cinema Audio
Charles Robinson, Sripal Mehta and Nicolas Tsingos (Dolby Laboratories, USA)
Surround sound has been making cinematic storytelling more compelling and immersive for over 30 years. The first widely deployed surround systems used magnetic recording. Later, optical recording became standard, enabling up to 7.1 channels of audio. With the transition from film to digital distribution, there is an opportunity for the next generational step forward. In this paper we describe a new surround sound format that dramatically advances the capabilities of cinema sound. The format was developed in close cooperation with industry stakeholders and was specifically designed to provide the most desired new capabilities and a path for future enhancements, while respecting and leveraging the strengths and know-how of the current sound format and pipeline. In particular, the new system maintains and advances the ability to deliver impeccable audio quality, and flexibly extends the creative possibilities to meet the needs and aspirations of both content creators and exhibitors.
Presenter bio: Charles Robinson received BSEE and MSEE degrees from the University of Illinois, where he specialized in signal processing and began his professional career just as real-time digital audio signal processing was becoming a practical reality. Since joining Dolby Research in 1995, his areas of research have included acoustics, audio coding, interactive audio, and spatial audio, with applications to broadcast, gaming, and cinema. Mr. Robinson has authored or co-authored over a dozen patents in audio signal processing, contributed to two Emmy Award-winning products, and is a member of the AES and IEEE.
Charles Robinson
17:00 Lee de Forest and the Invention of Sound Movies, 1918-1926
Mike Adams (San Jose State University, USA)
Lee de Forest received his Ph.D. in physics from Yale in 1899 and entered the 20th century as an inventor. By 1906 he had patented his signature invention, the three-element vacuum tube he called the "Audion." Beginning in 1918 he improved upon the earlier work of Bell and Ruhmer and patented a system of writing sound on motion picture film for synchronized talking pictures. Between 1920 and 1926 he worked with fellow inventor Theodore Case to develop the Phonofilm system of variable-density recording. De Forest presented his work to the SMPE on four occasions between 1923 and 1926. Even though the de Forest system would not end up being the preferred one for sound, his tube would be the key, as it allowed amplification of audio through loudspeakers, which made it possible for audiences to experience talking pictures. In 1960 de Forest received an Oscar for his sound-on-film contributions.
Presenter bio: Mike Adams has been a radio personality and a film maker. Currently he is a professor of radio, television, and film at San Jose State University, where he has been a department chair and an associate dean. As a researcher and writer of broadcast and early technology history, he created two award-winning documentaries for PBS, “Radio Collector” and “Broadcasting's Forgotten Father.” He has published numerous articles and four books, including "Charles Herrold, Inventor of Radio Broadcasting" and "Lee de Forest, King of Radio, Television, and Film" (Springer Science, 2012).
Mike Adams

19:00 - 22:00

Honors and Awards Dinner & Ceremony

22:15 - 23:15

Afterparty with SMPTE Jam