Program for 2012 SMPTE Annual Technical Conference
Tuesday, October 23
Tuesday, October 23, 08:00 - 09:00
Room: Ray Dolby Ballroom Terrace
Tuesday, October 23, 09:00 - 09:15
Rooms: Salon 1, Salon 2
Tuesday, October 23, 09:15 - 10:15
Room: Salon 1
- 09:15 Keynote Address
- Anthony Wood, Founder & CEO, Roku. Presenter bio: A pioneer and innovator in TV and digital media, Anthony Wood is the Founder and CEO of Roku, a name that means “six” in Japanese, representing his sixth company. In the early days of Roku, Anthony also served as Vice President of Internet TV at Netflix, where he developed what is known today as the Roku streaming player, originally designed as the video player for Netflix. Prior to Roku, Anthony invented the digital video recorder (DVR) and founded ReplayTV, where he served as President and CEO before the company's acquisition and subsequent sale to DirecTV. Before ReplayTV, Anthony was Founder and CEO of iBand, Inc., an Internet software company sold to Macromedia in 1996. The code base developed by Anthony at iBand became a central part of the core code of the Macromedia product now known as Adobe Dreamweaver. After selling iBand, Anthony became Vice President of Internet Authoring at Macromedia. Earlier in his career, Anthony was Founder and CEO of SunRize Industries, a supplier of hardware and software tools for non-linear audio recording and editing. Anthony holds a bachelor's degree in electrical engineering from Texas A&M University.
Tuesday, October 23, 10:15 - 10:45
Tuesday, October 23, 10:45 - 12:15
Applying File-Based Workflows
Room: Salon 1
- 10:45 The Pipe Dream Becomes Real: Advertising Workflows Come of Age
- The past year has been incredibly eventful in the development of advertising workflows. We can now embed a digital version of the advertising slate with delivered commercials, using the AMWA's AS-12. BXF can be used to exchange the schedule of commercials, instructions to move them from point A to point B, and their metadata. It is also developing the ability to move copy rotation instructions from agency to broadcaster, filling the biggest gap existing today in the workflow. Ad-ID bridges all of this, making unique commercial identification simple. With an ever-expanding array of delivery platforms, as well as targeted advertising, maximum efficiency for advertising workflows has gone from a nice idea to a must-have. The good news is that we now have the tools to make it all work. We'll show how the whole thing fits together today, using industry-standard approaches, taking the pipe dream of automated advertising workflows to reality. Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he's a Porsche racer and heads up a high-performance driving program. Presenter bio: Harold S. Geller is Chief Growth Officer of Advertising Digital Identification LLC (Ad-ID), a US-based advertising metadata system (the UPC code for ads across all platforms), which is a joint venture of the American Association of Advertising Agencies (4A’s) and the Association of National Advertisers (ANA). Harold speaks and writes extensively regarding interoperability, digital workflow, and metadata in advertising and is the co-author of four white papers on the subject.
Harold’s advertising career spans nearly 30 years in the United States and Canada. He has worked in media buying/planning, account management, financial, and technology roles at MindShare, Ogilvy & Mather, McCann Erickson, and the now-defunct Ted Bates and Foster Advertising. Harold is a graduate of radio and television broadcasting from Seneca College (Toronto, Ontario, Canada).
- 11:15 Lessons Learned Implementing FIMS 1.0
- This presentation describes practical lessons learned while implementing service interfaces in accordance with the Framework for Interoperable Media Services (FIMS) 1.0. FIMS is a framework of service definitions for implementing media-related operations using a Service-Oriented Architecture (SOA) approach. Experiences gained through building a portable test harness that implements data-driven simulators for both the Service Consumer (orchestration layer) and Service Provider interfaces will be shared. Presenter bio: Ian Hamilton has been an innovator and entrepreneur in Internetworking infrastructure and applications for more than 20 years. As a founding member of Signiant, he has led the development of innovative software solutions to address the challenges of fast, secure content distribution over the public Internet and private intranets for many of the media and entertainment industries' largest companies. Prior to Signiant, Ian was Chairman and Vice President of Product Development at ISOTRO Network Management. He was responsible for launching ISOTRO's software business unit and created the NetID product suite before the company was successfully acquired by Bay Networks. Ian held senior management positions at Bay Networks and subsequently Nortel Networks, from which Signiant emerged. Previously, Ian was a Member of Scientific Staff at Bell Northern Research, performing applied research and development in the areas of Internetworking and security.
- 11:45 Developments in the Realization of Practical File Based Workflow Environments
- The imperative of file-based content environments has been compelling, but equally challenging. Physical media-based content environments have matured over virtually all of recorded history, and paradigms and tools have become so ingrained that their presence and utility have become second nature. To successfully make the transition to file-based operations, tools are necessary to facilitate the workflows and other attributes of a true file-based infrastructure. LTO-5 tape, with its high density, throughput, and fundamental reliability, and the Linear Tape File System (LTFS) are two such innovations. The fact that both technologies are well documented, standardized, and multi-sourced is another essential component and a leading indicator of positive contribution to file-based environments. This paper will discuss the business and technical necessities of moving to a file-based workflow, the history and attributes of LTO tape, the development and features of LTFS, and how all of these pieces can come together to create a modern environment. Presenter bio: David Pease has worked in the computer industry for more than 40 years. After running a successful consulting company for many years, he joined IBM Research in the early 1990s. At IBM he has concentrated on storage-related research; he has contributed to various projects, including Tivoli Storage Manager (TSM), the DVD standard and specifically the UDF file system, and most recently he led the development of LTFS (the Linear Tape File System). He received his Master's and Ph.D. in Computer Engineering from U.C. Santa Cruz. Presenter bio: Andrew G. Setos has spent his entire career at the cutting edge of audio-visual innovation, from production to distribution and exhibition and in virtually every form of content play. He has collaborated with some of the industry's most prolific creative talents and business executives to help realize their visions.
He is currently CEO of BLACKSTAR Engineering Inc., a firm that advises on the intersection of technology and media. Most recently he was President, Engineering for the Fox Group, where he was involved in almost every aspect of content creation and distribution. Previous to Fox, he was the lead engineering and operations executive at the company that launched MTV, VH-1 and Nickelodeon. His role is summarized in the recently published book I Want My MTV. Before that he spent several years at WNET as Chief Engineer, where he was involved in many innovative, award-winning productions, such as Live from Lincoln Center, Dance in America, Bill Moyers Journal and the MacNeil/Lehrer News Hour. He has applied for and been granted a variety of patents. Along the way he has received many distinctions, including being elected a Fellow of the Society of Motion Picture and Television Engineers and accepting three Emmys for Engineering from the Academy of Television Arts & Sciences, the most recent being the Charles F. Jenkins Lifetime Achievement Award. Andrew holds a Bachelor of Science degree from Columbia University School of Engineering and Applied Science.
Real-Time Workflows
Room: Salon 2
- 10:45 GPU-Based Real-Time 4K RAW Workflows
- Advances in digital imaging technology are fundamentally changing the cinema workflow and the tools artists and engineers traditionally use. Relatively inexpensive 4K resolution digital motion picture cameras capable of capturing and storing RAW sensor data with a wide dynamic range, high color gamut, and high bit depths all at frame rates that have traditionally been the domain of broadcast video are now available. Implementing a RAW workflow that provides real-time interactivity and a production path where all artistic choices are non-destructive requires a great deal of compute as every image displayed needs conversion from RAW sensor data to display oriented imagery and colorimetry. This highly parallel operation is well suited to the capabilities of modern graphics processing units (GPUs). This paper will present best practices for optimal GPU compute core and memory usage as well as efficient data transfer schemes for sensor data processing and display.Presenter bio: Tom is a Senior Applied Engineer for Media & Entertainment in NVIDIA's Professional Solutions Group where he focuses on the use of GPUs in broadcast, video and film applications ranging from pre-visualization to post production and live to air. Prior to joining NVIDIA, Tom was an Applications Engineer at SGI. Thomas has a M.S. degree in Computer Science from the Computer Graphics Lab at Brown University and a B.S. Degree from the Rochester Institute of Technology.
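The RAW development chain this abstract describes (demosaic, color transform, display mapping) runs independently per pixel, which is why it maps so naturally onto GPU compute cores. Below is a scalar Python sketch of the last two stages only; the color matrix and gamma are illustrative placeholders, not any camera's real values:

```python
def develop_pixel(r, g, b, matrix, gamma=1 / 2.2):
    """One pixel of the RAW development chain: 3x3 camera-to-display
    color matrix, clip to [0, 1], then display gamma. On a GPU this
    body would run as one thread per pixel."""
    rgb = [matrix[i][0] * r + matrix[i][1] * g + matrix[i][2] * b
           for i in range(3)]
    return [max(0.0, min(1.0, c)) ** gamma for c in rgb]

# With an identity matrix, only the transfer curve is applied:
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
out = develop_pixel(0.25, 0.25, 0.25, identity)
```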
- 11:15 Dynamic Rate Control Technologies enabling Priority Based Bandwidth Allocation for IP News Gathering Networks
- In this paper, we propose an IP based news gathering network where terminals share bandwidth in accordance with the DiffServ model. Seamless route connection of IP networks and dynamic bandwidth allocation enables speedy and accurate coverage. Therefore, we developed two key technologies: a dynamic rate control for live video transmission and a modified TCP that considers transmission priority. This rate control adjusts each encoding rate of multiple videos that share a common path to avoid video interruption. The developed TCP allocates bandwidths at an appropriate utilization ratio with consideration of their priority while the conventional TCP allocates bandwidths equally among TCPs in the common path, and this protocol maintains backward compatibility with the conventional TCP. We evaluated these technologies by performing transmission experiments and proved that both live flows and file based flows can share network bandwidth appropriately by using the maximum bandwidth of the network.Presenter bio: Shuhei Oda has 7 years' experience in the broadcasting industry, starting with program production and system operation in 1999, before he moved into NHK Science and Technical Research Laboratories. He has been working in the area of video transmission systems for program production using IP networking technologies since 2006. His research interest covers traffic control and management of IP networks for the purpose of program production and exchange, and practical technologies dedicated to reliable and speedy program production.
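The priority-based sharing described in this abstract, as opposed to conventional TCP's equal split, can be illustrated with a toy allocator (the flow names and weights are assumed for illustration; NHK's mechanism operates inside a modified TCP, not as a central allocator):

```python
def allocate(total_mbps, flows):
    """Split a common path's bandwidth among flows in proportion to
    priority weight, rather than equally as conventional TCP would.
    flows: {name: priority_weight}. Returns {name: mbps}."""
    wsum = sum(flows.values())
    return {name: total_mbps * w / wsum for name, w in flows.items()}

# A live contribution flow weighted 3:1 over a bulk file transfer:
shares = allocate(100.0, {"live_video": 3, "file_transfer": 1})
# live_video receives 75 Mbit/s, file_transfer 25 Mbit/s
```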
- 11:45 Real Time File System for Content Distribution
- This presentation gives a deep view into the development of a file system especially designed for scalable media files, including JPEG 2000 and H.264 SVC. By applying specially developed techniques, including the Substitution Strategy, a real-time capable file system can be built even if the mass storage, or the interface to it, is too slow to deliver the data in the desired time. Rather than skipping whole files, new caching strategies will be shown that, again, take advantage of the file-inherent scalability. The presented system also comprises an advanced user-rights management that allows for granting access rights to certain parts of a scalable file, rather than granting rights to whole files. Users will therefore get a different version of an image or video, dependent on their current access rights. Due to the Media Repackaging Component, these customized versions will be generated on the fly when a user requests them. Presenter bio: Heiko Sparenberg, born in 1977, received his Diploma degree in Computer Science in 2004 and a Master's degree in 2006. He joined Fraunhofer IIS in Erlangen as a software engineer in 2006. Today, Heiko is Head of the Digital Cinema group and responsible for several software developments, e.g. the easyDCP software suite for digital cinema. In 2015, he received a PhD degree for his work on improved data workflows in professional movie post-production using scalable media codecs, including JPEG 2000 and H.264 SVC.
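The substitution idea, serving fewer quality layers of a scalable file instead of stalling when storage is too slow, can be sketched as follows (layer sizes and the I/O budget are hypothetical numbers; this is not Fraunhofer's implementation):

```python
def layers_to_serve(layer_sizes_kb, budget_kb):
    """Serve the largest prefix of a scalable frame's quality layers
    (layer 0 = base, then enhancements) that fits the per-frame I/O
    budget, so a decodable image is delivered on time rather than
    skipping the frame entirely."""
    served, used = [], 0
    for size in layer_sizes_kb:
        if used + size > budget_kb:
            break
        served.append(size)
        used += size
    return served

# A four-layer JPEG 2000-style frame under a tight budget still yields
# a decodable base image plus one enhancement layer:
# layers_to_serve([100, 80, 60, 40], budget_kb=200) -> [100, 80]
```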
Tuesday, October 23, 12:00 - 20:00
Room: Exhibit Hall
Tuesday, October 23, 12:30 - 14:00
Room: Hollywood Ballroom-Mezzanine Level of the Loews Hollywood Hotel
- 12:30 Luncheon Keynote: NBC's Innovative Use of Technology at the 2012 Olympics
- Please join SMPTE's Luncheon Keynote Speaker, Darryl Jefferson of NBC's Olympics International Broadcast Center, as he presents an intriguing and enlightening view of the behind-the-scenes innovations and advanced technologies that enabled NBC to bring the 2012 London Olympics to virtually every corner of the world. Planned topics include a "Big Picture" look at transmission, inter-continental production (@home efforts), overall project size and scope, the MAM, the Highlights and Streaming Factories, and new media deliverables. This is a "don't miss" event! Presenter bio: With a career that cuts across television, film, and sound, Darryl Jefferson was named Director of Post Production Operations for NBC's Olympic division in 2008. Jefferson oversees and maintains the division's Stamford facility, where he also acts as the Highlights Factory Project Manager, and directs technical operations for NBC Sports Digital Group. In his current position, Jefferson took the Highlights Factory from conception through implementation at the London Olympic Games, having done the same at the 2010 Vancouver Winter Games. In London, the system delivered web, broadband, live stream, and VOD clips during the Olympics, creating 3000 highlight packages in 17 days. During the games, NBCOlympics.com saw upwards of 57 million unique visitors and 1.5 billion page views, shattering the records of previous games and taking content delivery technology to a whole new level. Jefferson is a four-time Emmy Award winner for New Approaches in Broadcasting, Short Form (2008, 2009, 2010), as well as a 2010 Technical Emmy Award (Tech Team, Studio).
Tuesday, October 23, 14:15 - 15:45
Performance Issues in File-Based Workflows
Room: Salon 1
- 14:15 Performance Parameters in File Based Workflows
- Establishing a high system-performance value for rich-media file-based workflows is tightly coupled to storage bandwidth. Configuring small-scale storage solutions can be straightforward and simple. However, larger enterprise-class systems that intend to grow, that must bridge other media platforms and peripherals, and that need to support multiple sets of clients and associated workflows require a proper storage solution with few limitations. The hidden issues that become performance killers in a large-scale storage solution are frequently misunderstood. This paper will present some of those hidden parameters; provide examples of how systems can be designed for scalability in both capacity and bandwidth; and show that by proper planning and implementation the consequences of a poorly designed, under-rated system can be alleviated. Presenter bio: As Chief Technologist at Diversified Systems, Karl Paulsen provides technology-driven engineering services for projects related to media asset management, advanced digital video systems, workflow, and media storage technologies. Actively involved in television engineering for over 35 years, Karl has held positions as CTO, VP of Engineering, and Director of Engineering for leading systems integration companies, broadcast television stations, mobile, CGI, and post-production companies. Karl is a SMPTE Fellow, Standards Committee participant, SBE Lifetime Certified Professional Broadcast Engineer, and an IEEE member. He is a recognized author and industry technologist, publishing over 150 articles for TV Technology magazine in his continuing series ‘Media Servers', which focuses on servers, storage, file-based workflow, and media management. Karl authored the books ‘Moving Media Storage Technologies' and ‘Video and Media Servers: Applications and Technology', and has held SMPTE manager and chair positions for the Pacific Northwest Section.
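The kind of back-of-envelope bandwidth sizing the paper argues for can be illustrated in one line (the client count, stream rate, and headroom factor below are assumed figures, not recommendations from the paper):

```python
def required_bandwidth_MBps(clients, stream_mbps, headroom=0.4):
    """Aggregate storage bandwidth must cover all concurrent client
    streams (Mbit/s -> MB/s) plus headroom for RAID rebuilds, protocol
    overhead, and growth; systems sized only for today's client count
    become the hidden performance killers the paper describes."""
    return clients * stream_mbps / 8 * (1 + headroom)

# 20 clients each playing a 100 Mbit/s mezzanine stream need roughly
# 350 MB/s of sustained aggregate storage bandwidth.
demand = required_bandwidth_MBps(20, 100)
```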
- 14:45 Optimised IP multicast architectures for real-time digital workflows
- Real-time digital workflows are now commonly being distributed over Internet Protocol (IP) networks, across the entire broadcast chain from production to distribution. Many deployments are leveraging IP multicast for optimising the delivery of a source to a set of diverse end points such as video servers, quality control units, video monitoring, time & sync slaves, etc. This paper focuses on the evolution of IP multicast delivery, architectural best practices, security considerations and hardware performance requirements.Presenter bio: Thomas Kernen is a Technical Leader in Cisco's Service Provider Video Software and Solutions (SPVSS) architecture team. His main area of focus is defining architectures for transforming the broadcast industry to an All IP Video infrastructure. Thomas is a member of the IEEE Communications and Broadcast Societies, the Society of Motion Picture & Television Engineers (SMPTE) and the Royal Television Society (RTS). He is active within a number of trade and industry organisations including the Digital Video Broadcasting (DVB) Project, the SMPTE Standards Committees and the European Broadcasting Union (EBU) working groups. Prior to joining Cisco, Thomas spent ten years with various telecoms operators, including a FTTH triple play operator, for whom he developed their video architecture.
- 15:15 Being The Change You Wish To See: Changing Broadcast Schedules Right Up To Air
- The Internet has caused us to think about the words "dynamic" and "media" in new ways. Viewers now have access to whatever they want, whenever they want. Advertising is no exception. Advertisers expect the right ad to be shown to the right person, on the right device. This includes changing their minds about what they want to advertise, when, and where, right up to the time that the viewer sees the ad. Sounds like a nightmare, doesn't it? It used to be. Fortunately, we now have SMPTE's Broadcast eXchange Format (BXF), which is perfect for this task. We'll look at real-world cases in which BXF is enabling dynamically changing delivery of content, right up to the time the viewer sees it. So, don't fear change, embrace it. Oh, and you can expect not only to save money doing this, but also to find new revenue. Presenter bio: Chris Lennon serves as President & CEO of MediAnswers. He is a veteran of over 25 years in the broadcasting business. He also serves as Standards Director for the Society of Motion Picture and Television Engineers. He is known as the father of the widely used Broadcast eXchange Format (BXF) standard. He also participates in a wide array of standards organizations, including SMPTE, SCTE, ATSC, and others. In his spare time, he's a Porsche racer and heads up a high-performance driving program.
Algorithms and Compression
Room: Salon 2
- 14:15 HEVC - Enabling commercial opportunities through next generation compression technology
- High Efficiency Video Coding (HEVC) is nearing completion by the ITU-T | ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC). The aim of HEVC is to revolutionize the compression world with a potential 50% bitrate saving over AVC (H.264 / MPEG-4 AVC), and even more dramatic bandwidth savings compared to MPEG-2. HEVC is already attracting much interest from acquisition to distribution and delivery to the home over all networks. Forecasts say 90% of IP traffic will be video by 2015, making HEVC an attractive enabler for new types of video consumption, from mobile devices served over unmanaged networks to high-end 4K TV to the home. This paper compares simulation results from the JCT-VC HEVC test model against an industry-leading AVC encoder. The paper also examines the behavior of selected HEVC tools that facilitate compression gains over AVC. Finally, it explores the significance of these efficiency gains for a variety of applications. Presenter bio: Lukasz joined Ericsson Television in 2007 and has worked on various aspects of video processing and compression research. The Ericsson Television Compression Algorithms R&D group specialises in video compression performance. The group has developed an advanced compression research model to investigate pre-processing, multipass encoding and general coding performance. Knowledge from this work has formed the foundation of Ericsson Television's real-time encoding products. Looking forward, the group is focused on researching and developing the technology to meet the future compression demands of our customers. MEng in Electronics and Telecommunications, Gdansk University of Technology, Poland. PhD Candidate, University of Surrey, UK.
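The headline savings compound in a simple way; the sketch below is just that arithmetic, using the abstract's approximate 50% generational figures and an assumed 8 Mbit/s MPEG-2 service:

```python
def hevc_bitrate(mpeg2_mbit, avc_vs_mpeg2=0.5, hevc_vs_avc=0.5):
    """Compound the per-generation bitrate ratios: if AVC needs ~50%
    of MPEG-2's rate and HEVC ~50% of AVC's, an MPEG-2 service drops
    to about a quarter of its original bitrate in HEVC."""
    return mpeg2_mbit * avc_vs_mpeg2 * hevc_vs_avc

# An 8 Mbit/s MPEG-2 SD service becomes roughly a 2 Mbit/s HEVC service.
rate = hevc_bitrate(8.0)
```

Real savings vary by content and operating point, which is exactly why the paper measures the JCT-VC test model against a production AVC encoder rather than relying on this arithmetic.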
- 14:45 Automatic Interlace or Progressive Video Discrimination
- Video content originates from a wide variety of sources. Even within one programme, several different video technologies may have been used during production. This paper discusses an algorithm that is able to reliably identify progressive and interlace frames. The algorithm is based on calculating a metric based on the degree of "interlacing artefacts" produced when adjacent fields from different frames are re-interleaved to reform a frame. The metrics are analysed over multiple frames to detect whether the material originates from a progressive or interlace source. This process has successfully been adapted to correct film-phase errors found in telecined archive material.Presenter bio: Manish Pindoria works as an Engineer at BBC Research and Development, currently focusing on digitisation and signal processing for archive applications. Prior to the BBC he designed image processing algorithms and hardware (ASIC, FPGA) for products including broadcast reference monitors, 4k camera systems and medical image processing units for Sony Broadcast and Professional Research Labs (BPRL). Manish holds a masters degree in Engineering Science from Oxford University. He is the co-inventor of 6 patents.
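The re-interleaving idea in this abstract can be approximated in a few lines. The comb measure and threshold below are simplified stand-ins for illustration, not the BBC's actual algorithm: adjacent rows come from opposite fields, rows two apart stay within a field, and inter-field motion inflates the former relative to the latter.

```python
def comb_metric(frame):
    """frame: 2D list of luma rows. Ratio of adjacent-row differences
    (which cross fields in woven material) to same-field differences
    (rows two apart). Inter-field motion drives the ratio up."""
    inter = intra = 0
    for y in range(len(frame) - 2):
        for x in range(len(frame[0])):
            inter += abs(frame[y][x] - frame[y + 1][x])  # opposite fields
            intra += abs(frame[y][x] - frame[y + 2][x])  # same field
    return inter / max(intra, 1)

def classify(frames, threshold=1.5):
    """Average the metric over multiple frames, as the abstract
    describes, before deciding on the source type."""
    avg = sum(comb_metric(f) for f in frames) / len(frames)
    return "interlaced" if avg > threshold else "progressive"
```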
- 15:15 Spatial Concealment for Damaged Images Using H.264/AVC Intra Prediction and Neighborhood Cliques
- Intra prediction methods, introduced by H.264/AVC and furthered by HEVC, beyond enhancing intra coding efficiency also provide a potent tool for spatial error/loss concealment and digital film restoration (e.g. scratches). In this paper, a novel H.264/AVC intra-prediction-based algorithm for spatial concealment is introduced. The proposed algorithm utilizes reliable intra prediction direction information from available neighboring regions in the same image (or video frame) and synthesizes intra prediction directions most suitable for erroneous/lost/damaged image regions. The synthesized intra prediction directions are used to conceal the underlying artifacts through pixel-domain interpolation. Distinguishing features of the current work contributing to its success are its use of (a) an accuracy assessment and consequent weighting of available neighbors' information, and (b) conditional propagation of available neighbors' information based on the concept of 'neighborhood cliques'. Both features significantly improve the reliability of interpolation results. The proposed framework enables using both causal and non-causal information. Presenter bio: Seyfullah Halit Oguz received his B.Sc. (1987) and M.Sc. (1990) degrees in Electrical and Electronics Engineering respectively from the Middle East Technical University and Bilkent University in Ankara, Turkey. He received his Ph.D. degree in Electrical Engineering in 1999 from the Electrical and Computer Engineering Department of the University of Wisconsin – Madison. In his engineering career prior to joining Qualcomm Inc., Dr. Oguz worked for Los Alamos National Laboratory (Group NIS-1), EMC Corporation and Sand Video Incorporated. He joined Qualcomm Inc. in July 2003, initially taking part in the MediaFLO project. Currently, Dr. Oguz is a Senior Staff Engineer on the Multimedia Team in the Strategic IP Division of Qualcomm Inc. Dr. Oguz is the author/co-author of 24 refereed journal and conference papers, and has served as a reviewer for many prominent journals and conferences. He holds 15 granted US patents. Dr. Oguz is a member of SMPTE, IEEE and ACM.
Tuesday, October 23, 15:45 - 16:15
Room: Exhibit Hall
Tuesday, October 23, 16:15 - 17:45
The Unexpected in File-Based Workflows
Room: Salon 1
- 16:15 High-speed format converter with intelligent quality checker for file-based system
- NHK is shifting its TV production and playout systems, including VTRs and editing machines, to file-based operation. A variety of codecs and Material eXchange Format (MXF) formats have been adopted for broadcast equipment. These include MPEG-2/AVC and OP1a/OP-Atom. Video files need to be converted into the selected codec and format to operate efficiently. The quality of video and audio must be checked during this conversion process because degradation and noise may occur. This paper describes equipment that can quickly convert files to multiple formats as well as intelligently check the quality of video and audio during the conversion. The equipment automatically adjusts thresholds to detect errors in the quality check, depending on the type of codec and the spatial frequency of each area, the frame being divided into 16 sub-areas. Furthermore, this can be done in less time than the actual video length by optimizing the software processing performance. Presenter bio: Kenichiro Ichikawa received his B.S. degree from Keio University in 2002 and M.S. degree from Keio University in 2004. Following graduation, he joined NHK (Japan Broadcasting Corp.) and built his career as a video engineer through studio program production and live telecasts. He is currently involved in the development of Super Hi-Vision systems, particularly video and master control systems. He belongs to the Super Hi-Vision System Design & Development Division.
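The adaptive-threshold idea, raising the error threshold where spatial detail is high because noise is less visible in busy regions, might be sketched like this (the activity measure, base threshold, and scaling factor are assumptions for illustration, not NHK's implementation):

```python
def area_activity(area):
    """Mean absolute horizontal difference in a sub-area: a crude
    proxy for local spatial frequency."""
    diffs = [abs(row[x] - row[x + 1])
             for row in area for x in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def flag_errors(areas, residuals, base_threshold=4.0, k=0.5):
    """For each of the frame's 16 sub-areas, flag a conversion-error
    magnitude only if it exceeds a threshold scaled up by that
    sub-area's spatial activity."""
    return [r > base_threshold + k * area_activity(a)
            for a, r in zip(areas, residuals)]
```

The same residual magnitude is then flagged in a flat region but tolerated in a detailed one, which is the behavior the abstract describes.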
- 16:45 Corralling the Chaos of Ancillary Data within Multiple File Formats
- SDI-based workflows and formats were "iron clad" and well defined. Those were the good old days, when devices interconnected with ease thanks to the rigor and breadth of SMPTE standards for SDI. Nowadays, with the great flexibility of file-based media workflows and the multitude of formats needed for different applications, we are dealing with incompatible wrappers and inconsistent or non-extendable ANC data carriage. This paper will look at these evolving workflows and the resulting Wild West of files. More specifically, the paper explores the challenges faced in the handling of ANC data such as AFD, captions, ad insertion triggers, Dolby's Dialnorm, etc., within various file formats. The paper then describes the unified and extensible approach offered by SMPTE 436M for the carriage of ANC data within MXF-wrapped files. Could SMPTE 436M be the champion we need to restore order to the Wild West and corral some of the chaos? Presenter bio: Sara Kudrle is currently the Product Marketing Manager for Monitoring and Control within the Strategic Marketing group of Grass Valley, a Belden Brand. Sara received her degree in Computer Science with a minor in Mathematics from California State University, Chico. Sara's 15-plus years as an engineer in the broadcast industry started at Tektronix, where she worked at VideoTele.com. From there, she joined Continental Electronics, working within the TV Transmitter group, where she was responsible for developing exciter control software. From there she joined Miranda/NVISION and was responsible for several projects within the Router Control group. Sara has authored several papers for NAB, PBS and SMPTE conferences and has been published in the SMPTE Motion Imaging Journal and Broadcast Engineering. Sara's paper "Fingerprinting for Solving A/V Synchronization Issues within Broadcast Environments" received the 2012 Journal Award for best article.
Sara is active within SMPTE serving on several committees and within the standards community. Sara is a current SMPTE Secretary/Treasurer and former Section Manager for Sacramento as well as the Western Region Governor for SMPTE. She is also a member of IEEE.
- 17:15 And the winner is... Workflows for Judging Content Submissions at Siggraph and VES
- With the proliferation of formats and tools for media creation, providing a uniform arena in which to judge creative submissions for peer group recognition is a difficult and potentially labour intensive problem. This paper discusses a workflow and supporting software developed to support the uniform submission, judging, and display of content for the Visual Effects Society and ACM Siggraph Awards.Presenter bio: Ben started at Sohonet in 2000 and is now in charge of Sohonet's global technology programme and support engineers. Ben has developed a number of groundbreaking QC and storage solutions for the media and entertainment industries, including the electronic cinema submission process for SIGGRAPH and VES awards process for many years. Ben was previously part of the Oscar and Emmy award-winning development team at Lightworks and has worked on many research projects and written a variety of articles regarding the future of media production.
Perception and the Human Visual System
Room: Salon 2
- 16:15 Quantitative Evaluation of Human Visual Perception for Multiple Screens and Multiple CODECs
- Great consumer experiences are created by a convergence of sight, sound, and story. This paper is an in-depth quantitative analysis of the neurobiology and optics of sight. More specifically, we examine how principles of vision science can be used to predict the bit rates and video quality needed to make video on everything from smartphones to Ultra HDTV a success. We present the psychophysical concepts of simple acuity, hyperacuity, and Snellen acuity to examine the visibility of compression artifacts for MPEG-2 and MPEG-4/H.264. We also take a look at the newest emerging international compression standard, HEVC. We investigate how the various sizes of the new compression units (CU, PU, and TU) in HEVC would be imaged on the retina, and what that could mean in terms of the HEVC video quality and bit rates we would likely need to deliver quality content to smartphones, tablets, HDTV, 4K TV, and Ultra HDTV.
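A flavor of the retinal-geometry arithmetic such an analysis rests on: the visual angle subtended by one pixel, compared against the roughly one-arcminute Snellen (20/20) acuity limit. The screen size and viewing distance below are assumed examples, not the authors' test conditions:

```python
import math

def pixel_arcmin(screen_width_m, horizontal_pixels, distance_m):
    """Visual angle, in arcminutes, subtended by one pixel of a screen
    of the given width viewed from the given distance."""
    pixel_m = screen_width_m / horizontal_pixels
    return math.degrees(2 * math.atan(pixel_m / (2 * distance_m))) * 60

# A 1 m wide screen viewed from 3 m: at 1920 pixels across, each pixel
# subtends ~0.6 arcmin (near the 20/20 limit); at 3840, ~0.3 arcmin
# (below it), so finer structure is no longer resolvable at that distance.
hd = pixel_arcmin(1.0, 1920, 3.0)
uhd = pixel_arcmin(1.0, 3840, 3.0)
```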
- 16:45 Perceptual Signal Coding for More Efficient Usage of Bit Codes
- As the performance of electronic display systems continues to increase, the limitations of current signal coding methods become more and more apparent. With bit-depth limitations set by industry-standard interfaces, a more efficient coding system is desired to allow image quality to increase without requiring expansion of legacy infrastructure bandwidth. A good approach to this problem is to let the human visual system determine the quantization curve used to encode video signals. In this way optimal efficiency is maintained across the luminance range of interest, and the visibility of quantization artifacts is kept to a uniformly small level. Presenter bio: Scott Miller is a senior member of the technical staff at Dolby Laboratories, where he serves in the Imaging Research group. He specializes in image display technology and video signal processing, most recently working on Dolby's Professional Reference Monitor. He received a B.S. in electrical engineering from Cornell University and has spent nearly 30 years working in the video industry, including several years with Panasonic Research, where he helped develop the Emmy award-winning Universal Video Format Converter.
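The principle of letting visual sensitivity shape the quantization curve can be illustrated with a toy log-spaced encoding, where every adjacent pair of code values represents the same relative luminance step, a rough match to Weber-law sensitivity (the range, bit depth, and pure log shape are assumptions for illustration, not Dolby's actual curve):

```python
import math

L_MIN, L_MAX = 0.01, 10000.0  # cd/m^2, an assumed display range

def encode(L, bits=10):
    """Map luminance to a code value on a log curve, so code spacing
    tracks relative (Weber-like) rather than absolute luminance steps."""
    n = (1 << bits) - 1
    v = math.log(L / L_MIN) / math.log(L_MAX / L_MIN)
    return round(v * n)

def decode(code, bits=10):
    """Inverse mapping: code value back to luminance."""
    n = (1 << bits) - 1
    return L_MIN * (L_MAX / L_MIN) ** (code / n)

# The relative step between adjacent codes is constant over the whole
# range (~1.4% per code at 10 bits), instead of quantization being
# coarse in shadows and wastefully fine in highlights.
step = decode(513) / decode(512) - 1
```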
- 17:15 Human Perception & Advancements in File-Based Quality Control
- When video pictures are retrieved from digital videotape or film prints, artifacts are often introduced that are difficult to detect without the human visual system, since many of these artifacts do not have a common, mathematically definable pattern. These artifacts can include film tearing, film dirt, analog noise, block-based digital dropouts and others. This paper covers a newly designed metric and the implementation methods used to automatically find these types of artifacts without the need for an external reference, locating artifacts in substantially the same way the human visual system does. The paper also shows the viability of this metric in a system, and how this metric is useful and cost-saving for file-based content preparers compared to existing, manual processes for content review.Presenter bio: Eric Carson has been listening to customers and building innovative products to meet their needs in the digital media test & measurement space for over 10 years. After designing and overseeing product engineering for Blu-ray physical and video test tools, Eric was directly responsible for the start-up of Digimetrics, where he oversaw the creation of the Aurora QC, Hydra Player and AutoFix correction tools. In addition to his contributions in system architecture (with two patents awarded for video artifact detection) and engineering oversight, he also took responsibility for managing the business until Digimetrics' technology was exclusively licensed to Tektronix in January 2015. Now with Dalet, Eric remains heavily involved in forward-looking customer conversations and solution architecture, and he continues to enjoy the opportunity to watch and coach baseball and softball players around the world.Presenter bio: Atul is a software engineer at DCA Inc., and is the co-author of some of the quality algorithms used in the automated QC software 'Aurora'.
Tuesday, October 23, 18:00 - 20:00
Room: Exhibit Hall
Wednesday, October 24
Wednesday, October 24, 08:00 - 09:00
Room: Ray Dolby Ballroom Terrace
Wednesday, October 24, 09:00 - 10:30
New Regulations and Implementations
Room: Salon 1
- 09:00 Compliance with FCC Rules for IP Distribution of Video Programming
- In January, 2012, the Federal Communications Commission adopted rules requiring closed captioning of IP-delivered video programming that has aired on television. The rules apply to video programming owners (i.e., copyright holders), video programming distributors (i.e., websites), and manufacturers of apparatus designed to receive or play back video programming. These rules begin to take effect on September 30, 2012. This presentation will describe the rules, compliance, and the status of SMPTE Timed Text as a "safe harbor" for compliance. It will also cover the status of ongoing accessibility initiatives at the FCC.Presenter bio: Ms. Neplokh is the Chief Engineer of the Media Bureau at the Federal Communications Commission where she advises the Bureau Chief on a variety of technology issues related to cable television, broadcast television, and cable broadband service. She also serves as the FCC Co-Chair of the Video Programming Accessibility Advisory Committee, which is charged with providing recommendations on closed captioning and video description of television programming. Prior to joining the FCC, she worked as a software engineer at a telecommunications equipment manufacturer, designing the internals of a high-speed IP router. Before that, she worked for Carnegie Mellon University in the systems development group, writing software to monitor the campus network. Ms. Neplokh has a B.S. in Electrical and Computer Engineering from Carnegie Mellon University and a J.D. from the Georgetown University Law Center.
- 09:30 Closed Captioning Challenges for IP Video Delivery
- New FCC regulations require closed captions from TV broadcasts to be available when these videos are delivered by IP. This presents a number of challenges in content authoring, asset management, and delivery. To address these challenges, SMPTE created a new specification called SMPTE 2052 (SMPTE Timed Text). This paper will discuss the new regulations and best practices for the different workflows involved, such as: file-based authoring of closed captions for broadcast and IP compatibility, translating existing CEA-608 and CEA-708 broadcast closed captions data into SMPTE 2052, common pitfalls and workarounds, and current SMPTE activities to help address these challenges.Presenter bio: Jason Livingston is a developer and product manager with CPC Closed Captioning. He is well known for providing closed captioning software solutions to the industry. His recent projects include development of captioning software with speech recognition capabilities, and implementation of the latest SMPTE and CEA closed captioning standards.
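For readers unfamiliar with the format, a caption document in the W3C TTML vocabulary that SMPTE 2052 builds on looks roughly like the fragment below (a minimal illustrative sketch; the real ST 2052-1 format adds SMPTE extensions, styling, and constraints not shown here):

```python
import xml.etree.ElementTree as ET

# A minimal caption document in the W3C TTML vocabulary. Element and
# attribute names come from TTML; SMPTE-TT layers its extensions on top.
TTML = """<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.000">Hello, captions.</p>
    </div>
  </body>
</tt>"""

root = ET.fromstring(TTML)
TT = "{http://www.w3.org/ns/ttml}"
# Each <p> cue carries its display interval and caption text
cues = [(p.get("begin"), p.get("end"), p.text) for p in root.iter(TT + "p")]
```

Translating CEA-608/708 broadcast captions into this form amounts to mapping caption commands and timing onto such timed `<p>` cues, which is where many of the workflow pitfalls the talk covers arise.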
- 10:00 Post-Deployment Considerations for use of SMPTE Timed Text
- The US Federal Communications Commission has selected SMPTE-Timed Text (SMPTE-TT; SMPTE ST2052-1) as the "Safe Harbor" format for Broadband (IP) Captioning of previously-televised content. For many content providers, the deadline to begin captioning IP-delivered content has passed and implementation is underway. This presentation provides a content provider's deployment story.Presenter bio: Craig Cuttner is senior vice president, Advanced Technology, for Home Box Office, responsible for all projects related to advanced technology architecture in the Technology Operations area. He oversees the planning of distribution technology architecture used to serve HBO's core and new business platforms, and the establishment of technical standards for new technologies of interest to the company. He was named to this position in November 2003. Previously, he was vice president, Technology. Cuttner joined HBO in 1982 as a system engineer. Cuttner has been active in HDTV since the late 1980s, contributing to many aspects of HDTV industry-wide. He has also been involved in strategic work since the mid-1990s on video on demand. He was named a Fellow of the Society of Motion Picture and Television Engineers in 2000, is a member of the Society of Cable Telecommunications Engineers Engineering Committee, chairs SCTE Digital Video Subcommittee Working Group 1 on Encoding, and also chairs the National Academy of Television Arts and Sciences Technical Emmy Committee. Cuttner has over one dozen patents and patents pending. Cuttner holds a BS degree in Industrial Management from Georgia Tech.
Room: Salon 2
- 09:00 Leveraging Fiber Properties to Our Advantage
- A strand of optical fiber is inherently thin, flexible, and lightweight. How can we leverage these properties to improve the fiber installation process and make it easier to adapt to changing facility needs? A new infrastructure/installation technology called "Air Blown Fiber Infrastructure" facilitates this approach. Using a point-to-point network of high density tubes as a highway, 3,000 ft. of 24-strand fiber can be blown (installed) from source to destination across a facility in just 30 minutes. Once the tube network is in place, changes can be made at a fraction of the time and cost of conventional fiber networks, without disruption to the network or the facility. Technical discussions include: • What ABF looks like and the science • Design Considerations (intra-building, campus) • Tube Bundle Specifications and Limitations • Fiber Bundle Installation Considerations and Options • Jetting Specifications, Limitations and Testing • Termination OptionsPresenter bio: John brings a wealth of knowledge to Fiber Core Networks as Director of Operations. His unique understanding of broadcast, cable, and communication systems was gained over the last 30 years working with such industry leaders as Comcast, CNN, and Turner Entertainment Networks. During his career he has held positions as CATV Technical Engineer, Broadcast Design Engineer, Manager of Headends, Director of Engineering, and now Director of Operations. It is this experience that allows him to work with clients to develop the ideal solution for both their enterprise infrastructure strategies as well as individual systems design.
- 09:30 Trends in Wireless High-Bandwidth Display Technology
- The newest generation of high-resolution digital display interfaces now has a new face: Wireless connectivity. Several systems, including generic WiFi (802.11)-based products, have already come to market. Two of them - 6 GHz wireless high-definition interface (WHDI) and 60 GHz wireless HD (WiHD) - are competing head to head in the consumer electronics space, while a wide-channel, short range implementation (ultra wideband, or UWB) is also making headway. All of these systems support full bandwidth HDMI and DisplayPort signals (10 Gb/s) with low latency, making them attractive as well for 3G camera-to-monitor links for field video production. This paper will describe each system and explain their advantages and disadvantages, as well as the differences between them. (A WHDI link can also be used to run the presentation at the conference.)Presenter bio: Pete Putman is a technology consultant to Kramer Electronics USA; engaged in product development and testing, technology training, and educational marketing programs. Pete is also a contributing editor for Sound and Communications magazine, the leading trade publication for commercial AV systems integrators. He publishes HDTVexpert.com, a Web blog focused on HDTV, digital media, wireless, and display technologies. Pete holds a Bachelor of Arts degree in Communications from Seton Hall University, and a Master of Science degree in Television and Film from Syracuse University. He is an InfoComm Senior Academy Instructor for the International Communications Industries Association (ICIA), and was named ICIA's Educator of the Year for 2008. He is a member of both The Society of Motion Picture and Television Engineers (SMPTE) and Society for Information Display (SID).
- 10:00 Next-Generation Techniques for the Protection and Security of IP Transport
- Few in the professional video community foresaw IP's rapid ascent to its position as a, if not the, dominant video transport protocol. To many, IP lacks the control and protection so critical to video networking. While today's IP network infrastructure, driven by the speed and capacity requirements of data centers and cloud-based services, is now capable of carrying professional video in a controlled, usable manner, significant concerns remain for the best way to control, monitor and protect services in wide area routed networks. This paper will focus on recently-developed techniques for real-time data flow protection now undergoing trials and initial deployment, including: delay offset launch network stream feeding for dual-path protection (enabling simultaneous network hits on dual-path connectivity); single path protection using control techniques for dynamic end-to-end movement of data buffering (bandwidth savings and effective dual-path protection); and RTP source coherence across multiple sources for seamless source and destination protection.Presenter bio: Chin Chye Koh holds a Ph.D. and M.Sc. in Electrical and Computer Engineering from the University of California Santa Barbara for work on the perception of visual quality in relation to image and video compression. He received his B.Sc. degree in Electrical Engineering from Washington State University. As Senior Solutions Architect at Nevion USA, he has responsibility for the development of system solutions primarily focused on contribution video transport in managed media networks. Prior to his position as Solutions Architect, Dr. Koh was Product Manager for the Ventura line of modular video transport solutions and before that, Member of Technical Staff responsible for algorithm research and development for video compression and transport solutions. His post-graduate work included positions at Intel Corporation in Arizona and Philips Research in The Netherlands. Dr. 
Koh was also a research and development engineer at Pepperl+Fuchs, Singapore, where he developed sensor modules for factory automation.
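A dual-path protection scheme of the kind described can be sketched as a sequence-number merge: the receiver accepts each packet from whichever path delivers it, so a loss on one path is filled from the other. (An illustrative toy assuming `(seq, payload)` tuples rather than real RTP headers, and ignoring the buffering and timing control the paper actually addresses.)

```python
def merge_dual_path(primary, backup):
    """Hitless dual-path merge: keep the first copy of each sequence
    number seen, drawing from either path's packet list."""
    seen = {}
    for seq, payload in primary + backup:
        seen.setdefault(seq, payload)       # duplicates are discarded
    return [seen[s] for s in sorted(seen)]  # reassemble in order

# Packet 2 is lost on the primary path but survives on the backup
primary = [(1, "a"), (3, "c")]
backup = [(1, "a"), (2, "b"), (3, "c")]
stream = merge_dual_path(primary, backup)  # -> ["a", "b", "c"]
```

The bandwidth-saving single-path variants mentioned in the abstract trade this full duplication for retransmission from a dynamically placed buffer, but the receiver-side reassembly logic is conceptually the same.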
3D Production
Room: Theatre: Chinese 6
- 09:00 Unconstrained 2D to Stereoscopic 3D Image and Video Conversion using Semi-Automatic Energy Minimization Techniques
- We present a method for semi-automatically converting unconstrained 2D images and videos into stereoscopic 3D. The user defines strokes over the image, or over several keyframes, corresponding to a rough estimate of the scene depths. The remaining depths are then solved for, producing depth maps used to create stereoscopic 3D content. For video, to minimize effort, only the first frame is labeled; the labels are propagated over all frames by a robust tracking algorithm. Our work combines the merits of two energy minimization techniques: Graph Cuts and Random Walks. Current efforts rely on either fully automatic conversion or manual conversion by rotoscopers. The former prohibits user intervention and error correction, while the latter is time consuming and out of reach for smaller studios. A semi-automatic approach is a compromise that allows faster, more accurate conversion, decreasing the time for studios to release 3D content. Results demonstrate good-quality stereoscopic image and video creation with minimal effort.Presenter bio: Raymond Phan is a Ph.D. candidate with the Department of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, Canada. He obtained his Bachelor of Engineering in Computer Engineering (2006) and his Master of Applied Science in Electrical and Computer Engineering (2008), both from Ryerson. Ray's research interests include computer vision, image processing, stereo vision, 2D to 3D conversion and 3DTV. In 2008, Raymond received the Ryerson University Gold Medal - the highest accolade a graduating student from Ryerson can receive, recognizing significant volunteer contributions made to the university and to their department and program. In 2010, Raymond was awarded the Natural Sciences and Engineering Research Council of Canada (NSERC) Vanier Canada Graduate Scholarship - the most prestigious award for Ph.D. study in Canada. 
Ray is also a part-time instructor with Ryerson, as well as serving as a volunteer, chair and co-chair on many academic and university committees.
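The stroke-propagation step can be illustrated in one dimension (a drastically simplified stand-in for the paper's Graph Cuts / Random Walks machinery): unlabeled pixels relax toward the average of their neighbours, which is the harmonic solution a random-walker formulation converges to.

```python
def propagate_depths(labels, iterations=2000):
    """Fill unlabeled positions (None) on a scanline by harmonic
    interpolation: each unknown relaxes to the mean of its neighbours,
    a 1-D analogue of Random Walks label propagation."""
    depth = [d if d is not None else 0.0 for d in labels]
    known = [d is not None for d in labels]
    for _ in range(iterations):
        for i in range(len(depth)):
            if not known[i]:
                left = depth[i - 1] if i > 0 else depth[i + 1]
                right = depth[i + 1] if i < len(depth) - 1 else depth[i - 1]
                depth[i] = 0.5 * (left + right)
    return depth

# Strokes assign depth 0.0 (far) and 1.0 (near); the rest is solved
d = propagate_depths([0.0, None, None, None, 1.0])
# converges to the linear ramp [0.0, 0.25, 0.5, 0.75, 1.0]
```

In 2-D the same idea becomes a sparse linear system over the image graph, with edge weights derived from color similarity so depth discontinuities follow object boundaries rather than spreading uniformly as in this toy.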
- 09:30 Image Enhancement Using Similarity-based Color Matching for High-quality Stereoscopic 3D Image Acquisition
- Stereoscopic three-dimensional (S3D) movies often suffer from inconsistency between the left and right images acquired by a stereo camera due to unstable filming environments. This research introduces a novel image enhancement algorithm using similarity-based color matching of S3D images. The proposed algorithm first partitions both reference and target images into multiple sub-blocks, and decomposes them into reflection and illumination components using retinex theory. Color correction is performed by matching histograms of a corresponding pair of blocks based on the structural similarity index measure (SSIM). The color-corrected images are finally enhanced by removing noise using a priori trained dictionary-based patches. We can make high-quality S3D images from imperfect input images acquired under critical conditions including limited dynamic range, unstable calibration of stereo camera pairs, and low signal-to-noise ratio (SNR). The proposed method can be applied to high-quality panorama images, frame difference-based video tracking, and similarity-based image analysis.Presenter bio: Wonseok Kang was born in Jeju, Korea in 1983. He received the B.S. degree in electronic engineering from Korea Aerospace University, Korea, in 2010. Currently, he is pursuing an M.S. degree in image processing at Chung-Ang University. His research interests include image restoration, computational cameras and data fusion.
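The histogram-matching step at the heart of such color correction can be sketched as follows (a per-channel toy on integer pixel values; the paper additionally selects block pairs by SSIM and operates on retinex-decomposed components rather than raw pixels):

```python
def match_histogram(target, reference, levels=256):
    """Classic histogram specification: remap target values so their
    cumulative histogram matches the reference's."""
    def cdf(values):
        hist = [0] * levels
        for v in values:
            hist[v] += 1
        total, out = 0, []
        for h in hist:
            total += h
            out.append(total / len(values))
        return out

    ct, cr = cdf(target), cdf(reference)
    # For each target level, find the reference level with nearest CDF
    lut = [min(range(levels), key=lambda r: abs(cr[r] - ct[level]))
           for level in range(levels)]
    return [lut[v] for v in target]
```

Applied per color channel to a corresponding pair of left/right blocks, this pulls the target block's tonal distribution onto the reference's, which is what removes the left/right color mismatch the abstract describes.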
- 10:00 3D Production Issues during the London 2012 Olympic Games
- For the first time, the Olympics were telecast in 3D. In the past, some 3D coverage of limited events had been available on a closed-circuit basis. The London Olympics 3D Channel covered multiple sports, with both live and ENG coverage, and provided a full up 3D channel of over 275 hours of 3D programming. The core of the 3D coverage was provided by (3) OB Van remote production units as well as (6) single-camera EFP production units. A variety of stereoscopic rigs were used in each of (4) venues alongside the Panasonic ENG/EFP P2 3D Camcorder. Some special stereo cameras were also used, including pole cameras, rail cameras, RF cameras and underwater cameras. The paper will present the unique challenges of providing 3D coverage, from organizing the 3D channel to the technical challenge of covering sports in 3D while accommodating the full up 2D production, and, finally, what worked and what did not.Presenter bio: Jim has worked in radio and television broadcasting for over 32 years, including the ABC Radio Network, the ABC Television Network, the Advanced Television Test Center, and the Atlanta Olympic Broadcast Organization. Most recently he was EVP, Digital Television Technologies and Standards, for the FOX Technology Group. At FOX he led the development of progressive camera systems to replace film for television, 480p30 video production systems (FOX Widescreen), and the FOX HD splicing system design and deployment for the FOX Network. Prior to FOX, Jim was the Head of Engineering for the 1996 Atlanta Olympic Games, where he championed the development of the first all-digital, disk-based, super slow-motion camera/recording system (Panasonic/EVS). Jim has worked with the Olympic Host Broadcaster since 1993, including the Atlanta (1996), Sydney (2000), Torino (2006) and London (2012) games, assisting with the technical production and distribution of the Olympic 3D TV channel. 
He attended the School of Engineering at Columbia University in the City of New York where he attained his Bachelor of Science in Electrical Engineering in 1980 and his Masters of Science in Electrical Engineering in 1990. Jim is a Fellow of the SMPTE and is involved in standards development at SMPTE, the International Telecommunications Union, and the ATSC including work on RP 85 Audio Loudness Control for DTV. He has received two Technical Emmy awards for his work at the ATTC in the Development of the ATSC standard and for the FOX HD Splicing System. Jim lives in Pacific Palisades, CA with his wife, Maggie and two teenage children, Jake and Juliana.
Wednesday, October 24, 10:30 - 18:30
Room: Exhibit Hall
Wednesday, October 24, 10:30 - 11:00
Room: Exhibit Hall
Wednesday, October 24, 11:00 - 12:30
Acquisition
Room: Salon 2
- 11:00 High Performance Optics for a New 70mm Digital Cine Format
- This paper details technical features of the first series of high-speed prime lenses specifically designed for a new 70mm digital cine format. These new lenses offer full aperture (f/2.5) performance at or near the diffraction limit from near-UV to near-IR over a 48mm x 20.25mm image area. Additionally, these lenses are designed to work properly with optical filters inserted between the lens and sensor. Twelve focal lengths are under development, ranging from a 27mm ultra-wide to a 300mm telephoto. In addition to traditional externally geared controls, all lenses have internal motors for focus and aperture. High-resolution metadata is continuously transmitted to the camera, including focus distance, aperture, temperature, and individual lens identification. A replaceable internal filter near the aperture stop permits a wide range of creative effects, including soft-focus.Presenter bio: Dr. Caldwell has been a professional lens designer since 1985, and has completed more than 500 design projects. More than 100 of these designs have been fabricated, ranging from mass-market camera lenses to ultra-high performance zoom and reconnaissance lenses. He founded Caldwell Photographic Inc. in 2001, and is actively involved in optical product development and manufacture in addition to lens design. Areas of particular interest include broadband UV-VIS-IR optics and high-performance large aperture optics. Recently developed products include the 60mm UV-VIS-IR lens currently licensed for manufacture to Coastal Optical Systems, and a new 120mm UV-IR lens manufactured by Caldwell Photographic. Dr. Caldwell has worked as a consultant and contractor to Panavision for more than 12 years.
- 11:30 Focusing on Lens Metadata
- Motion pictures are increasingly created from a combination of real and virtual subjects. This involves the creation of a "virtual" camera that must closely emulate the camera behavior during principal photography. The use of highly dynamic camera moves during a shot has made it nearly impossible to deduce the lens settings and characteristics from the images and written notes. It is now both possible and necessary to record the status of the taking lens on a frame-by-frame basis. Modern post-production workflow, especially for stereoscopic 3D, increasingly demands an accurate record of the lens settings to facilitate compositing. This paper will discuss the following: - the objectives and advantages of having these data - the current state of the art of lens metadata in the industry - techniques for acquiring, preserving, disseminating and using these data - whether a standard or RP is desirable and achievablePresenter bio: From his student filmmaker days in London through industrial design work to his founding role in industry technical organizations, Visual Effects Society Fellow Jonathan Erland has been engaged in both the dramatic and technical sides of the story-telling process for over 50 years. A member of the Star Wars VFX crew, he has six patents and four Academy Awards for innovative technologies. A Life Fellow of SMPTE, he has authored 20 papers, served as Program Chair, and received the Journal Award and Fuji Gold Medal. At AMPAS, he has served as a Governor, establishing Visual Effects as a branch. He's also a member of the Science and Technology Council, Scientific and Engineering Awards and numerous other committees. He's received an Academy Commendation for "solving High-Speed Emulsion Stress Syndrome in film stock" and the 2012 John A. 
Bonner Medal for "outstanding service and dedication in upholding the high standards of the Academy."Presenter bio: Ron Fischer is the Technical Director of Universal Virtual Stage 1, a green screen virtual production facility located on the NBC Universal Studios lot, but working around the world. The facility has hosted a wide variety of film, commercial and television productions including Fast Five, Battleship, Xxit, Toyota "Kingdom", etc. Ron's previous credits include virtual set and motion capture systems for Alice in Wonderland, Beowulf at Sony Imageworks, as well as work at Disney Feature Animation and Silicon Graphics.
- 12:00 Computational Photography for Dust and Scratch Detection on Transparent Photographic Material
- This work pertains to the digital restoration of motion-picture films. A new method for the automatic detection of blemishes on any kind of transparent photographic material (still and moving images, silver-based and dye-based material) is presented. It consists of an innovative combination of different illumination techniques and computational photography. The image layer is a random dispersion of microscopic elements (e.g. silver particles in b&w material) and its interaction with light is isotropic. Dust, scratches and other irregularities of the film surface produce shadows and reflections that are strongly dependent on the provenance of light. The acquisition of multiple images with different geometries of illumination, and the analysis of the differences between them, is found to be an effective method to emphasize irregularities in the film surface. Moreover, a cross-polarization technique is found to improve blemish detection. We describe in detail the experiments that shaped the method.Presenter bio: I received my BSc degree in Technologies for Conservation and Restoration of Cultural Heritage in 2004 and my MSc degree in Science for Cultural Heritage in 2009, both from the University of Florence. In 2006 I started working on various projects related to Conservation Science, Color Science and Digital Imaging, in the framework of the digitization of the Florentine museum heritage, collaborating with public and private institutions. During these years my main affiliation was with the Institute of Applied Physics (IFAC-CNR). Since 2007 I have been a member of the European group CREATE (Colour Research for European Advanced Technology Employment), which promotes and exchanges research and knowledge through a series of conferences and training courses. 
In 2010 I was selected for one of the two PhD positions in Imaging Science posted by the Imaging & Media Lab (IML) of the University of Basel, in collaboration with the Images and Visual Representation Group (IVRG) of the Ecole Polytechnique Fédérale de Lausanne (EPFL). Since September 2010, under the supervision of Prof. Rudolf Gschwind, I have been working in Basel at the Imaging & Media Lab, on the digital reconstruction and permanence of photographs and motion-picture films by digital image processing.
Interoperability through Standards
Room: Salon 1
- 11:00 W3C Timed Text Updates
- The paper will detail recent work in the W3C Timed Text Working group including updates in the Second Edition of Timed Text 1.0, as well as work in progress on the next version of Timed Text to accommodate the work of SMPTE-TT. We will also present updates on interoperability profiles and validation.Presenter bio: As part of Microsoft's Accessibility Business unit, Sean Hayes is responsible for fostering innovation and tracking and developing standards in the accessibility area. Since joining Microsoft in 2000, Sean has worked to drive towards truly open standards that allow media and software to be developed universally and available to all. He believes today's solutions only scratch the surface of what the power of technology can make possible. He was an active member of the TEITAC activity, The European M376 accessibility work and the W3C Web Accessibility Initiative. He participates actively in SMPTE 24B and a number of W3C groups and is currently chair of the W3C Timed Text Working Group. Before Joining the Accessibility group, Sean spent his first 5 years at Microsoft in the Digital Media division, working on Digital Television and HD DVD. Prior to joining Microsoft, Sean spent 11 years at Hewlett Packard Research Labs, and dedicated five years to the digital media department studying advanced video techniques, including 3D video sprites and models for flexible storytelling using fuzzy logic. He eventually became involved in the DVB standards body, which ultimately led to his role at Microsoft. Sean holds a bachelor's of science in computer science from the University of London.
- 11:20 SMPTE Timed Text: Update from the 24TB Captions Ad Hoc Group
- The US Federal Communications Commission has selected SMPTE-Timed Text (SMPTE-TT; SMPTE ST2052-1) as the "Safe Harbor" format for Broadband (IP) Captioning of previously-televised content. This presentation provides a status report on the work of the SMPTE 24TB Captions Ad Hoc Group as well as the status of items that may be of interest to IP Captioning users.Presenter bio: Craig Cuttner is senior vice president, Advanced Technology, for Home Box Office, responsible for all projects related to advanced technology architecture in the Technology Operations area. He oversees the planning of distribution technology architecture used to serve HBO's core and new business platforms, and the establishment of technical standards for new technologies of interest to the company. He was named to this position in November 2003. Previously, he was vice president, Technology. Cuttner joined HBO in 1982 as a system engineer. Cuttner has been active in HDTV since the late 1980s, contributing to many aspects of HDTV industry-wide. He has also been involved in strategic work since the mid 1990's on video on demand. He was named a Fellow in the Society of Motion Picture and Television Engineers in 2000, is a member of the Society of Cable Telecommunications Engineers Engineering Committee and is chair of SCTE Digital Video Subcommittee Working Group 1 on Encoding and also the National Academy of Television Arts and Sciences Technical Emmy Committee. Cuttner has over one dozen patents and patents pending. Cuttner holds a BS degree in Industrial Management from Georgia Tech.
- 11:40 SMPTE Timed Text in the UltraViolet Common File Format
- SMPTE Timed Text has found its way into various electronic media delivery formats. One is the UltraViolet Common File Format (CFF) for use both as subtitles and closed captioning. This presentation will provide a background on the underlying technology, including W3C Timed Text and SMPTE Timed Text, and then focus on the extensions and constraints developed by UltraViolet.Presenter bio: Michael A Dolan is founder and president of Television Broadcast Technology, providing specialized professional encoders, test tools, and technical consulting in the field of digital television. He holds a BSEE degree from Virginia Tech '79 and has worked for and founded various leading edge computer graphics and real time systems companies since then, including early foundational work in W3C technology and analog data broadcasting. Mr. Dolan has been involved in digital television engineering for many years, including data broadcast system architecture, digital receiver design and compliance. He also currently chairs the ATSC Data Broadcasting Specialist Group (TSG/S13), co-chairs the CEA Working Groups on Digital Closed Captioning (R4SC3WG1) and Internet Captions (R4SC3WG15), co-chairs the SMPTE Committee on File Formats and Systems (31FS), co-chairs the DECE/Ultraviolet Technical Working Group, and is active in SCTE and W3C. Mr. Dolan is an SMPTE Fellow, a former SMPTE Governor for the Hollywood Region, authors the SMPTE Journal Almanac column, and holds several patents in computer web technology.
- 12:00 CE Device Implementation of SMPTE Timed Text: Navigating to the "Safe Harbor"
- In January, the FCC released a report and order on IP closed captioning to support provisions of the 21st Century Communications and Video Accessibility Act of 2010. The order places requirements on consumer video player devices regarding their ability to decode and present closed captioning in IP-delivered content. Devices which implement a SMPTE Timed Text decoder would be deemed to be in compliance with the new rules. This presentation explores the implications of the FCC's action on the implementation of consumer video players. CE manufacturers have been working in CEA to establish industry guidelines designed to establish a consistent framework for the implementation of SMPTE-TT. The end result is envisioned as being industry agreement on the definition of a standard "video player" that can decode and render captioned video from files or streams. The presentation will provide an update on the work and describe decisions and approaches agreed to date.Presenter bio: Mark K. Eyer is currently Director of Systems for the Technology Standards Office of Sony Electronics. He graduated Cum Laude with a B.S. degree from the University of Washington in 1973 and received an MSEE degree in 1978 from the same institution. For the past thirty years, Mark has been involved with the development of technologies and products related to secure and digital television. Mr. Eyer is the recipient of a variety of industry awards for excellence in standards. Mr. Eyer represents Sony in various standards committees in the US and contributes systems engineering expertise to development of Sony's digital consumer electronics products.
Wednesday, October 24, 12:30 - 14:00
Room: Mount Olympus Room-3rd Floor of Loews Hollywood Hotel
- 12:30 Peter Owen
- Chairman, IBC CouncilPresenter bio: Since 2002 Peter Owen has chaired the IBC Council, a group of senior individuals drawn from broadcast-related disciplines which acts as a sounding board for the Conference and Exhibition. It also assists in formulating the conference agenda and attracting leading industry contributors to the event. Trained as an electronics engineer in the mid-1960s and introduced to analogue television technology at EMI Broadcast Equipment Division, he converted to digital television technology with a move to the IBA (Independent Broadcasting Authority) R&D labs, where he worked with the team that produced the world's first digital standards converter. In 1974 Peter was one of the founding members of Quantel, where he stayed until his retirement from the supplier side in 2000. During his time at Quantel, Peter occupied many roles, ranging from Head of Broadcast to Director of Engineering, with duties that included a close relationship with SMPTE standards groups and SMPTE conferences.
Wednesday, October 24, 14:15 - 15:45
Color ManagementRoom: Salon 2
- 14:15 Towards Higher Dimensionality in Cinema Color: Multispectral Video Systems
- The digital transition being experienced by the motion picture industry has afforded an increase in dimensionality in time and space; however, comparatively little effort has been put into expanding color. All practical motion imaging systems continue to rely on metamerism, wherein a 3-channel signal is sufficient to reproduce the color of real objects regardless of higher-order spectral composition. Such treatments restrict cinema color, limiting absolute color accuracy and gamut and exacerbating observer metamerism. Multiprimary reproduction focused on spectral accuracy or metamerism reduction may prove a better answer to enhancing the color experience. It also promises to open new color management paradigms for visual effects compositing of live action and CGI, or for virtual cinematography. The proposed talk summarizes past and present research in multispectral video. Preliminary results from the design and construction of an abridged multispectral video capture and display system will also be presented.Presenter bio: David Long joined the faculty of the School of Film and Animation at Rochester Institute of Technology in 2007, where he is currently Program Chair and Associate Professor for the BS Motion Picture Science program. His research interests focus on color science and multispectral imaging. Prior to RIT, Long worked as a Development Engineer and Imaging Scientist with Kodak's Entertainment Imaging Division. At Kodak, his primary responsibilities included new product development and image science and systems integration for the motion picture group, focusing on film and hybrid imaging products. Long contributed to the design and commercialization of the Vision2 family of color negative films, as well as several digital and hybrid imaging products for television and feature film post-production.
His work has earned him numerous patents and a 2008 Scientific & Technical Academy Award for contributions made to the design of Vision2 films. Long has a BS in Chemical Engineering from the University of Texas at Austin and an MS in Materials Science from the University of Rochester.
- 14:45 Issues in Color Matching
- To create a numerical description of color (e.g., X, Y, Z), one applies a Color Matching Function to spectral power distribution data acquired with an instrument such as a spectroradiometer. All the adjustments one makes to a video display or to video data depend on the accuracy of these numbers. Since 1931, the broadcast industry and others for whom color fidelity is crucial have depended on the 1931 CIE Color Matching Function (CMF). Recent (and continuing) advances in display technology, however, have exposed serious deficiencies in this CMF. These deficiencies have long been known to academic researchers, who have in the intervening years proposed several alternative CMFs. This paper reviews the critical flaws that render the 1931 CMF no longer reliable, and surveys the strengths and weaknesses of candidates for its replacement.Presenter bio: A veteran of the software industry, Joel spent his first several years working at a graphic design studio managing color-critical workstations. He then began writing his own software to calibrate PCs and home theater computers, and was subsequently hired by SpectraCal. In his current position as SpectraCal's Director of Software Development, he has presided over the development of one of the most sophisticated color management packages available. The research he has done developing this engine makes him uniquely suited to discussing color matching functions and their role in video calibration.
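The measurement step this abstract describes (applying a CMF to spectral power distribution data to obtain X, Y, Z) can be sketched as a simple numerical integration. The SPD and CMF samples below are illustrative toy values, not real CIE 1931 tables.

```python
# Tristimulus integration: each of X, Y, Z is the wavelength-by-wavelength
# product of a measured SPD with one CMF curve, summed over the visible range.
# The SPD and CMF tables below are illustrative toy values, NOT real CIE data.

def tristimulus(spd, cmf, d_lambda):
    """Sum the SPD against each of the three CMF channels (rectangle rule)."""
    x = sum(p * bar[0] for p, bar in zip(spd, cmf)) * d_lambda
    y = sum(p * bar[1] for p, bar in zip(spd, cmf)) * d_lambda
    z = sum(p * bar[2] for p, bar in zip(spd, cmf)) * d_lambda
    return x, y, z

# Toy 5-sample spectrum at a 10 nm spacing (e.g. 500..540 nm).
spd = [0.2, 0.5, 0.9, 0.5, 0.2]                      # relative power
cmf = [(0.0, 0.3, 0.2), (0.1, 0.5, 0.1),             # (xbar, ybar, zbar)
       (0.2, 0.7, 0.0), (0.4, 0.6, 0.0),
       (0.6, 0.4, 0.0)]

X, Y, Z = tristimulus(spd, cmf, d_lambda=10.0)
```

Any replacement CMF slots into the same computation unchanged; only the table of (xbar, ybar, zbar) samples differs, which is why the choice of CMF silently affects every downstream calibration number.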
- 15:15 Accurate ACES Rendering in Systems Using Small 3DLUTs
- The ACES color space has unlimited dynamic range; however, it is difficult to implement an ACES workflow with the grading systems currently in use. To this end, we propose custom Log ACES and High Saturated Log ACES (HSLA) methods. Custom Log ACES can process negative ACES values and can handle high dynamic range. HSLA expands the ACES color space to reduce vacant areas and spread the real color data area, in order to use 3D LUTs more effectively. These two methods drastically improve the accuracy of color reproduction even if the post-production system supports only a highly limited dimension or a very small number of lattice points for its 3D LUTs.Presenter bio: After graduating from Tokyo Institute of Technology, Mitsuhiro Uchida joined Fujifilm and began research and development of color photographic film. He later expanded his work to film cameras, digital photo printers, and digital cameras. He joined the AMPAS IIF committee in 2009; since then, he and his team have contributed to AMPAS-IIF activity, attending IIF meetings in Hollywood every two months from Japan. He has contributed to a wide spectrum of ACES work, especially RRT development. Currently, while contributing to the Log ACES standard, he is working to bring Fujifilm's new product, CCBOXX, to market.
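The accuracy problem this abstract addresses comes down to how values between lattice points of a small 3D LUT are interpolated. Below is a minimal sketch of trilinear lookup through a toy 2-point-per-axis identity lattice; real grading LUTs typically use 17, 33, or 65 points per axis, and none of this reflects the authors' actual Log ACES encoding.

```python
# Trilinear interpolation through a 3D LUT stored as lut[r][g][b] -> (R, G, B).
# A 2x2x2 identity lattice is used here purely for illustration.

def trilinear(lut, n, rgb):
    """Look up an RGB triple (components in [0, 1]) in an n-point-per-axis LUT."""
    # Scale each component to lattice coordinates and find the enclosing cell.
    pos = [c * (n - 1) for c in rgb]
    i0 = [min(int(p), n - 2) for p in pos]
    f = [p - i for p, i in zip(pos, i0)]
    out = []
    for ch in range(3):  # interpolate each output channel separately
        acc = 0.0
        for dr in (0, 1):
            for dg in (0, 1):
                for db in (0, 1):
                    w = ((f[0] if dr else 1 - f[0]) *
                         (f[1] if dg else 1 - f[1]) *
                         (f[2] if db else 1 - f[2]))
                    acc += w * lut[i0[0] + dr][i0[1] + dg][i0[2] + db][ch]
        out.append(acc)
    return tuple(out)

# 2x2x2 identity LUT: each lattice point maps to its own coordinates.
identity = [[[(r, g, b) for b in (0.0, 1.0)]
             for g in (0.0, 1.0)] for r in (0.0, 1.0)]

result = trilinear(identity, 2, (0.25, 0.5, 0.75))
```

With so few lattice points, any nonlinearity of the true transform between nodes is lost, which is why the encoding chosen for the input axes (log, expanded-saturation, etc.) matters so much for small LUTs.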
Room: Salon 1
- 14:15 The Cloud and its Potential Role in the Production and Distribution of Multi-Screen Enabled Content
- Consumer behavior, distribution channel preference, and demographics are shifting toward increased consumption via multiple connected devices. We would like to share the results of a recent digital ecosystem study, and discuss how the Cloud might factor into these trends.Presenter bio: Jason Williamson is a Specialist Master with Deloitte's Technology Strategy & Architecture consulting practice and brings over 13 years of experience in the Media & Entertainment industry, specifically in digital transformation, content distribution, content management, and production workflows. His work focuses on advising clients with digital broadcast and filmed entertainment processes on new technology and processes in the media space. Jason has also applied his expertise and knowledge of digital media operations, alongside relevant experience in data systems architecture, to design and develop solutions fit for the growing demands and complexity of content lifecycle workflows.Presenter bio: Hanish is a Senior Manager in Deloitte's Technology, Strategy and Architecture practice who is focused on the Media & Entertainment industry. He has experience in shaping, leading, and delivering successful complex technology solutions in the U.S. and internationally. With 11 years of experience, Hanish has consulted nationally and internationally for corporate clients across the Media, Telecommunications, Financial Services, Life Sciences, and High-Tech industries, as well as government departments. He has experience with large-scale transformation programs. His areas of experience include complex technology programs, IT cost reduction and governance, IT M&A, target operating models for IT organizations, vendor selections, and planning and delivery of IT-led programs. His most recent experience includes leading a number of digital media projects, with roles ranging across metadata management, test strategy for unicast distribution, technology operating models, international requirements and regulations, and post-production integration and requirements development.
- 14:45 Leveraging the Cloud for File-based Workflows
- For some time the IT world has embraced the Cloud as a vehicle to help transform the economics of business infrastructure from a cap-ex to an op-ex model. The M&E industry, however, has been slow to adopt the Cloud, partly due to issues around high-bandwidth content, security, and the still-complex nature of some M&E workflows. With the increasing role of IT in media production, management, and distribution, and with improvements in security and lower costs of bandwidth, the Cloud is beginning to make its presence known in M&E. Whether it's public, private, or hybrid, the Cloud can provide economic, efficiency, and collaborative advantages to any media organization, from a large studio to a small broadcast network. This paper will examine various aspects of the M&E workflow, including digital content and archival storage management, digital asset management, editing, transcoding, DRM, and distribution, and their suitability for the Cloud.Presenter bio: Ron joined Verizon in 2009 as an Industry Partner in their Media & Entertainment practice before moving into his current role as Managing Principal in New Business Incubation. Prior to joining Verizon, Ron spent 20+ years in management roles with companies such as RKO, Arbitron, Sun Microsystems, and Ascent Media. His areas of expertise include business development, strategic planning, P&L management, sales/sales management, solutions development, and consulting. Ron has spent the past 10 years focused on digital media workflows and technologies. He has been published in both the general and trade press, including the NY Times, NY Daily News, Barrons, Broadcasting & Cable, and Broadcast Engineering.
- 15:15 A Cloudspotter's Guide to Migration
- Cloud adoption is growing at a 22% annual rate. SaaS apps revenue will reach $258 billion in 2020 (Forrester Research). This train is unstoppable. The economic and systems benefits are compelling and being leveraged by media companies worldwide. What does facility migration to the cloud involve? What low-hanging fruit can migrate now? What are the tradeoffs? This talk is a short tutorial on cloud basics with tips on migration. Aspects of architecture, application delivery, economics, open systems, reliability, QoS and security are considered. If you are a cloudspotter, this talk is for you.Presenter bio: Al Kovalick has worked in the field of hybrid AV+IT systems for the past 20 years. Previously, he was a digital systems designer and technical strategist for Hewlett-Packard. While at HP, he was a principal researcher and architect for a new product-class of signal synthesizer. He was also the principal architect of HP’s first VOD server. Following HP, from 1999 to 2005, Al was the CTO of Pinnacle Systems. After Avid acquired Pinnacle, Al served as an Enterprise Strategist and Fellow for six years. In 2011, Al founded Media Systems Consulting in Silicon Valley. His work focuses on all aspects of networked media systems, file-based workflows and cloud migration for media facilities. Al is an active speaker, educator, author and participant with industry bodies including SMPTE. He has presented over 50 papers at industry conferences worldwide and holds 18 US and foreign patents. In 2009 Al was awarded the David Sarnoff Medal from SMPTE for engineering achievement. Al has a BSEE degree from San Jose State University and MSEE degree from the University of California at Berkeley. He is a life member of Tau Beta Pi and a SMPTE Fellow. Al writes the Cloudspotter's Journal column for TV Technology magazine.
Wednesday, October 24, 15:45 - 16:15
Room: Exhibit Hall
Wednesday, October 24, 16:15 - 17:45
FinishingRoom: Salon 2
- 16:15 The Unfolding Merger of Television and Movie Technology
- HDTV utilizes "in-camera" or "in-switcher" rendering for live broadcast, wherein the interpretation of the scene for display happens immediately. Rendering is typically modeled upon an in-camera matrix, a gamma boost, and a highlight "knee". Such shows are usually captured, processed, transmitted, and displayed at 50 or 60 fields or frames/sec. Telecine mastering of shows interprets and renders the scenes from a film negative, but not in real time. The rendering of colors during telecine is often more of a print film emulation than a video-style process. Such shows are usually mastered at 24 frames per second, and may be shows made for television, or cinematic movies being presented on television. The industry move from film-based capture to digital camera capture has begun to bridge the gap between the television image model and the telecine film-input model. The key new ingredient bridging this gap is the Reference Rendering Transform.Presenter bio: Gary Demos is the recipient of the 2005 Gordon E. Sawyer Oscar for lifetime technical achievement from the Academy of Motion Picture Arts and Sciences. He has pioneered in the development of computer generated images for use in motion pictures, and in digital film scanning and recording. He was a founder of Digital Productions (1982-1986), Whitney-Demos Productions (1986-1988), and DemoGraFX (1988-2003). He is currently involved in digital motion picture camera technology and digital moving image compression. Gary is CEO and founder of Image Essence LLC, which is developing wide-dynamic-range codec technology based upon a combination of wavelets, optimal filters, and flowfields.
- 16:45 Theatrical versioning in the content pipe - integrating digital cinema into end-to-end workflow
- Digital cinema compression, versioning, and packaging have traditionally been a cul-de-sac process in the life of a movie, as content flows through the "pipe" to different versions and delivery formats. With a more integrated workflow and appropriate mezzanine files, the creation of digital cinema packages can become part of the flow of content from creation to all downstream deliveries. We look at processes, system architecture, and technology choices to show how this allows us to efficiently flow content from capture/creation to the final consumption point, be it the cinema, TV, tablet, or mobile, while building workflows now that are extensible to the many coming new formats such as high frame rate, object-oriented audio, and wide colour gamut.Presenter bio: Richard Welsh is co-founder and CEO of Sundog Media Toolkit Ltd. He serves on the board of the Society of Motion Picture and Television Engineers (SMPTE) as International Governor for EMEA, Central and South America. Richard has worked in the cinema industry since 1999. He has been involved in various technology development work during this time, primarily in digital cinema, and is named on patents in the area of visual perception of 3D. He started cloud software company Sundog in 2013, which specialises in scalable post-production software tools aimed at high-end broadcast and movie productions. Prior to Sundog, Richard worked at Dolby Laboratories, where he held various positions including Film Sound Consultant, Mastering Engineer, and Director of Digital Cinema. Subsequently he was Head of Digital Cinema Operations at Technicolor. Richard holds a BSc (Hons) in Media Technology and an honorary Doctor of Technology degree from Southampton Solent University.
- 17:15 Low-latency transmissions for remote collaboration in post-production
- Post-production often involves several key parties who want to be in control of the process - the director, the producer, the editor - and several technical experts - the colorist, the stereographer, the sound master, etc. Some decisions are more effectively made in real time, interactively. However, the participants are often very busy, working on multiple projects in parallel, and it is difficult for them to travel together for a collaborative session. We believe that future technology for low-latency, high-quality transmission of image and sound will enable remote real-time collaboration in post-production. As the capacity of optical networks increases, uncompressed transmission of original content with minimal latency will become possible. We carried out several experiments with real-time remote collaboration in color grading and stereography over distances exceeding 10,000 km between Europe and the US West Coast, using the GLIF (Global Lambda Integrated Facility) network. We describe the key technology aspects and lessons learned.
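For a sense of scale on "minimal latency" over such distances: the physical floor on one-way delay is set by light propagation in fiber. A back-of-the-envelope sketch follows; the 1.47 group index is a typical assumed figure for silica fiber, and real paths add switching, serialization, and routing detours on top of this floor.

```python
# Lower bound on one-way propagation delay over long-haul fiber.
# Assumes a typical silica-fiber group index of ~1.47 (an assumption here);
# real networks add switching, serialization, and routing detours.

C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47            # assumed refractive index of the fiber

def propagation_delay_ms(distance_km):
    """One-way light-in-fiber delay in milliseconds."""
    return distance_km / (C_VACUUM_KM_S / FIBER_INDEX) * 1000.0

delay = propagation_delay_ms(10_000)   # the Europe <-> US West Coast case
```

For a 10,000 km path this floor is roughly 49 ms one way, so an interactive grading round trip is already near 100 ms before any equipment delay is counted.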
Room: Salon 1
- 16:15 The Cloud - What does it mean for media archives?
- 16:45 Cloud Media Collaboration, Enter Stage Right: and Action
- As digital media files grow in size and complexity, media service providers spend more time and resources than ever before developing, transferring, storing, and optimizing them. With disks still being flown across the world between the various partners involved in productions, the industry clearly needs a more efficient, pragmatic solution for collaboration and service delivery. It's time the media industry stepped into the future with the cloud. Public clouds offer collaborative ecosystems in which providers can essentially work together "under one roof" to improve the efficiency of their services, including file conversion, media ingest, file transfer acceleration, encoding/transcoding, long-term object storage, and content delivery. This presentation provides a technical overview of how public cloud ecosystems offer a media hub with high connectivity and cost-effective access to computing resources. Compelling case studies will provide practical guidance for attendees hoping to move media operations (compute, applications, storage, etc.) to the cloud.Presenter bio: Robert Jenkins is the co-founder and CEO of CloudSigma and is responsible for leading the technological innovation of the company's pure-cloud IaaS offering. Under Robert's direction, CloudSigma has established an unprecedented open, customer-centric approach to the public cloud.
- 17:15 Delivering live multi-cam content to smart devices through cloud platforms
- Broadcasters must engage a new generation of multitasking viewers who no longer sit passively in front of their televisions but browse the internet and interact with social media while watching TV. Rather than risk losing viewers, broadcasters can provide original premium content — including unseen camera angles and highlights — to viewers via second screens. The large amount of unused content that sits on live TV production servers can be used to enrich the user experience and maximize the value of content. This paper will explore technology challenges in building open and scalable platforms to deliver high quality experiences on second screens, including: - Best practices in building near-live multi-camera replay platforms on top of standard live production environments - Overcoming challenges in cloud-based production and delivery to multi-screens - Integration with social networks, archives, stats and other third-party contentPresenter bio: Werner Ramaekers has worked in software engineering for the past 20 years. He started his career in the Belgian Military as technical expert on automated testing for telecommunications systems. He also created the first intranet solution for material identification purposes in 1996. In 2000 he left the Military and worked as an internet solutions architect to create highly scalable internet portals for clients in logistics and sports. In 2004 he joined the Belgian Public Service Broadcaster VRT to be Head of Development for the team in charge of developing web applications to let television viewers and radio listeners interact with and comment on the topics of the shows. He also was part of the Business Architects for VRT's transition from tape-based to file-based production. In 2007 Werner was asked by VRT to start the R&D initiative at VRT's medialab that would show the possibilities of the internet with the quality of broadcast. 
Together with his team he made it possible to pick and select video from different online resources and watch them as your own TV channel using a modified set-top box. The launch of the iPhone made it very clear to Werner that there was a lot of opportunity for media in mobile, so he started building mobile applications to help VRT explore those opportunities. After leaving VRT, Werner joined EVS in 2011, where he serves as the Product Development Manager for the "Consumer Casting" solutions.
Wednesday, October 24, 17:45 - 18:30
Room: Exhibit Hall
Wednesday, October 24, 18:30 - 19:30
Room: Salon 1
Thursday, October 25
Thursday, October 25, 08:00 - 09:00
Room: Ray Dolby Ballroom Terrace
Thursday, October 25, 09:00 - 10:30
Room: Salon 1
- 09:00 3D Production Edit Work Flow, London 2012 Olympic Games
- For the first time, the Olympics were telecast in 3D. In the past, some 3D coverage of limited events had been available on a closed-circuit basis. The London Olympics 3D Channel covered multiple sports, with both live and ENG coverage, and provided a full 3D channel of over 275 hours of 3D programming. Part of the Olympic 3D Channel every day was a one-hour Summary program, presenting the best of the live 3D coverage as well as the EFP single-camera coverage captured that day. This was the first time a 3D daily program was attempted using a hybrid edit workflow. The paper will discuss the workflow, including the capture of the ENG footage using the Panasonic P2 3D camera, EVS servers, and Avid Media Composer editing, along with the challenge of quick turnaround and the QC process to ensure the materials were 'stereo' correct. Finally, it will cover the specific issues of what worked and what did not.Presenter bio: Jim has worked in radio and television broadcasting for over 32 years, including the ABC Radio Network, the ABC Television Network, the Advanced Television Test Center, and the Atlanta Olympic Broadcast Organization. Most recently he was EVP, Digital Television Technologies and Standards, for the FOX Technology Group. At FOX he led the development of progressive camera systems to replace film for television, 480p30 video production systems (FOX Widescreen), and the FOX HD splicing system design and deployment for the FOX Network. Prior to FOX, Jim was the Head of Engineering for the 1996 Atlanta Olympic Games, where he championed the development of the first all-digital, disk-based, super slo-motion camera/recording system (Panasonic/EVS). Jim has been involved with the Olympic Host Broadcaster since 1993, including the Atlanta (1996), Sydney (2000), Torino (2006), and London (2012) Games, assisting with the technical production and distribution of the Olympic 3D TV channel.
He attended the School of Engineering at Columbia University in the City of New York, where he attained his Bachelor of Science in Electrical Engineering in 1980 and his Master of Science in Electrical Engineering in 1990. Jim is a Fellow of SMPTE and is involved in standards development at SMPTE, the International Telecommunication Union, and the ATSC, including work on RP 85, Audio Loudness Control for DTV. He has received two Technical Emmy Awards for his work at the ATTC in the development of the ATSC standard and for the FOX HD Splicing System. Jim lives in Pacific Palisades, CA, with his wife, Maggie, and two teenage children, Jake and Juliana.
- 09:30 Challenges, Solutions, and Lessons Learned for Content Protection from 2012 Olympics
- In a groundbreaking effort, during the London 2012 Olympic Games, NBCUniversal made available to its U.S. viewers coverage of every Olympic sport on a live basis on either broadcast television, cable television, online, or mobile. NBC paid $1.18B for the exclusive rights to the London Games on every platform, and the rights owner of the Olympic Games, the International Olympic Committee (IOC), recognized that it could play a significant role in ensuring the investment of its exclusive partner in the USA was maximized and NBC's exclusivity protected. This paper covers what NBC and the IOC did to protect Olympic content from being available via illegitimate means on distribution channels such as the Internet. We will cover both the technical and operational aspects of our efforts as well as a view of the evolution of our operations from Beijing through London and show how content protection technologies and operations have improved to better manage online piracy.Presenter bio: Mike Wilkinson is Director, Content Security Technology for NBC Universal. His responsibilities encompass all aspects of content protection technology, including fingerprinting, watermarking and forensics analysis, as well as internal content security auditing and investigations. Mike was responsible for the enterprise watermarking deployment within NBC Universal as well as the development and implementation of NBCU's internal fingerprinting system. Mike is a member of NBCU's internal Content Security Committee and the MPAA's Site Security Liaisons' Committee and the PreTheft Working Groups. Mike has been a member of NBCU's Anti Piracy team since 2005. Previously, Mike worked with Technicolor/Vidfilm for approx 9 years in various engineering roles, including Systems Engineer for the Digital Media Group. Mike received his Bachelors Degree in Organizational Management from the Masters College and served four years in the United States Navy working with advanced electronics systems.
- 10:00 Challenges and Solutions In Production/Post Production for the 2012 Olympics
- Challenges and Solutions in Production/Post Production for the 2012 OlympicsPresenter bio: Rajesh Rajah is a Solutions Manager for the Cisco Videoscape/End-to-End IP Video solution and has been with Cisco for over 12 years. He specializes in building architectures and technologies for enabling cloud-based video delivery and video-optimized network transport. He was earlier a Solutions Architect with diverse experience in the end-to-end planning, design, and deployment phases of SP Carrier Ethernet and Video/IPTV engagements. Prior to focusing on IPTV/Video, he was involved in designing IP NGN and MPLS-based networks for service providers worldwide. He is a CCIE and has been a speaker at Cisco Live/Networkers, the National Association of Broadcasters (NAB), and other forums. He also has a few issued patents and a few pending with the US Patent Office on Video, Carrier Ethernet, and Cloud/Datacenter.Presenter bio: Harry Ryan has been in telecommunications for over 25 years. He has been involved with the Olympics through NBC since 2000, up through the 2012 London Games, in the role of TCP/IP Network Architect.
Room: Salon 2
- 09:00 Towards using Audio for Matching Transcoded Content
- With the advent of multiple screens for viewing, transcoding and transformation of content is becoming a mainstay of content delivery systems. But transcoding implies that copies and versions of the same content can proliferate across various storage devices. It also means that keeping track of content becomes a major problem, both from copyright and recording/indexing perspectives. In this context, video-based copy detection has emerged as a major area of research. Audio-based techniques, on the other hand, have received much less focus, but audio could provide very useful supporting copy detection cues. In this paper, we present a systematic investigation of how audio signatures undergo transformation under typical transcoding operations, including bitrate changes, codec transformation, sample rate variation, and standard audio transforms like downmixing/volume normalization.Presenter bio: Dinkar Bhat received the B.Tech. degree in electrical engineering from the Indian Institute of Technology at Madras (now Chennai), the M.S. degree in computer science from the University of Iowa, Iowa City, and the Ph.D. degree in computer science from Columbia University, New York. He is a Systems Engineer at Motorola Mobility, where he has made many contributions to advanced set-tops in the areas of video and audio, closed caption processing, and transcoding. Prior to joining Motorola, he worked as a Principal Engineer at Triveni Digital, an LG Company, in the area of data broadcasting and stream monitoring. He has published in leading journals such as the IEEE TRANSACTIONS ON PATTERN ANALYSIS, and at Society of Motion Picture and Television Engineers (SMPTE) and National Association of Broadcasters (NAB) conferences. He holds patents in the area of digital television.
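As a toy illustration of why audio signatures can survive some of these transforms, consider a fingerprint built from the signs of energy differences between adjacent frequency bands: uniform volume scaling changes every band equally, so the bit pattern is unchanged. The banding scheme below is invented for illustration and is not the method studied in the paper.

```python
import math

# Toy audio signature: one bit per adjacent band pair, recording which band
# carries more spectral energy. Volume normalization scales all bands equally,
# so the sign pattern -- the signature -- survives. Illustrative only; this is
# not any particular published fingerprinting scheme.

def band_energies(samples, n_bands):
    """Naive DFT magnitude energy grouped into n_bands frequency bands."""
    n = len(samples)
    half = n // 2
    energies = [0.0] * n_bands
    for k in range(1, half):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        energies[k * n_bands // half] += re * re + im * im
    return energies

def signature(samples, n_bands=8):
    """Bit per band pair: is band b louder than band b+1?"""
    e = band_energies(samples, n_bands)
    return tuple(int(e[b] > e[b + 1]) for b in range(n_bands - 1))

# A small synthetic tone mixture, and the same audio at half the volume.
tone = [math.sin(2 * math.pi * 3 * t / 64) + 0.5 * math.sin(2 * math.pi * 9 * t / 64)
        for t in range(64)]
quiet = [0.5 * s for s in tone]

sig_a, sig_b = signature(tone), signature(quiet)  # identical signatures
```

Transforms that reshape the spectrum (codec filtering, downmixing, resampling) can flip some of these bits, which is exactly the kind of degradation the paper's systematic investigation measures.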
- 09:30 Using Name Spotting in Audio/Video Media Identification to Improve Media Discovery Service in Digital Object Architecture
- A digital object repository, a component of the digital object architecture, stores a large number of audio/video files (as digital objects) and provides access to and retrieval of them. Sometimes metadata for audio/video files is almost entirely absent. This lack of metadata prevents the media discovery service from fetching files that contain little metadata; the service excludes those files from its result set. Relevant information, such as names, can be extracted from the content of an audio/video file and appended to that file's metadata to enhance the media discovery service. In this research, we use a Hidden Markov Model and Viterbi algorithm based name spotting module, known as IdentiFinder, to extract names. The research will help make a large number of audio/video files visible to the media discovery service through name extraction. It will also increase user satisfaction by improving the search result set.Presenter bio: Manish Goswami came to the USA in September 2010 to pursue an MS in Computer Science at California State Polytechnic University, Pomona. While studying, he worked as a research assistant on a program funded by the National Science Foundation (NSF) on Digital Object Architecture (DOA). He presently works as a student assistant for the I&IT web development department at Cal Poly Pomona. Before coming to the US he worked as a Software Engineer at BrickRed Technologies Pvt. Ltd., India, for more than two years, where he created and maintained BrickRed's website and other in-house projects in PHP, working with his reporting and project managers. His educational career includes bachelor's and master's degrees in Computer Applications in India. Outside of his professional interests he reads, cooks, and plays basketball.
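The decoding step at the heart of an HMM-based name spotter such as IdentiFinder is the Viterbi algorithm: finding the most likely hidden label sequence (here NAME vs. OTHER) for a stream of tokens. A minimal generic sketch follows, with invented toy probability tables; IdentiFinder's actual models are far richer.

```python
# Minimal Viterbi decoder over a two-state HMM (NAME vs OTHER).
# The probability tables are invented toy numbers for illustration only.

def viterbi(tokens, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observed tokens."""
    # v[s] holds (probability of best path ending in s, that path)
    v = {s: (start_p[s] * emit_p[s].get(tokens[0], 1e-6), [s]) for s in states}
    for tok in tokens[1:]:
        nv = {}
        for s in states:
            prob, path = max(
                (v[ps][0] * trans_p[ps][s] * emit_p[s].get(tok, 1e-6), v[ps][1])
                for ps in states)
            nv[s] = (prob, path + [s])
        v = nv
    return max(v.values())[1]

states = ("NAME", "OTHER")
start_p = {"NAME": 0.2, "OTHER": 0.8}
trans_p = {"NAME": {"NAME": 0.5, "OTHER": 0.5},
           "OTHER": {"NAME": 0.2, "OTHER": 0.8}}
emit_p = {"NAME": {"smith": 0.4, "maria": 0.4},
          "OTHER": {"interviewed": 0.3, "was": 0.3, "by": 0.3}}

labels = viterbi(["maria", "was", "interviewed", "by", "smith"],
                 states, start_p, trans_p, emit_p)
```

Tokens labeled NAME would then be appended to the file's metadata so the discovery service can index it, which is the workflow the abstract proposes.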
- 10:00 Practical Quality Assessment for Digitized Film Content
- The CineXPRES project introduces a practical quality assessment framework for digitized film, based on theoretical approach previously presented by one of the author at the SMPTE 2010 Fall Conference. A bottom-up workflow provides subjective quality reference from audience ratings of selected training contents shown on given displays. It also computes a permanent objective quality measure from degradation models and visual perception models. Bayesian network inference provides a "conditional subjective quality estimation" that depends on a given display. The same inference mechanism computes a "conditional objective quality estimation" through an expected cost function, by taking into account current image processing technologies and contextual information. Bayesian networks are powerful tools integrating expert knowledge and able to evolve with new information. These quality estimations must serve three purposes for long-term preservation: evaluation of the restoration work, comparison of similar contents before preservation and computation of a permanent content quality reference.Presenter bio: Background in Mathematics, Semantics and Film making, 30 years of experience in Professional Video and Film, Started as video artist. Realized special effects for film-making. Directed short films. R&D manager of teams dedicated to special effects software including film scanners and film printers' drivers since 1991. Technical manager of the European project "Limelight" aimed at the design of a complete digital film restoration system from 1994 to 1997. Founder and CEO of DUST company specialising in digital restoration and processing of film from 1997 to 2002. Author of automatic digital film restoration software. Technical Director of Doremi Technologies (Europe) from 2006 to 2012. Chief Scientific Officer of Highlands Technologies Solutions since January 2013. Designer of projection measurement system. 
His current interests include visual perception models, measurement theory and the philosophy of information.
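The Bayesian network inference the abstract describes can be illustrated with a minimal discrete query: estimate the probability of a "good" audience rating for a given display by marginalizing over a hidden degradation level. All probability tables below are invented assumptions, not the trained CineXPRES network.

```python
# Toy discrete Bayesian-network query: a "conditional subjective quality
# estimation" for one display type. Tables are illustrative assumptions.
P_DEGRADATION = {"low": 0.6, "high": 0.4}      # prior from degradation models
P_RATING = {                                    # P(rating | degradation, display)
    ("low",  "reference"): {"good": 0.9, "poor": 0.1},
    ("low",  "consumer"):  {"good": 0.8, "poor": 0.2},
    ("high", "reference"): {"good": 0.2, "poor": 0.8},
    ("high", "consumer"):  {"good": 0.4, "poor": 0.6},
}

def conditional_quality(display):
    """Expected P(rating = good) for one display, marginalizing degradation."""
    return sum(P_DEGRADATION[d] * P_RATING[(d, display)]["good"]
               for d in P_DEGRADATION)

q_ref = conditional_quality("reference")   # 0.6*0.9 + 0.4*0.2 = 0.62
q_tv = conditional_quality("consumer")     # 0.6*0.8 + 0.4*0.4 = 0.64
```

In this invented example the consumer display scores slightly higher because it masks heavy degradation, which is exactly the kind of display-conditional behavior the framework is built to capture; the real system would learn such tables from audience ratings of the training contents.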
3D Broadcast and DisplayRoom: Theatre: Chinese 6
- 09:00 They Must Be Genlocked? Missing Standards in the 3D Ecosystem
- In 2D video systems, users understood the need to genlock equipment, yet that need was never documented in any SMPTE standards or recommended practices. With the advent of digital 3D video production systems, this small oversight has left considerable room for variation. With a SMPTE documentation project on this topic reaching closure, what has been learned from that effort, and is there any additional documentation yet to be written? The presenter is the chair of the 32NF-40 AHG on 3D Production Timing and Sync.Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in four Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
- 09:30 Effects of viewing conditions on fatigue caused by watching 3DTV
- In order to enjoy a pleasant experience watching 3DTV, it is necessary to collect and analyze reliable safety assessment data. Evaluation experiments were conducted with 500 adult participants watching 3D content for approximately one hour on commercially available 46 to 50-inch 3DTVs that require the use of shutter glasses. The degree of fatigue after watching the 3DTV was evaluated under various viewing conditions based on objective and subjective indexes of fatigue. The results of the objective indexes showed that there was no statistical difference between watching 3DTV and traditional TV (i.e., watching 2D content without glasses) in the degree of decline of visual and cognitive functions due to fatigue. On the other hand, the results of the subjective indexes indicated that there were some differences between watching 3DTV and traditional TV in the sensation of fatigue, which may be attributed not to watching 3D content, but to wearing the 3D shutter glasses.Presenter bio: Toshiya Morita received a diploma from the College of Information Sciences at the University of Tsukuba, Ibaraki, Japan, in 1984. He joined NHK (Japan Broadcasting Corporation) in the same year, and has been with NHK Science and Technology Research Laboratories since 1989. He is a senior research engineer engaged in research on vision psychology, eye movement analysis, stereoscopic display and methods of objectively evaluating TV programs. He is currently on loan to NICT (National Institute of Information and Communications Technology) as a research expert, involved in the evaluation of 3DTV and 3D programs in terms of comfort and safety.
- 10:00 High Performance Polarization-Based-3D and 2D Presentation
- Systems for delivering 3D content in digital cinemas are inherently lossy. In the absence of careful attention to design, component efficiencies, and maintenance, the result can be an image luminance below 4.5 fL, even on modest sized screens. In addition, aspects of a 3D system can determine the performance of 2D presentation. Given the large installed base, methods for increasing brightness and image quality are sought which leverage existing projector platforms. This paper evaluates loss mechanisms in 3D projection systems and shows that high-brightness 3D is feasible using lamp-based illumination. These solutions can be implemented using single-projector sequential 3D, for conventional sized screens (averaging 40'), as well as premium large format screens in excess of 60'.Presenter bio: Dr. Gary Sharp joined RealD as Chief Technology Officer in 2007 after RealD acquired ColorLink. In 2011, Sharp assumed the additional title of Chief Innovation Officer. In 1995, Sharp co-founded ColorLink, where he served as Vice President of Research and Development as well as Chief Technology Officer. Under Sharp's leadership, in 2005, ColorLink played an instrumental role collaborating with RealD to develop RealD's first cinema system. Sharp is the inventor on more than 70 US issued patents relevant to display technology, polarization optics and liquid crystal projection systems, including key patents related to RealD's Cinema System. He is a co-author of Polarization Engineering for LCD Projection (Wiley & Sons, 2005). Sharp earned a B.S. in Electrical and Computer Engineering from UCSD, where he focused on Optics. He later earned a Ph.D. in Electrical and Computer Engineering from the University of Colorado, Boulder.
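The loss mechanisms the abstract evaluates compound multiplicatively, which is why 3D luminance can fall below the 4.5 fL figure cited. A minimal sketch of such a luminance budget follows; the stage efficiencies are assumed round numbers for illustration only, not measured figures from the paper.

```python
# Back-of-envelope luminance budget for a polarization-based 3D chain.
# Stage efficiencies below are invented illustrative values.
def screen_luminance_fl(open_gate_fl, stage_efficiencies):
    """Multiply the 2D open-gate luminance by each loss stage in turn."""
    lum = open_gate_fl
    for eff in stage_efficiencies.values():
        lum *= eff
    return lum

stages = {
    "sequential eye duty cycle": 0.5,   # each eye sees light half the time
    "polarization modulator":    0.8,
    "glasses transmission":      0.4,
    "silver-screen gain":        1.8,   # partially compensates the losses
}
lum_3d = screen_luminance_fl(14.0, stages)  # 14 fL 2D reference level
```

With these assumed numbers the result lands just above 4 fL, below the 4.5 fL threshold mentioned above, which illustrates why attention to each component efficiency in the chain matters.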
Thursday, October 25, 10:30 - 14:00
Room: Exhibit Hall
Thursday, October 25, 10:30 - 11:00
Room: Exhibit Hall
Thursday, October 25, 11:00 - 12:30
Room: Salon 1
- 11:00 120 Hz-frame-rate Super Hi-Vision Capture and Display Devices
- NHK has been researching and developing Super Hi-Vision (SHV), with 33 megapixels (7,680 pixels by 4,320 lines), as a next-generation ultra-high-definition broadcast system. At last year's SMPTE conference NHK reported that it had decided to double the frame rate of SHV video to 120 Hz to improve its quality of motion portrayal. At this conference, we will report on the 120-Hz SHV devices we have developed. One is a 120-Hz SHV image-capture device using three 120-Hz 33-megapixel CMOS image sensors. The sensor uses 12-bit ADCs and operates at a data rate of 51.2 Gbit/s. We have also developed a 120-Hz SHV projector using three 8-megapixel LCOS chips and e-shift technology. These 120-Hz SHV devices were exhibited at our Open House in May 2012 and demonstrated superb picture quality with less motion blur.Presenter bio: Hiroshi Shimamoto received the B.E. degree in electronic engineering from Chiba University, and M.E. and Ph.D. degrees in information processing from the Tokyo Institute of Technology, in 1989, 1991 and 2008, respectively. In 1991, he joined NHK (Japan Broadcasting Corporation). Since 1993, he has been working on research and development of UHDTV (ultra-high-definition TV) cameras and 120-fps 8K image sensors at the NHK Science & Technology Research Laboratories. In 2005-2006, he was a visiting scholar at Stanford University. He is a member of the IEEE.
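The sensor data rate quoted above can be sanity-checked from the figures in the abstract: the raw 12-bit pixel payload at 120 Hz comes to about 47.8 Gbit/s, so the 51.2 Gbit/s figure presumably also covers blanking and interface overhead (that last point is an assumption, not stated in the abstract).

```python
# Raw pixel payload for one 120 Hz, 33-megapixel, 12-bit SHV sensor.
width, height = 7680, 4320
bits_per_pixel = 12
frame_rate = 120

pixels = width * height                                  # about 33.2 megapixels
raw_gbps = pixels * bits_per_pixel * frame_rate / 1e9    # payload in Gbit/s
```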
- 11:30 Development of a 70mm, 25-Megapixel Electronic Cinematography Camera with Integrated Flash Recorder
- This paper will describe the system design of the world's first 70mm, 25-megapixel electronic-cinematography camera with an integrated flash memory recorder. Prior to 2004 the only so-called "4K" imaging systems consisted of a single line array of 4096 photo-sites or, in some instances, three 4096-photosite line arrays. A color "4K" scan of an Academy 35mm cine frame would generate a digital image equivalent to that of a 29-megapixel area-array camera sensor. In 2004, one camera manufacturer introduced a 4096 x 2048 pixel CCD camera for cinema applications and declared it a "4K" camera. Soon many would follow, and today this particular piece of obfuscation is rampant in the motion picture industry. Despite all the technical challenges in creating the first "True 4K" large-format digital cinema camera, one of the greatest challenges we face is how to end the "4K" confusion.Presenter bio: B. Petljanski received his B.Sc. and M.Sc. in EE from the University of Novi Sad in 1995. He also received his M.Sc. in EE and his Ph.D. in CE in 2001 and 2010, both from Florida Atlantic University. His captivation with digital imaging started at the NASA Imaging Technology Space Center, where he was involved in the development of high-resolution cameras and recording systems. He is currently working in Panavision's Advanced Digital Imaging group as a senior engineer. At Panavision, he has been involved in conceiving and designing equipment for image acquisition, processing and storage. His special interest lies in optoelectronics, a magical area which studies the collection of photons and their conversion into attractive images.
- 12:00 1080p50/60, 4K and Beyond: Future-Proofing the Core Infrastructure to Manage the Bandwidth Explosion
- Traditional broadcast infrastructures only had to support one version each of SDTV and HDTV, plus extensions such as RGB 4:4:4 for better chroma keys. Now we need to support 4:4:4:4 for external keys, high dynamic range (HDR) imaging, stereoscopic 3D, a 3D disparity channel, Quad-Full HD, higher frame rates etc., all of which drive real-time streaming media bandwidth requirements. How do we accommodate these new demands and stay future-proof within our core broadcast infrastructure? This paper outlines the latest developments, at the technical and standardization levels, to handle the emergence of new production formats. It examines changes to the studio infrastructure which add the flexibility needed to accommodate new production formats alongside existing formats, with maximum compatibility and minimum confusion. It then suggests methods to greatly increase the observability of studio networks, and improve functionality and compatibility at the control and monitoring plane.Presenter bio: John Hudson is Director of Product Definition and Broadcast Technology in the Gennum Products Group of Semtech Corporation. His responsibilities include technology strategy, product definition and international standardization for Semtech GPG's video and datacom business. Hudson has spent 28 years in the broadcast industry, beginning his career as a design engineer at Sony Broadcast and Professional Europe. He joined Gennum in 1999 and has been instrumental in developing the company's video and multi-media semiconductor business. An active member of SMPTE and a SMPTE Fellow, Hudson serves as Co-chair of TC 10E - Essence, and Chair of the 32NF40 Working Group on SDI Mapping. He is the author of several SMPTE Standards, and actively contributes to the development of real-time streaming media interfaces for video and D-Cinema production.
Hudson is actively involved in the formation and development of the HDcctv Alliance™; as chair of its technology committee, his responsibilities include the development of all standards and compliance testing programs. He attained an HND in Electronics and Communications Engineering from Farnborough College of Technology in 1988, is the author of 10 patents on video processing and signal integrity solutions for multi-media applications, and regularly contributes technical papers and presentations to seminars and technology events in both the broadcast and CCTV industries.
Room: Salon 2
- 11:00 Production Media Data Centers: Scalable computing, networking, virtualization, and adaptive bit rate encoding
- With "TV Everywhere" offerings driving the consumption of content, significant needs have developed in addressing the requirements of digital media supply chains. For content providers and service providers, architecting and implementing solutions to serve "TV Everywhere" require flexible and agile infrastructures. The concept of a Virtualized Production Media Data Center combines scalable computing, dense networking, and the virtualization of media applications to address the technology and business process change requirements for the Media & Entertainment industry.Presenter bio: Tom Ohanian is a member of the Digital Media Strategy team at Cisco Systems. He was on the founding team at Avid Technology and is the co-inventor of the Avid Media Composer, Film Composer, and Multicamera Systems. He has extensive broadcast engineering, production, and post-production experience and is an Academy Award and two-time Emmy recipient for scientific and technical invention.
- 11:30 A study of the optical distribution costs of multichannel baseband digital broadcasts over an FTTH network
- We have previously proposed a baseband time-division multiplexing method for the transmission of digital broadcasts over FTTH. Here, we evaluate the transmission equipment cost of the proposed method based on a simple assumed distribution network. We predict that the cost can be decreased to 11-36 % of that of conventional sub-carrier multiplexing (SCM) and FM conversion transmission methods. By analysing the dominant factors affecting the cost, we show that significant savings are achieved due to the fact that an optical signal can be received at a lower power using the proposed method than for signals transmitted using conventional methods.Presenter bio: Takeshi Kusakabe received the B.E. and M.E. degrees in science and engineering from Waseda University, Tokyo, Japan, in 1999, and 2001, respectively. He joined Japan Broadcasting Corporation (NHK) in 2001 and worked at the broadcast engineering department. From 2004 to 2010, he engaged in the research and development on optical transmission of digital broadcasting signals for cable television at NHK Science & Technology Research Laboratories. Since 2011, he has been engaged in the research and development at Ehime University. His current interests are transmissions of baseband and modulated radio frequency signals of HDTV/UHDTV. He is also a member of ITE (Institute of Image Information and Television Engineers).
- 12:00 Beyond HD: What Are the Options, 4K or 3D? What Will Be Successful, and When?
- Many broadcasters are still rolling out their HD services, but the industry is already looking into immersive media. Starting with an analysis of market drivers, the paper examines the six most important technology parameters for enhancements in the media experience. The presentation will give updated information about the most recent standardization efforts in ITU-R/SMPTE/DVB/MPEG. The eco-chain from content creation to the consumer will be investigated, and the needed standards will be described. The presentation will describe an EBU project which has looked into the options of Beyond HD from a broadcaster's point of view and shot 4K@50p test content at the RAI production studios to perform compression with HEVC and determine the required bit-rates for distribution on 4K consumer displays. 3D stereoscopic content in 1080p/50 per eye has been generated from the same scenes. The presentation will conclude by summarizing the findings.Presenter bio: Hans Hoffmann was born in Munich, Germany. He holds a diploma in telecommunication engineering from the University of Applied Sciences in Munich and a Ph.D. from Brunel University West London, School of Engineering and Design. From 1993 to 2000, Hoffmann worked at the Institut für Rundfunktechnik in research and development for new television production technologies. In 2000, he joined the European Broadcasting Union (EBU) as a senior engineer in the technical department; currently he is Head of Media Fundamentals and Production Technology. Hoffmann has chaired the EBU project groups P/BRRTV and P/PITV, which were both involved in standardization activities such as SDTI and file formats. Hoffmann is currently SMPTE Engineering Vice President. He chaired the SMPTE technology committee on Networks and File Management and has served as Engineering Director, Television.
He has been involved in EBU activities on 3D and high-definition television production and emission and set up the HDTV testing laboratory at the EBU. Hoffmann is a fellow of SMPTE and a member of the Institute of Electrical and Electronics Engineers (IEEE), FKT (Germany), and the Society for Information Display (SID).
Thursday, October 25, 12:30 - 14:00
Room: Exhibit Hall
Thursday, October 25, 14:00 - 15:30
Room: Salon 2
Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
- 14:00 Towards documenting AVC Proxies in MXF
- Inclusion of AVC coding within MXF has been a "hot topic" within SMPTE this year. One aspect of that interest has been documenting an interoperable "proxy" file format using AVC video and AAC audio. A SMPTE RDD is in preparation to do just that, and the presentation will focus on the development of that document.Presenter bio: A 35-year veteran of the broadcasting industry, Mr. Waddell is a SMPTE Fellow and is currently the Chair of the ATSC's TSG/S6, the Specialist Group on Video/Audio Coding (which includes loudness). He was the founding Chair of SMPTE 32NF, the Technology Committee on Network/Facilities Infrastructure. He represents Harmonic at a number of industry standards bodies, including the ATSC, DVB, SCTE, and SMPTE. He is the 2010 recipient of the ATSC's Bernard J. Lechner Outstanding Contributor Award and has shared in four Technical Emmy Awards. He is also a member of the IEEE BTS Distinguished Lecturer panel for 2012 through 2015.
- 14:30 4K TV Capture: An Early Experience Sharing
- As the French public broadcaster, editor of 13 channels, 4 of which are HD, francetélévisions is studying 4K TV broadcasting scenarios for its premium programs. Enhancing the sense of realness and viewing comfort while creating an immersive experience: such is the quest of any incumbent broadcaster looking to embrace the future of television. francetélévisions has undertaken 4K/60p production experiments to evaluate future workflows, and specifically to work on adapting filming methods and materials. The Quality of Experience is evaluated taking into account the impact of compression on both 4K digital content and scanned films, with specific attention to noise levels. After an early report on the available 4K cameras suitable for TV applications, the impact of compression technology on different types of 4K content will be presented, with a particular focus on the HEVC (High-Efficiency Video Coding) codec as the natural compression standard for upcoming 4K TV applications.Presenter bio: Jérôme Viéron received the Ph.D. degree in signal processing and telecommunication from Rennes 1 University in 1999. He joined Thomson R&D France as a Research Engineer working on advanced video coding. He is an active contributor to standardization efforts led by the ISO and ITU-T groups and was very active in the H.264/MPEG-4 SVC standardization process. In 2007, he joined the Video Processing and Perception Lab of Technicolor R&I as Senior Scientist, exploring new technologies for future video coding applications and standards. He joined ATEME in 2011 as Advanced Research Manager, where he is in charge of French and European research programs and works on new-generation video coding technologies.
He is an active contributor to the standardization process of HEVC (High Efficiency Video Coding), the future video coding standard, and is involved in the 4EVER (for Enhanced Video ExpeRience) consortium, which aims at researching, developing and promoting an enhanced television experience.Presenter bio: Matthieu Parmentier works as an R&D project manager at francetelevisions. He is vice-chairman of the EBU strategic project FAR (Future of Audio and Radio production), which includes the well-known PLOUD group, editor of the R128 loudness recommendation. Matthieu started his audio career recording classical music CDs. He joined francetelevisions in 1999 as a sound engineer for live programs. From 2003 to 2007, as a news reporter, he was in charge of sound recording, video editing and outdoor satellite transmissions. Since 2008, he has been working as manager for multichannel audio and HD video development projects. Matthieu holds two licence degrees, in sound recording and video post-production, and a master's degree in audiovisual research (Toulouse University).
- 15:00 Systemization of Network-Based Genlock
- Traditional synchronization systems based on blackburst, tri-level sync, DARS and timecode have little relevance in the networked world, and today represent a cumbersome and dated infrastructure overhead. These legacy systems, while continuing to provide utility, do not map into the future systems which are evolving today. SMPTE is working on a universal method for delivering any reference to a virtually unlimited number of devices over an IP network. This technology spans everything from simple configurations of a few pieces of gear to campuses, regional and global networks. This paper will investigate the opportunity for system designers to migrate to this new infrastructure, discussing the core network technology on which it is based, as well as system-level deployment considerations. In addition to replacement of existing synchronization architectures, it will explore new opportunities not possible using legacy methods.Presenter bio: Paul Briscoe began his career in the broadcasting industry in 1980 at the CBC in Toronto. Specializing in the then-new arena of digital television, he was one of the designers of the Toronto Broadcast Center, with particular focus on the plant routing system, computer graphics facilities and overall systemization and timing. Prior to CBC (and during a brief hiatus), he was involved in technology startups and provided system and product design consultation to various clients. He jumped ship from CBC in 1994 to join Leitch Technology as Product Engineer, defining products for the new digital era. Over his 19 years at Leitch (subsequently Harris Broadcast, now Imagine Communications), he was a Project Leader, Development Group Leader, R&D Manager, Manager of Strategic Engineering and Principal Engineer. He left Harris Broadcast in November 2013, and now provides system, technology, design and standards consultation to the ever-evolving media industry.
He has several patents granted and in process, is a member of SMPTE and IEEE, and is an active participant on numerous SMPTE standards committees. A lifelong Radio Amateur, Paul is also an avid curler in the winter and a cyclist and gardener in the summer.
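The abstract does not name the underlying protocol, but two-way time transfer in the style of IEEE 1588 (PTP) underlies most IP-based reference distribution, so the core mechanism can be sketched as the classic four-timestamp offset calculation. The timestamps below are invented example values, and a symmetric network path is assumed.

```python
# PTP-style two-way time-transfer offset/delay calculation, assuming a
# symmetric network path. Timestamps are invented example values.
def offset_and_delay(t1, t2, t3, t4):
    """t1: master send, t2: slave receive, t3: slave send, t4: master receive."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # how far the slave clock runs ahead
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Example: slave clock 5 us ahead of master, 100 us one-way delay.
off, dly = offset_and_delay(0.0, 105e-6, 200e-6, 295e-6)
```

Once each device knows its offset, it can discipline a local oscillator and synthesize whatever legacy reference (blackburst, tri-level, timecode) downstream gear still needs, which is what makes a single network feed able to replace the traditional sync plant.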
Room: Salon 1
- 14:00 Adventures In Cinema Sound-The Birth Of A New Technical Committee
- The current Standards for calibration of the sound produced in cinemas worldwide are based on work started in the 1970s employing acoustical real-time analysis of pink noise injected into the cinema sound system at the normalized output stages termed the B-Chain. Two years ago SMPTE convened a Study Group to evaluate these Standards in light of the evolution from analog to digital technology for the program material, the theatrical equipment, the test instrumentation and the methods of acoustical analysis. Dozens of meetings and thousands of man-hours of discussion and testing have produced a wide-ranging Report, soon to be published. Brian Vessa, Chair of the Study Group, will summarize the findings in this session.Presenter bio: Brian Vessa is an audio professional with over 35 years of experience. After attending UCLA Engineering School he became a recording engineer, producing albums and recording orchestras. He was known for hot-rodding studio gear. Brian transitioned into film post as a music editor and sound editor, became a re-recording mixer at Cannon Films and MGM, then handled audio restoration at NT Audio. He was hired by Sony Pictures in 1998, and today is their Executive Director of Digital Audio Mastering and representative to DCI. Brian is a member of the Academy Sound Branch, SMPTE and AES. He chairs the SMPTE B-Chain Study Group as well as D-Cinema and IMF AHGs. He has written many audio specifications, including a white paper on near-field mixing for home theater that has been widely adopted. Brian enjoys recording jazz and rock and mixing live sound. He is a drummer and keyboardist in the LA area, an avid backpacker and an award-winning home wine and beer maker.
- 14:30 Further investigations into the interactions between cinema loudspeakers and screens
- Modern-day data acquisition techniques allow the gathering of high-resolution polar data to assess the performance of loudspeakers. While these techniques have become common in laboratory and engineering environments during product development, they can also be applied to aspects of exploration beyond initial product development which affect the in-situ performance of loudspeakers. This paper will use modern high-resolution data acquisition techniques and analysis tools to investigate the complexity of the interactions between loudspeakers in typical locations behind the screen in a typical cinema presentation environment. Discussion will explore the impact on the loudspeaker responses of various types of screen surfaces and the distance between the screen and the loudspeakers. The measured performances will be compared to the standards for system response set out in ST-202, and the impact on the patron listening experience will be assessed.Presenter bio: With over 15 years in professional audio, Long has a diverse and extensive knowledge regarding the design and implementation of sound reinforcement and playback systems for all types of installations, ranging from simple single-speaker events to massive show spectaculars and multi-channel media presentation environments. Long holds a Master of Fine Arts from the University of Southern California's School of Cinematic Arts, where he specialized in post-production audio and worked on advanced multi-channel audio concepts.Presenter bio: Member of the Meyer Sound technical staff since 1986; holder of several design and technical patents for loudspeaker technology.
- 15:00 Frequency Response Versus Time-of-Arrival for Typical Cinemas
- Cinema equalization is typically based on the use of 1/3-octave, minimum-phase filtering to adjust the spatial average of the steady-state magnitude response from multiple microphones to the X-curve. This paper explores one aspect of this process, namely whether the use of the steady-state response is appropriate. To do this, the relationship between early-arrival and steady-state spectral characteristics for typical cinemas was examined. The comparison between early-arrival and steady-state sounds was done via spectral analyses of impulse responses measured at multiple microphone locations within the audience seating area. The cinemas surveyed varied in size between 30 and 1500 seats, and the time-gating intervals varied from 4 ms to that equivalent to steady state. When this was done, front loudspeaker measurements showed little upward spectral tilt toward "brightness" for early arrival compared to steady-state sounds, and a modest upward tilt for surround loudspeaker arrays in the largest cinemas.Presenter bio: Louis Fielder received a BS degree in electrical engineering from Caltech in 1974 and an MS degree in acoustics from UCLA in 1976. From 1976 to 1978 he worked on electronic design at Paul Veneklasen and Associates. From 1978 to 1984 he was involved in digital-audio and magnetic recording research at the Ampex Corporation. Since 1984 he has worked at Dolby Laboratories on psychoacoustics for audio design and audio coders for music distribution, transmission, and storage applications, i.e., AC-1, AC-2, AC-3, Enhanced AC-3, AAC, and Dolby E. Additionally, he has investigated perceptually derived limits of performance for digital-audio conversion, low-frequency loudspeaker systems, and loudspeaker-room equalization. Currently, he is working on cinema equalization and acoustics. He is a fellow of the AES, a senior member of the IEEE, and a member of SMPTE and the ASA.
He was on the AES Board of Governors from 1990 to 1992, served as President from 1994 to 1995, and as Treasurer from 2005 to 2009.
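The time-gating procedure described in this abstract (truncating a measured impulse response at a chosen interval before analysis) can be sketched on a synthetic response. The decay constant and levels below are invented, not measured cinema data, and the sketch compares gated energy rather than the per-band spectra the paper analyzes.

```python
import math
import random

# Illustrative time gating of a synthetic room impulse response: truncate
# the response at a gate time and compare how much energy each gate captures.
FS = 48000
random.seed(0)
N = FS // 2
TAIL_DECAY = 0.3 * FS   # reverberant decay constant in samples (assumed)
ir = [1.0] + [0.3 * random.gauss(0.0, 1.0) * math.exp(-i / TAIL_DECAY)
              for i in range(1, N)]             # direct sound + diffuse tail

def gated_energy(response, gate_s, fs=FS):
    """Energy of the response truncated gate_s seconds after the direct sound."""
    return sum(x * x for x in response[: int(gate_s * fs)])

early = gated_energy(ir, 0.004)          # the paper's shortest 4 ms gate
steady = gated_energy(ir, len(ir) / FS)  # effectively steady state
early_fraction = early / steady          # the paper compares spectra per band
```

A short gate isolates the direct sound and first reflections, while the full window includes the reverberant field; the paper's finding is that, for front loudspeakers, the two views differ spectrally less than one might expect.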
Thursday, October 25, 15:30 - 16:00
Thursday, October 25, 16:00 - 17:30
Room: Salon 2
Chair: Harvey Arnold (Sinclair Broadcast Group, USA)
- 16:00 Broadcasting video over the Cellular network and the Internet
- Huge chunks of the off-air broadcast and microwave television spectrum have slowly and systematically been lopped off in deference to the burgeoning demand and unquenchable thirst for Broadband Services. The first wave came with the DTV conversion and then again with the 2 GHz ENG BAS analog to digital spectrum reduction. All the while, the volume of news gathering content and distribution has dramatically increased to keep pace with the public's insatiable hunger for these ever evolving media rich services. So how can the broadcaster function in the midst of this escalating and diametrically opposed broadcast environment? The answer is through increased efficiencies in modulation and encoding techniques and leveraged augmentation of 3G/4G wireless infrastructures. This paper explores the new use of bandwidth-reducing modulation techniques and the shift from the use of MPEG-2 to MPEG-4 (H.264) video encoding, thereby creating higher value propositions for this encroached upon television broadcast spectrum.Presenter bio: Nuraj Lal Pradhan is a Wireless Network Engineer at Vislink, Inc. In this role Nuraj utilizes his expertise in design and development of protocols and technologies in Core IP, Cellular and Wireless Sensor/Mesh Networks. His focus is the development of video adaptation algorithms over wireless (cellular and mesh) networks. Nuraj earned his Ph.D. in Electrical Engineering from the City University of New York. In addition, Nuraj holds a Master of Science Degree in Communication Networks and Services, a Master of Engineering Degree in Telecommunications as well as a Bachelor of Science Degree in Electronic Engineering. His achievements include a Patent Disclosure Application for “Distributed Power Management Algorithm for Mobile Ad-Hoc Wireless Networks”. Nuraj's work and education in the wireless and telecommunications field has provided worldwide exposure to standards and working environments including Europe, Asia and the USA.
- 16:30 Multiformat Operation - System Implications and Solutions for Routing Switchers
- This paper discusses the formats and media involved when routing simultaneous, multiple video and audio formats on a variety of physical interconnections, examines the implications of operating in such an environment, proposes possible operational practices, and reviews the practical solutions available. The simultaneous use of multiple video and audio formats, on a variety of physical interconnections, coupled with the demand for increased efficiency, has created new challenges for today's systems engineers and planners when specifying a routing switcher. Transitioning to various high-definition video formats and increasingly dense audio formats has increased the overall complexity of these multi-format systems: a 36,864 x 36,864 embedded audio matrix for an 1152 x 1152 video router! Simultaneously, audio and video processing requirements should be accommodated. Internal processing enables system and operational efficiencies: control is simplified, and a flexible input/output arrangement allows easy reconfiguration between uses. Finally, a glimpse into the future: will it get easier?Presenter bio: Currently Senior Product Manager for Snell Limited (UK), responsible for Advanced Routing and Processing Systems. Previous employment includes 20 years with Vistek Electronics Limited (UK), where I was Commercial Director prior to the takeover by Probel (UK) and subsequent merger with Snell and Wilcox. Qualifications: BSc (Hons) in Physics; Chartered Engineer (C.Eng); Member of IEEE.
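The matrix figure quoted in that abstract follows directly from the channel counts: dividing the audio matrix size by the video port count shows the router is treating each video signal as carrying 32 embedded audio channels (an inference from the abstract's two numbers, not a stated specification).

```python
# Arithmetic behind the router sizes quoted in the abstract: an 1152 x 1152
# video matrix whose signals carry embedded audio implies an audio matrix of
# (video ports x audio channels per signal) square.
video_ports = 1152
audio_channels_per_signal = 36864 // video_ports      # the figures imply 32

audio_matrix_size = video_ports * audio_channels_per_signal
```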
- 17:00 Here Comes Ethernet
- Ethernet has been around since 1973, and you're probably aware of many companies that have struggled to make it work for audio and video applications. But those are proprietary systems where often Box A can't talk to Box B. So the IEEE, which owns the Ethernet standard, has been working on an extension called 802.1BA AVB, where AVB stands for Audio Video Bridging. Finally, this standard may herald a new way to design, install and operate audio and video facilities.Presenter bio: Steve Lampen has worked for Belden for twenty-one years and is currently Multimedia Technology Manager and also Product Line Manager for Entertainment Products. Prior to Belden, Steve had an extensive career in radio broadcast engineering and installation, film production, and electronic distribution. Steve holds an FCC Lifetime General License (formerly a First Class FCC License) and is an SBE Certified Broadcast Radio Engineer. On the data side he is a BICSI Registered Communication Distribution Designer. In 2010, he was named "Educator of the Year" by the National Systems Contractors Association (NSCA), and in 2011 was named "Educator of the Year" by the Society of Broadcast Engineers. His book, "The Audio-Video Cable Installer's Pocket Guide", is published by McGraw-Hill. His column "Wired for Sound" appears in Radio World Magazine. He can be reached at email@example.com
Room: Salon 1
- 16:00 Tutorial on Critical Listening of Multi-channel Audio Codec Performance
- Listening for impairments introduced by multichannel audio codecs is an important task. Classical objective methods are not adequate for assessing audio coding schemes; accordingly, the ITU-R BS.1116 and BS.1534 recommendations provide guidelines for subjective evaluation of codecs. This paper provides a tutorial on the proper conditions for reliable codec testing. Key components covered include proper experimental design, selection and training of the listening panel, development of the test methodology, selection of balanced program material, loudspeaker/room and sound-field requirements, listening for artifacts, and statistical analysis. The paper addresses these components, including the sound-field requirements since, as the ITU notes: "The characteristics of the reference sound field at the listening area are most important for the subjective perception of, or the quality assessment of, auditory events and their reproducibility at other listening places or rooms. These characteristics result from the interaction of the loudspeaker(s) and the listening room."Presenter bio: Sunil Bharitkar received his Ph.D. in Electrical Engineering from the University of Southern California (USC) in 2004. He has published over 50 technical papers, holds 6 patents in the area of signal processing applied to audio and acoustics, and is co-author of a textbook (Immersive Audio Signal Processing, Springer Verlag). He co-founded Audyssey Laboratories and was VP of Research there before joining Dolby Laboratories in the Office of the CTO as Director of Technology Strategy. His room equalization research, at both USC and Audyssey, has resulted in several patented or patent-pending co-inventions. His research applies signal processing to audio and acoustics, using theory and knowledge from acoustics, signal processing, and auditory perception. 
Some of his recent research leading to inventions includes room equalization, dynamic noise compensation for automobile/home-theater/airplane environments, bandwidth extension of speech signals affected by telephony channels, psychoacoustic and physical bass extension, surround envelopment, noise compensation and suppression for telephony, and spatial/3-D audio.
- 16:30 Scalable Format and Tools to Extend the Possibilities of Cinema Audio
- Surround sound has been making cinematic storytelling more compelling and immersive for over 30 years. The first widely deployed surround systems used magnetic recording. Later, optical recording became standard, enabling up to 7.1 channels of audio. With the transition from film to digital distribution, there is an opportunity for the next generational step forward. In this paper we describe a new surround sound format that dramatically advances the capabilities of cinema sound. The format was developed in close cooperation with industry stakeholders and was specifically designed to provide the most desired new capabilities and a path for future enhancements, while respecting and leveraging the strengths and know-how of the current sound format and pipeline. In particular, the new system maintains and advances the ability to deliver impeccable audio quality, and flexibly extends the creative possibilities to meet the needs and aspirations of both content creators and exhibitors.Presenter bio: Charles Robinson received BSEE and MSEE degrees from the University of Illinois, where he specialized in signal processing and began his professional career just as real-time digital audio signal processing was becoming a practical reality. Since joining Dolby Research in 1995, his areas of research have included acoustics, audio coding, interactive audio, and spatial audio, with applications to broadcast, gaming, and cinema. Mr. Robinson has authored or coauthored over a dozen patents in audio signal processing, contributed to two Emmy-award-winning products, and is a member of the AES and IEEE.
- 17:00 Lee de Forest and the Invention of Sound Movies, 1918-1926
- Lee de Forest received his Ph.D. in physics from Yale in 1899 and entered the 20th century as an inventor. By 1906 he had patented his signature invention, the three-element vacuum tube he called the "Audion." Beginning in 1918 he improved upon the earlier work of Bell and Ruhmer and patented a system for writing sound on motion picture film for synchronized talking pictures. Between 1920 and 1926 he worked with fellow inventor Theodore Case to develop the Phonofilm system of variable-density recording. De Forest presented his work to the SMPE on four occasions between 1923 and 1926. Even though the de Forest system would not end up the preferred one for sound, his tube was the key: it allowed amplification of audio through loudspeakers, which made it possible for audiences to experience talking pictures. In 1960 de Forest received an Oscar for his sound-on-film contributions.Presenter bio: Mike Adams has been a radio personality and a filmmaker. Currently he is a professor of radio, television, and film at San Jose State University, where he has been a department chair and an associate dean. As a researcher and writer on broadcast and early technology history, he created two award-winning documentaries for PBS, "Radio Collector" and "Broadcasting's Forgotten Father." He has published numerous articles and four books, including "Charles Herrold, Inventor of Radio Broadcasting" and "Lee de Forest, King of Radio, Television, and Film" (Springer Science, 2012).