Program
Mon, 10 12
Mon, 10 12 8:30 - 9:30
P1: Plenary 1
In many applications, we face the challenge of modeling the interactions between multiple observations. A popular and successful approach in machine learning and AI is to hypothesize the existence of certain latent (or hidden) causes which help to explain the correlations in the observed data. The (unsupervised) learning problem is to accurately estimate a model with only samples of the observed variables. For example, in document modeling, we may wish to characterize the correlational structure of the "bag of words" in documents, and in community detection, we wish to discover the communities of individuals in social networks. Here, a standard model is to posit that documents are about a few topics (the hidden variables) and that each active topic determines the occurrence of words in the document. The learning problem is, using only the observed words in the documents (and not the hidden topics), to estimate the topic probability vectors (i.e., to discover the strength with which words tend to appear under different topics). In practice, a broad class of latent variable models is most often fit with either local search heuristics (such as the EM algorithm) or sampling-based approaches.

This talk will discuss a general and (computationally and statistically) efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models (for clustering), hidden Markov models (for time series), and latent Dirichlet allocation (for topic modeling and community detection)---by exploiting a certain tensor structure in their low-order observable moments. Specifically, parameter estimation is reduced to the problem of extracting a certain decomposition of a tensor derived from the (typically second- and third-order) moments; this particular decomposition can be viewed as a natural generalization of the (widely used) principal component analysis method.
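The moment-matching principle behind the talk can be seen in a one-dimensional toy case: a mixture of two point masses at `a` and `b` with weight `w` is fully determined by its first three moments, and Prony's method recovers the atoms as roots of a quadratic built from a linear system in those moments. This is a hypothetical illustration of the idea, not the talk's tensor-decomposition algorithm; the values below are made up.

```python
import numpy as np

def recover_two_atoms(m1, m2, m3):
    # Atoms a, b are roots of x^2 - p*x - q, where p, q satisfy the
    # moment recurrence m_{k+2} = p*m_{k+1} + q*m_k (Prony's method).
    p, q = np.linalg.solve([[m1, 1.0], [m2, m1]], [m2, m3])
    a, b = np.roots([1.0, -p, -q])
    w = (m1 - b) / (a - b)  # match the first moment to get the weight on a
    return a, b, w

# Ground truth: atoms 1 and 3, with weight 0.25 on atom 1, giving
# moments m1 = 2.5, m2 = 7, m3 = 20.5.
m1, m2, m3 = 2.5, 7.0, 20.5
a, b, w = recover_two_atoms(m1, m2, m3)
```

With higher-order moment tensors instead of scalar moments, the same idea extends to multivariate latent variable models, which is the setting the talk addresses.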
Mon, 10 12 9:50 - 11:10
Mo1A: Coding for Distributed Storage (Invited Session)
Mon, 10 12 11:30 - 12:30
Mo2A: Index Coding
Mo2B: Information Theory & Statistics
Mon, 10 12 14:30 - 15:50
Mo3A: Interactive Communication (Invited Session)
- Relative Discrepancy does not separate Information and Communication Complexity
- Information Complexity in Parallel Repetition Theorems
- Lower bounds for interactive function computation
- Information Complexity Density and Simulation of Protocols
Mo3B: Network Coding
Mon, 10 12 16:10 - 17:50
Mo4A: Distributed Storage
- Optimal Binary Locally Repairable Codes with Joint Information Locality
- Local Codes with Addition Based Repair
- On Algebraic Manipulation Detection Codes from Linear Codes and their Application to Storage Systems
- Repair Scheduling in Wireless Distributed Storage with D2D Communication
- Erasure codes with symbol locality and group decodability for distributed storage
Mon, 10 12 16:10 - 17:30
Mo4B: Network Information Theory 1
Tue, 10 13
Tue, 10 13 8:30 - 9:30
P2: Plenary 2
Detecting or estimating a dense community from a network graph offers a rich set of problems involving the interplay of algorithms, complexity, and information limits. This talk will present an overview and recent results on this topic. Based on joint work with Yihong Wu and Jiaming Xu.
Tue, 10 13 9:50 - 11:10
Tu1A: Statistics and Computation (Invited Session)
Tue, 10 13 11:30 - 12:30
Tu2A: Channel Capacity
Tu2B: Coding Theory
Tue, 10 13 14:30 - 15:50
Tu3A: Spatially Coupled Codes
Tu3B: Broadcast & Multiple Access Channels
- Error and Erasure Exponents for the Asymmetric Broadcast Channel
- Joint Source-Channel Coding for Broadcast Channel with Cooperating Receivers
- Duality between finite numbers of discrete multiple access and broadcast channels
- A Proof of the Strong Converse Theorem for Gaussian Multiple Access Channels
Tue, 10 13 16:10 - 17:30
Tu4A: LDPC and Polar Codes
Tu4B: Network Information Theory 2
Wed, 10 14
Wed, 10 14 8:30 - 9:30
P3: Plenary 3
Recently, much attention has been paid to the finite-length analysis of basic problems such as source coding and channel coding. In this talk, we first summarize the information-spectrum approach to this kind of problem (called second-order information theory), covering source coding, resolvability, intrinsic randomness, hypothesis testing, and channel coding in a unified framework. In particular, special emphasis is placed on the classes of mixed sources and mixed channels, defined as general mixtures of stationary memoryless sources and of stationary memoryless channels, respectively. Although it is generally quite hard to give single-letter characterizations of the formulas for general sources and general channels, we show that compact single-letter characterizations can successfully be derived for the class of mixed sources/channels. In particular, we give a detailed proof of the second-order theorem on hypothesis testing with mixed sources.
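For channel coding, the second-order refinement the abstract refers to is the normal approximation R(n, eps) ≈ C - sqrt(V/n) * Q^{-1}(eps), where C is capacity and V is the channel dispersion. A minimal numerical sketch for a binary symmetric channel follows; the crossover probability, blocklength, and error probability are illustrative values, not figures from the talk.

```python
from math import log2, sqrt
from statistics import NormalDist

def bsc_second_order_rate(p, n, eps):
    # Capacity of BSC(p) in bits per channel use: C = 1 - h(p).
    C = 1 - (-p * log2(p) - (1 - p) * log2(1 - p))
    # Channel dispersion of BSC(p): V = p(1-p) * log2((1-p)/p)^2.
    V = p * (1 - p) * log2((1 - p) / p) ** 2
    # Q^{-1}(eps) via the inverse standard normal CDF.
    Qinv = NormalDist().inv_cdf(1 - eps)
    # Normal approximation to the best achievable rate.
    return C - sqrt(V / n) * Qinv

# At blocklength 1000 and error probability 1e-3, the backoff from
# capacity (about 0.5 bits for p = 0.11) is already substantial.
rate = bsc_second_order_rate(p=0.11, n=1000, eps=1e-3)
```

The sqrt(V/n) term is exactly the second-order term whose behavior under mixed sources and mixed channels the talk characterizes.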
Wed, 10 14 9:50 - 11:10
We1A: Ranking and Clustering (Invited Session)
Wed, 10 14 11:30 - 12:30
We2A: Topics in Information Theory
Wed, 10 14 11:30 - 12:50
We2B: MIMO
- A Mixed-ADC Receiver Architecture for Massive MIMO Systems
- The Ergodic High SNR Capacity of the Spatially-Correlated Non-Coherent MIMO Channel Within an SNR-Independent Gap
- On the Degrees of Freedom of the Three-User MIMO Interference Channel with Delayed CSIT
- How Many Users Are Needed for Non-Trivial Performance of Random Beamforming in Highly-Directional mm-Wave MIMO Downlink?
Thu, 10 15
Thu, 10 15 8:30 - 9:30
P4: Plenary 4
In this talk, we will discuss a new class of models and techniques that can effectively model and extract rich low-dimensional structures in high-dimensional data such as images and videos, despite nonlinear transformations, gross corruption, or severely compressed measurements. This work leverages recent advances in convex optimization for recovering low-rank or sparse signals, which provide both strong theoretical guarantees and efficient, scalable algorithms for solving such high-dimensional combinatorial problems. These results and tools in fact generalize to a large family of low-complexity structures whose associated regularizers are decomposable. We illustrate how these new mathematical models and tools could bring disruptive changes to solutions of many challenging tasks in computer vision, image processing, and pattern recognition. We will also illustrate some emerging applications of these tools to other data types such as web documents, image tags, microarray data, audio/music analysis, and learning graphical (neural network) models.

This is joint work with John Wright of Columbia, Emmanuel Candes of Stanford, Zhouchen Lin of Peking University, and my students Zhengdong Zhang and Xiao Liang of Tsinghua University, and Arvind Ganesh, Zihan Zhou, Kerui Min and Hossein Mobahi of UIUC.
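A basic building block of the convex low-rank recovery methods the abstract describes is singular-value soft-thresholding, the proximal operator of the nuclear norm (used, e.g., inside robust PCA solvers). The sketch below shows the operator itself on a toy matrix; the threshold value is illustrative, not a tuned parameter from the talk.

```python
import numpy as np

def svt(M, tau):
    # Soft-threshold the singular values of M: prox operator of the
    # nuclear norm, tau * ||.||_*. Singular values below tau are
    # zeroed, which promotes low rank.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A matrix with singular values (10, 2, 0): thresholding at tau = 3
# shrinks them to (7, 0, 0), leaving a rank-one matrix.
A = np.diag([10.0, 2.0, 0.0])
B = svt(A, tau=3.0)
```

Iterating this operator against a data-fidelity term is what yields the scalable algorithms with the theoretical guarantees mentioned in the abstract.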
Thu, 10 15 9:50 - 11:10
Th1A: Inference and Computation (Invited Session)
- Robust Regression via Hard Thresholding
- Individualized Rank Aggregation using Nuclear Norm Regularization
- Computing Large-scale Matrix Functions through Stochastic Chebyshev Expansions
- Source Obfuscation in Anonymous Messaging
Thu, 10 15 11:30 - 12:30
Th2A: Secrecy Systems
Th2B: Topics in Networks
Thu, 10 15 14:30 - 15:50
Th3A: Wiretap Channels
Th3B: Source Coding
Thu, 10 15 16:10 - 17:50
Th4A: Security & Privacy in Networks
- Secure Degrees of Freedom of the Interference Channel with No Eavesdropper CSI
- The Two-Hop Interference Untrusted-Relay Channel with Confidential Messages
- The Capacity of a Broadcast Channel with Gaussian Jamming and a Friendly Eavesdropper
- Information integrity between correlated sources through Wyner-Ziv coding
- Privacy on Hypothesis Testing in Smart Grids
Th4B: Lossy Compression
- Polar Lattices are Good for Lossy Compression
- Rate-Distortion Function for a Heegard-Berger Problem with Two Sources and Degraded Reconstruction Sets
- Indirect Rate-Distortion Function of a Binary i.i.d. Source
- Optimum One-Bit Quantization
- Almost Lossless Analog Compression without Phase Information - Complex Case