Time (New York)

Wednesday, February 6

10:00 am-10:30 am Introduction & welcome
10:30 am-12:00 pm Infrastructure and measurements
1:00 pm-2:30 pm Backbone and wireline access
3:00 pm-4:30 pm Wireless networks, first responders, social networks

Wednesday, February 6 10:00 - 10:30

Introduction & welcome

Chair: Henning Schulzrinne (FCC, USA)

Wednesday, February 6 10:30 - 12:00

Infrastructure and measurements

Third-Party Measurement of Network Outages in Hurricane Sandy
John Heidemann (University of Southern California, USA)

This abstract summarizes recent work we have done to enable third-party measurement of edge-network resiliency for Hurricane Sandy.

Presenter bio: John Heidemann conducts research at ISI and teaches in USC's Computer Science Department. He received his PhD in computer science from UCLA's Computer Science Department in 1995 under Gerald Popek, and his BS in computer science from the University of Nebraska-Lincoln in 1989.
Lessons from Field Damage Assessments about Communication Networks Power Supply and Infrastructure Performance during Natural Disasters with a focus on Hurricane Sandy
Alexis Kwasinski (University of Texas, USA)

This abstract summarizes observations about communication infrastructure performance obtained from field damage assessments I made after notable recent natural disasters and explores ways of reducing the intensity and duration of communication network outages. The discussion is framed by the observed effects of Hurricane Sandy on communication networks, drawn from damage assessments I performed shortly after Sandy made landfall. Other recent notable natural disasters that I studied through field damage assessments include hurricanes Katrina (2005), Dolly, Gustav and Ike (2008), Irene (2011), and Isaac (2012); the Feb. 2010 earthquake and tsunami in Chile; the Feb. 2011 earthquake in Christchurch, New Zealand; and the March 2011 earthquake and tsunami in the Tohoku Region of Japan. A presentation on this topic would include extensive photographic evidence of the observations I made during my field damage assessments of these disasters, with particular focus on Sandy. Links to samples of such photographic evidence and a compilation of most of my published work on this subject can be found, together with my bio, at http://users.ece.utexas.edu/~kwasinski/BioFCC.pdf

Power outages are one of the main causes of communication network outages during natural disasters, and Hurricane Sandy was no exception. Reports issued by Verizon indicate that 300 central offices were affected, and information collected during my damage assessment indicates that the majority of them were affected by power outages but did not lose service. Unlike Katrina, there is no indication of central office outages caused by engine fuel starvation originating in diesel procurement and delivery issues. Still, two key central offices in Lower Manhattan had at least some of their operations, if not all, interrupted when flooding damaged power backup equipment, including onsite diesel generators and fuel pumps, in their basements or first floors. These issues could have been prevented by locating power equipment on higher floors or by using watertight doors instead of conventional doors to access the central office buildings. Watertight doors are found in many of Japan's NTT central offices. In some sites (e.g., Kamaishi) these doors were effective in limiting the damage during the March 2011 tsunami. Power was also the main reason why an average of 25% of the cell sites in the area affected by Sandy lost service. During the damage assessment I observed that the majority of base stations in cities and towns in the area affected by Sandy are located on building roofs and have no permanent gensets. This is different from the Gulf Coast, where most cell sites are placed on their own parcels and many of them have permanent gensets. Placing base stations on building roofs led to power restoration issues: although some of these cell sites have standard power sockets at ground level with easy access to connect a portable diesel generator, many sites did not have any pre-arranged way of connecting a generator at ground level, so a cable had to be run from the roof to a portable genset on the street. This ad-hoc solution led to service restoration delays because of difficulties in gaining building or roof access. The discussion will also consider the increased need of communication network users for reliable power, e.g. to charge their cell phones.

Recently, more comprehensive solutions to address power issues have been proposed. In recent years, implementation of smart grid technologies has been seen as a way of improving electric grid performance during natural disasters. Although some of these technologies allow faster identification of electric outages and of general grid condition, repairing power grid damage still requires human intervention. Moreover, as I explained in an INTELEC 2010 paper exploring the effect of smart grid technologies on communication networks, my damage assessments and statistical outage data indicate that power grids are inherently very fragile systems due to their centralized power distribution and control architectures. Since most utility-based smart grid technologies still maintain such centralized architectural and control approaches, the power supply availability improvements enabled by these smart grid technologies are limited. As I explained in several of my papers, network operator-based solutions and, in particular, microgrids (independently controlled and confined power grids with their own local power generation sources, such as Verizon's Garden City central office that operated during Sandy powered by 7 fuel cells) present a better technical solution. Moreover, for high-power loads, such as central offices, microgrids are cost competitive with respect to conventional power solutions. Still, operation of some microgrid local power generation units may be affected during disasters because of their dependency on other infrastructures, such as roads for diesel supply. Solutions to address dependency on other infrastructures include using technologies that have a low probability of being affected by a given hazard (e.g., natural gas in hurricane-prone areas), diversifying local power supply technologies, using local energy storage devices (e.g., batteries), and using renewable energy sources that do not depend on other infrastructures, such as photovoltaic (PV) systems or wind generators. Yet, the relatively large footprint and output variability of PV and wind generators limit their application. Limited space and higher relative cost make implementation of alternative power solutions even more difficult in distributed network elements: base stations, outside plant fiber nodes, such as digital loop carrier (DLC) terminals, and CATV amplifiers supporting telephony services. Although fuel cells have been deployed in some cell sites and natural gas generators have been installed in many DLC terminals along the Gulf Coast, different city architecture approaches make such solutions difficult to reproduce on the Northeastern Atlantic Coast, leading to unsafe ad-hoc solutions observed with Sandy, such as placing portable generators on top of pole-mounted CATV amplifiers.

Although the discussion focuses on power supply issues, problems related to other communication network infrastructure components will also be discussed. For example, the discussion will explore the network vulnerability implications of using fiber-optic remote terminal cabinets to restore service to copper cable facilities that were damaged by a combination of pressurization failures and flooding in the aftermath of several natural disasters (including Sandy).

Presenter bio: Alexis Kwasinski received his B.S. degree in electrical engineering in 1993 and a graduate specialization degree in communications in 1997. Before returning to school to earn his M.S. and Ph.D. degrees in electrical engineering from the University of Illinois at Urbana-Champaign in 2005 and 2007, respectively, he worked for 10 years in the communications industry, first designing and planning outside plant telephony networks for Telefonica and then working as a technical support engineer and technical consultant for Lucent Technologies Power Systems. Since 2007 Dr. Kwasinski has been working as a faculty member of The University of Texas at Austin. He has recently been promoted to the rank of Associate Professor with tenure (effective September 2013). As shown by his many publications, Dr. Kwasinski has performed substantial research in the area of communication systems performance during natural disasters. As part of this research he has conducted several field damage assessments after natural disasters. The most notable ones include hurricanes Katrina, Ike and Sandy, and the February 2010 and March 2011 earthquakes and tsunamis in the Maule Region of Chile and the Tohoku Region of Japan, respectively. In 2005 Dr. Kwasinski received the IEEE Power Electronics Society Joseph J Suozzi INTELEC Fellowship for his research on using microgrids to power communication facilities, and in 2007 Dr. Kwasinski received the best paper award at the IEEE International Telecommunications Energy Conference for his work on improving power supply for communication networks during natural disasters. Dr. Kwasinski also received a National Science Foundation CAREER award to study the use of microgrids to improve power supply availability of critical loads (particularly communication sites) during natural disasters. Alexis Kwasinski is now the chair of a technical thrust within the IEEE Power Electronics Society's Technical Committee on Communications Energy Systems dedicated to improving communication network infrastructure performance during natural disasters. He is also the Vice Chair of the Technical Committee of Electric Power and Telecommunications of the American Society of Civil Engineers (ASCE) Technical Council of Lifeline Earthquake Engineering. Dr. Kwasinski is also an active member of Austin's smart grid initiative called Pecan Street, Inc. Website links with sample information about his work on the effects of natural disasters on communication networks can be found at http://users.ece.utexas.edu/~kwasinski/research.html. A compilation of published material supporting the discussion in this abstract can be obtained in an 80 MB pdf file found at http://users.ece.utexas.edu/~kwasinski/disasters%20comp.pdf. Information and a preliminary report about Hurricane Sandy can be found at http://users.ece.utexas.edu/~kwasinski/sandy.html and http://users.ece.utexas.edu/~kwasinski/preliminary%20telecom%20report%20v3%20comp.pdf, respectively.
Lessons learned by "measuring" the Internet during/after the Sandy storm
Emile Aben (RIPE NCC, The Netherlands); Alistair King (CAIDA, UCSD, USA); Karyn Benson (CAIDA/UCSD, USA); Young S. Hyun (CAIDA/University of California, San Diego, USA); Alberto Dainotti (CAIDA, UC San Diego, USA); Kimberly Claffy (CAIDA, USA)

After causing extensive damage in the Caribbean, superstorm Sandy had devastating effects on the US East Coast, including heavily affecting Internet infrastructure in the region. Researchers at CAIDA have been developing techniques for the detection and analysis of large-scale Internet outages through correlation of a variety of network measurements, including control-plane signaling, passive traffic collection, and distributed active probing. Experiments with correlating different sources of data have inspired our current pursuit of new methodologies, with the ultimate goal of developing an operational capability to detect and monitor Internet outages in real time [This work is partly funded by the NSF SaTC program CNS-1228994].

However, Sandy was a significantly different type of disruption than those we have studied thus far, with characteristics that limited and in some cases prevented us from being able to thoroughly analyze the event. These characteristics include:

  • Movement over a large area, with no fixed epicenter like an earthquake has.

  • High level of Internet penetration in the affected region, including major hubs for international Internet connectivity.

  • Disruption was limited to only a subset of networks/hubs in the affected region, making it harder to identify geographic areas of massive impact.

Nonetheless, we were able to observe some of the impacts of Sandy through Internet measurements, and we also learned some lessons that will allow us to improve our methodologies for studying similar events in the future. We would report on both these aspects at the workshop.

The measurements we would discuss include: analysis of changes in Internet routing to Europe and Asia inferred from traceroutes conducted using the RIPE Atlas network; analysis of traffic reaching the UCSD Network Telescope that originated from the areas affected; and traceroute (forward path) measurements through CAIDA's Archipelago infrastructure.
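
As a rough illustration of the telescope-based part of this approach (a minimal sketch, not CAIDA's actual pipeline; the geolocation step, time bucketing, and drop threshold are all assumptions), one can track the number of distinct source addresses from an affected region that reach the darknet each hour and flag hours where that count collapses:

```python
# Minimal sketch of darknet-based outage detection (illustrative only, not
# CAIDA's pipeline). Assumes packets have already been geolocated to the
# region of interest; input data would be (timestamp, source IP) pairs.
from collections import defaultdict

def unique_sources_per_hour(packets):
    """packets: iterable of (timestamp: datetime, src_ip: str) from the region."""
    buckets = defaultdict(set)
    for ts, src in packets:
        buckets[ts.replace(minute=0, second=0, microsecond=0)].add(src)
    return {hour: len(srcs) for hour, srcs in buckets.items()}

def flag_outage_hours(series, baseline_hours=24, drop_ratio=0.5):
    """Flag hours whose unique-source count falls below drop_ratio times the
    mean of the preceding baseline_hours (an assumed, simplistic heuristic)."""
    hours = sorted(series)
    flagged = []
    for i, hour in enumerate(hours):
        window = [series[h] for h in hours[max(0, i - baseline_hours):i]]
        if window and series[hour] < drop_ratio * (sum(window) / len(window)):
            flagged.append(hour)
    return flagged
```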

Presenter bio: Alberto Dainotti is Research Scientist at CAIDA (Cooperative Association for Internet Data Analysis), University of California San Diego, USA. In 2008 he received his Ph.D. in Computer Engineering and Systems at the Department of Computer Engineering and Systems of University of Napoli "Federico II", Italy. He co-authored several peer-reviewed papers published at conferences and in scientific journals in the field of Internet measurement, traffic analysis, and network security. He serves for the European Commission as an independent reviewer/evaluator of projects and project proposals co-funded by the EC.
TSCOPE: Real-time Mobile Data Collection Technology Using Spatiotemporal Data Casting
Kang-Won Lee (IBM Research, USA); Ho Yin Starsky Wong (IBM T.J. Watson Research Center, USA)

To support decision-making in a disaster recovery situation, it is critical to collect data about the situation and events as they unfold in the area of interest. Consider a scenario in which a government agency is trying to assess the flooding situation in a certain area by collecting data, e.g., pictures and text messages, from the people in the area. Ideally we want the agency to be able to send a query to mobile phones owned by people in the affected area without requiring knowledge of who they are or where they are.

TSCOPE (which stands for telescope) is a user-friendly service for mobile data collection that sends location-oriented queries to users to gather situational data without requiring explicit knowledge of the recipients' contact information or whereabouts. This is an ideal service for enabling large-scale data collection from willing participants in a region of interest when a disaster hits. Using TSCOPE, a user can send text-based queries to mobile devices in a particular region or near a point of interest. The area of interest can be precisely specified using geospatial concepts such as bounding polygons (e.g., Rockefeller Center), distance-based boundaries (e.g., within 5 minutes' driving distance from Ground Zero), and trajectories (e.g., 500 meters along a route), and the query will only be sent to the mobile devices in that space.
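
To make the region types concrete, here is a minimal sketch of how such location-oriented targeting could be evaluated against a device's reported coordinates (my own illustration; TSCOPE's real geospatial engine is not described in the abstract, so the function names and thresholds below are assumptions):

```python
# Hedged sketch of the kind of geospatial targeting TSCOPE describes:
# deciding whether a device's reported location falls inside a query region.
# The three region types mirror the abstract (polygon, radius, corridor);
# all names and thresholds here are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def in_polygon(point, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (lat, lon) vertices."""
    lat, lon = point
    inside = False
    for (lat1, lon1), (lat2, lon2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (lon1 > lon) != (lon2 > lon):
            if lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                inside = not inside
    return inside

def in_radius(point, center, km):
    """True if the point lies within km of a center point."""
    return haversine_km(*point, *center) <= km

def in_corridor(point, route, km):
    """True if the point lies within km of any vertex of a densely sampled route."""
    return any(haversine_km(*point, *vertex) <= km for vertex in route)
```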

In this talk, we will describe the software architecture and component technologies used to build TSCOPE and present a simple demo in the context of searching for healthcare providers in a foreign area without prior knowledge of their contact information or locations.

Presenter bio: Dr. Kang-Won Lee is a Research Staff Member and Manager of the Wireless Networking Group at IBM T. J. Watson Research Center in NY. Since he joined IBM Research in 2000, he has worked on numerous research projects on wireless networks, network management, cloud computing, and policy technologies. Kang-Won Lee is PI for the NIST Project on Measurement Science for Cloud Computing (2010 - 2012), and is an Industrial Technical Area Leader for the International Technology Alliance for Network and Information Science (ITA) jointly funded by the US Army and UK MOD (2006 - present). He led the Smarter Wireless and Appliance Big Bet at IBM Research, coordinating the research efforts of 100+ researchers at global labs to develop new technologies to handle fast-growing mobile data more efficiently and intelligently. He is leading a Strategic Initiative in the same area in 2012. Kang-Won Lee received a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2000, and received a B.S. and an M.S. from Seoul National University in 1992 and 1994, respectively. He has been awarded the prestigious C.W. Gear Award (1999), an IBM Research Division Award (2003), and an IBM Outstanding Technical Achievement Award (2007). He has published more than 80 papers in top conferences and journals and has generated more than 50 intellectual properties. He has served as Editor of the Journal of Computer Networks, Secretary of IEEE TCCC (Technical Committee on Computer Communications), and President of the Korean Computer Scientists and Engineers Association in America (KOCSEA). Kang-Won has been Program co-chair for the UKC IST and KOCSEA symposia and a workshop chair for IEEE SECON. He has served on the TPC of numerous conferences including IEEE INFOCOM, IEEE SECON, IEEE Globecom, and IEEE ICC, and has served on NSF review panels. Kang-Won has mentored more than 20 graduate students. Kang-Won is a Distinguished Scientist of ACM and a Senior Member of IEEE.
Lessons Learned from the 9/11 Attacks
Jennifer Rexford (Princeton University, USA)

In the aftermath of the terrorist attacks on September 11, 2001, the Computer Science and Telecommunications Board (CSTB) of the National Academies formed a committee to assess the Internet's performance and reliability on that day, and to offer suggestions for ways to handle future emergencies. In 2003, the committee published a report "The Internet Under Crisis Conditions: Learning from September 11, 2001" (http://www.nap.edu/openbook.php?isbn=0309087023) that gave a detailed timeline of the impact of the attacks on the Internet infrastructure and on the ways people used Internet services. This talk presents an overview of the CSTB report, and the lessons learned about ways to improve the resilience of the Internet during major emergencies.

Presenter bio: Jennifer joined the Network Systems Group of the Computer Science Department at Princeton University in February 2005 after eight and a half years at AT&T Research. Her research focuses on Internet routing, network measurement, and network management, with the larger goal of making data networks easier to design, understand, and manage. Jennifer is co-author of the book Web Protocols and Practice: HTTP/1.1, Networking Protocols, Caching, and Traffic Measurement (Addison-Wesley, May 2001) and co-editor of She's an Engineer? Princeton Alumnae Reflect (Princeton University, 1993). Jennifer served as the chair of ACM SIGCOMM from 2003 to 2007, and has served on the ACM Council and the Board of Directors of the Computing Research Association. She received her BSE degree in electrical engineering from Princeton University in 1991, and her MSE and PhD degrees in computer science and electrical engineering from the University of Michigan in 1993 and 1996, respectively. She was the winner of ACM's Grace Murray Hopper Award for outstanding young computer professional of the year for 2004.
Smart Grid Thinking, Innervation, and Infrastructure Threats
Albert G Boulanger (Columbia University & World Team Now, USA); Doug Riecken (Columbia University, USA)

Aspects of the Smart Grid concept enhance the robustness of the grid to both natural and man-made threats. Many smart grid implementations start with visibility of the last mile through smart meter deployments and ways to communicate with the last mile. This is an important step toward the smart grid, but it is not sufficient to make a grid self-healing or to realize the other facets of a smart grid. Making the grid adaptive, so that it can respond to and even anticipate failures, is another key component. So are grid-scale and building/campus-scale storage technologies. And ultimately, how do we make the grid smart? How will current approaches to building out smart grids have to change? Below are some key aspects we have been addressing:

  1. An approach to innervation -- that is, the wiring and sensing of the last mile and distribution level of the grid so that the whole grid is sensed. Disruptive technology will emerge in the world of traditional SCADA systems so that sensors are self-locating, self-organizing, self-identifying, and the data they produce "self-storing". This will be true of all kinds of infrastructure grids -- from transportation to water -- including the fusion of data across types of infrastructure.

  2. In order to be more intelligent about the control of the grid, it needs to be modeled. Learning models from data is an important part of achieving the level of modeling needed. ML/KDD is a useful approach complementing traditional power flow models. In order to react adaptively and intelligently, the tools of multistage decisions under uncertainty, such as approximate dynamic programming or stochastic programming, can be applied (a toy sketch of such a model appears after this list).

  3. In order to accept intermittent sources of power, the grid needs "elasticity". This is addressed with electrical storage technologies at different levels of the grid, from the end customer through distribution up to transmission.
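
As a toy sketch of the multistage-decision tools mentioned in item 2 (a generic finite-horizon dynamic program, not the actual models used at CCLS or Con Edison; the states, transition probability, and costs are invented):

```python
# Toy multistage repair/dispatch decision solved by backward dynamic
# programming. All numbers are illustrative assumptions, not real feeder data.
from functools import lru_cache

P_FAIL_IF_WAIT = 0.2   # assumed chance a healthy component fails next stage
REPAIR_COST = 4.0      # assumed cost of dispatching a repair crew
OUTAGE_COST = 10.0     # assumed cost per stage of a failed component
HORIZON = 5            # number of decision stages

@lru_cache(maxsize=None)
def expected_cost(state, t):
    """Minimum expected cost from stage t onward, starting in `state`."""
    if t == HORIZON:
        return 0.0
    if state == "ok":
        # Only sensible action is to wait; the component may still fail.
        return (P_FAIL_IF_WAIT * expected_cost("failed", t + 1)
                + (1 - P_FAIL_IF_WAIT) * expected_cost("ok", t + 1))
    # Failed: pay the outage cost this stage, then either wait or repair.
    wait = OUTAGE_COST + expected_cost("failed", t + 1)
    repair = OUTAGE_COST + REPAIR_COST + expected_cost("ok", t + 1)
    return min(wait, repair)

print("Expected cost if the component starts failed:", expected_cost("failed", 0))
```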

I will elaborate on these points by discussing both research and policy issues I have been involved with at the Center for Computational Learning Systems and World Team Now.

Presenter bio: Albert Boulanger (M.S. CS Univ. of Illinois 1983) has been at Columbia University since 1994 and is Senior Staff Associate with the Center for Computational Learning Systems (CCLS) of Columbia University. He is also a co-founder of CALM Energy, Inc. and Director of Technical Strategy and board member of World Team Now, a nonprofit environmental and social organization, and founding member of World-Team Building, LLC. From 2000-2002, he held the CTO position of vPatch Technologies, Inc., a startup company commercializing a computational approach to efficient production of oil from reservoirs based on time-lapse 4D seismic technologies, while on a leave of absence from Columbia. Prior to his position at Columbia, Albert was a research scientist at Bolt, Beranek and Newman. Albert has played multiple technical and oversight roles in several Con Edison projects while at Columbia. He is valued for his ability to maintain a systems view of all the facets of large projects. His expertise includes systems integration, expert and knowledge-based systems, machine learning and pattern recognition -- including the interface between numerical and symbolic algorithms, parallel computing, pattern recognition applied to time-lapse seismic data, computer representations of complex scientific and engineering objects, visualization, distributed systems and interoperability. Since 2005, Albert has applied machine learning to studying failure patterns of electric power distribution feeders and their components for Con Edison. More recently Albert was involved in a Dept. of Energy-funded, Con Edison-led Smart Grid project to apply Dynamic Treatment Regimes to formulate optimized repair policies for power distribution components and another Smart Grid project to use Approximate Dynamic Programming for optimizing load curtailment decisions in distribution networks. He is currently involved in using machine learning for optimizing charging of electric delivery trucks and efficient intelligent energy management of large NYC buildings.

Wednesday, February 6 1:00 - 2:30

Backbone and wireline access

Network Adaptability from Disaster Disruptions and Cascading Failures
Biswanath Mukherjee (University of California, Davis, USA)

Recent disasters such as Hurricane Sandy demonstrate that our network infrastructures need to be better prepared for disasters, and they require intelligent and efficient recovery methods during post-disaster events. Note that telecom backbone networks employ optical mesh structures to provide highly-scalable connectivity across large distances; and these networks along with their "higher-layer" (virtual) networks (e.g., IP, MPLS, SONET, Ethernet, ATM) are integral to our economic well-being and national security because they are widely deployed in commercial and defense sectors to support many aspects of our daily life, cloud computing, battlefield surveillance/backhaul, etc. Thus, the need for survivability against disasters is acute, given the scale and criticality of these networks.

Techniques exist (and are implemented in operational networks) to provide fast protection at the optical and other layers, but they are optimized for limited faults without addressing the extent of disasters. Typically, failures caused by disasters are correlated and cascading, and are much more dynamic than failures recoverable by known techniques. So, there is a pressing need for novel, robust survivability methods to mitigate the effects of disasters on backbone and virtual networks. To address this challenging problem without incurring excessive redundancy, we are conducting research sponsored by the Defense Threat Reduction Agency (DTRA) to investigate methods to provide effective protection against disasters and adaptive network algorithms during disaster events by developing protection techniques for dynamic re-provisioning, multipath routing, and data replication. In particular, we are developing methods for the following: (1) Normal Disaster Preparedness (by accounting for risk of disasters in different parts of the infrastructure); (2) Enhanced Disaster Preparedness (under more-accurate intelligence on potential disasters); and (3) Post-Disaster Service Survivability. Note that while traditional approaches focused on protecting links and nodes (routers, switches, etc.) to provide "network connectivity", the shifting paradigm towards cloud computing/storage requires that we protect the data (or content), so we have developed the concept of "content connectivity" and methods to achieve this.
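
A hedged sketch of the "content connectivity" idea, under a simplified reading of the abstract (the topology, replica placement, and failure set below are assumptions, and the actual work uses optical-layer network models rather than this toy graph check): after removing the failed elements, every surviving node should still reach at least one replica of the content.

```python
# Simplified illustration of "content connectivity": every surviving node
# should still reach at least one data-center replica after a disaster.
# Topology, replica placement, and the failed set are assumptions.
from collections import deque

def reachable(graph, start, failed):
    """BFS over the surviving part of an undirected graph (adjacency dict)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in failed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def content_connected(graph, replicas, failed):
    """True if every surviving node can reach some surviving replica."""
    live_replicas = [r for r in replicas if r not in failed]
    for node in graph:
        if node in failed:
            continue
        if not any(r in reachable(graph, node, failed) for r in live_replicas):
            return False
    return bool(live_replicas)

# Example: a six-node ring with replicas at A and D; losing node C and
# replica D still leaves every survivor connected to replica A.
ring = {"A": ["B", "F"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "E"], "E": ["D", "F"], "F": ["E", "A"]}
print(content_connected(ring, replicas={"A", "D"}, failed={"C", "D"}))  # True
```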

Normal Preparedness: Network operators should proactively take necessary actions to minimize network disruptions and data loss in case of a disaster. Knowledge of possible disaster zones (i.e., risk information) would help to utilize network resources and disseminate data accordingly. For instance, seismic hazard maps would be useful to determine vulnerable parts of the networks and to define the risk of connections traversing these vulnerable parts in case of an earthquake. Similar methods can better prepare our military networks against their threats.

Enhanced (Better) Preparedness: If a disaster is predicted through scientific measurements and observations (e.g., the possible path and estimated time of landfall of a hurricane forming over the ocean can be known days in advance), the network can be better prepared by re-allocating network resources and re-disseminating data, and possibly by relocating hardware resources as well.

Post-Disaster Events: After a disaster or attack, network resources can become limited, and if full bandwidth cannot be guaranteed, services should be provided with as much bandwidth as possible (degraded services). Disrupted connections can be re-provisioned on the surviving network resources. Information on network recovery (e.g., through the FCC's Disaster Information Reporting System (DIRS)) gives insight into network status, and this information can be exploited to better match services to the current state of the network.

Methods that prepare the network for possible disasters, improve preparedness for predicted disasters, and provide some minimal level of service after a disaster, supporting critical operations and recovering services while the network is being restored, can significantly improve network robustness to disasters. If given an opportunity, we would be delighted to share our research activities and findings from our DTRA project at the upcoming FCC workshop on disaster preparedness.

Presenter bio: Biswanath (Bis) Mukherjee is Distinguished Professor at the University of California, Davis, where he was Chairman of the Department of Computer Science during 1997-2000 and held the Child Family Professorship during 2006-11. He received the BTech degree from the Indian Institute of Technology, Kharagpur (1980) and his PhD from the University of Washington, Seattle (1987). He was General Co-Chair of the IEEE/OSA Optical Fiber Communications (OFC) Conference 2011, Technical Program Co-Chair of OFC'2009, and Technical Program Chair of the IEEE INFOCOM'96 conference. He is Editor of Springer's Optical Networks Book Series. He has served on 12 journal editorial boards, most notably IEEE/ACM Transactions on Networking, IEEE Network, and IEEE Communications Surveys and Tutorials, and has served as Guest Editor for Special Issues of Proceedings of the IEEE, IEEE/OSA Journal of Lightwave Technology, IEEE Journal on Selected Areas in Communications, and IEEE Communications. He has supervised over 60 PhDs to completion and currently mentors 15 advisees, mainly PhD students. He is co-winner of Best Paper Awards at the Optical Networking Symposium in IEEE Globecom 2007 and 2008; at the IEEE ANTS 2011 and 2012 conferences; and at the 1991 and 1994 National Computer Security Conferences. He is author of the graduate-level textbook Optical WDM Networks (Springer, January 2006). He served a 5-year term on the Board of Directors of IPLocks, a Silicon Valley startup company. He has served on the Technical Advisory Boards of several startup companies, including Teknovus (acquired by Broadcom). He is an IEEE Fellow.
The Vulnerability of Fiber Networks and Power Grids to Geographically Correlated Failures
Gil Zussman (Columbia University, USA)

Recent massive failures of the power grid demonstrated that large-scale and/or long-term failures will have devastating effects on almost every aspect of modern life, as well as on interdependent networks. In particular, telecommunications networks are vulnerable due to their strong dependence on power networks. Communication and power networks are vulnerable to natural disasters, such as earthquakes, floods, hurricanes, and solar flares, as well as to physical attacks, such as an electromagnetic pulse (EMP) attack. Such real-world events happen in specific geographical locations and hence cause geographically correlated failures. Therefore, the geographical layout of the network determines the impact of such events.

This talk will focus on our recent results regarding the vulnerability of telecommunications networks and power grids to geographically correlated failures. We will present methods to identify the locations most vulnerable to large-scale disasters. Our approach allows for identifying locations which require additional protection efforts (e.g., equipment shielding). Moreover, it may provide input to network and protocol designs that could mitigate the impact of geographical disasters or attacks.
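
As a purely illustrative sketch of this style of analysis (not the authors' method; the planar kilometer coordinates, the disk-shaped disaster model, and all inputs below are assumptions), one can scan candidate disaster centers and count how many fiber links each would cut:

```python
# Toy geographic-vulnerability scan: for each candidate disaster center,
# count the fiber links intersected by a disk of radius r (planar km
# coordinates; all inputs are illustrative assumptions).
from math import hypot

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b in the plane."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def worst_case_center(links, candidates, radius_km):
    """Return the candidate center whose disk cuts the most links."""
    def cuts(center):
        return sum(point_segment_distance(center, a, b) <= radius_km
                   for a, b in links)
    return max(candidates, key=cuts)

# Example with three links and a coarse grid of candidate centers.
links = [((0, 0), (100, 0)), ((0, 0), (0, 100)), ((100, 0), (100, 100))]
grid = [(x, y) for x in range(0, 101, 25) for y in range(0, 101, 25)]
print(worst_case_center(links, grid, radius_km=30))
```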

Presenter bio: Gil Zussman received the Ph.D. degree in Electrical Engineering from the Technion in 2004. Between 2004 and 2007 he was a Postdoctoral Associate at MIT. Since 2008 he has been on the faculty of the Department of Electrical Engineering at Columbia University where he is currently a Professor. His research interests are in the areas of wireless, mobile, and resilient networks. He is a co-recipient of 5 best paper awards including the ACM SIGMETRICS 2006 Best Paper Award and the 2011 IEEE Communications Society Award for Advances in Communication. He was a member of a team that won the 1st place in the 2009 Vodafone Americas Foundation Wireless Innovation competition and is a recipient of the Fulbright Fellowship, two Marie Curie Fellowships, the DTRA Young Investigator Award, and the NSF CAREER Award.
Diverse Network Infrastructure for Resilience and Rapid Recovery from Large-Scale Disasters
James P. G. Sterbenz (University of Kansas, USA)

The Internet is a critical infrastructure on which we depend, and thus it is essential that it be resilient such that it continues to provide service in the face of various challenges, including attack and large-scale disasters such as earthquakes, hurricanes, tsunamis, and coronal mass ejections [1, 2]. A major aspect of our previous and current work on achieving resilience has centered on providing diversity in the network such that when part of the network fails, alternatives will be available to continue operation. This includes heterogeneity and diversity in mechanism (for example wired and wireless) [3], rich topology interconnection, and structural diversity of the network graph such that paths can be constructed that do not share fate when network components fail [4]. We have developed a set of analytical and simulation techniques and tools to generate network topologies, and to analyse the resilience of real and synthetic network graphs [5]. We have also made our data and topology viewer publicly available at http://www.ittc.ku.edu/resilinets/maps. A key aspect of this ongoing work is the multilevel nature of the analysis: in particular, attacks against the physical infrastructure must be modelled on the physical layer graph (fiber interconnection for the typical backbone) but their effects analysed on the IP network layer graph overlay. Under new funding from NSF NeTS (in collaboration with Deep Medhi at UMKC), we are exploring geographic diversity and its impact on traffic load, such that networks can be designed to survive a large-scale disaster of a given scope. For example, an application should be able to specify: give me three multipath routes over which communication can be erasure coded such that the paths are no closer than 100 km (except at the source and destination). This example defends against a disaster less than 100 km in diameter. While we work to understand how to generate networks with desired graph-theoretical, diversity, and resilience properties, the reality is that even if adopted, there will be cases where disasters will partition the network, either because the area is greater than anticipated, or because cost constraints have not permitted the deployment of sufficiently resilient infrastructure.
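
A minimal sketch of the 100 km separation constraint described above, under simplifying assumptions (planar kilometer coordinates and node-to-node distances rather than the project's actual graph tools and geodesic distances):

```python
# Check that two candidate routes stay at least `min_km` apart everywhere
# except at the shared source and destination. Nodes carry planar (x, y)
# coordinates in km; the routes and threshold are illustrative assumptions.
from math import hypot
from itertools import product

def paths_geodiverse(path_a, path_b, coords, min_km=100.0):
    """path_a, path_b: node-name lists sharing only endpoints; coords: name -> (x, y)."""
    shared = {path_a[0], path_a[-1]}
    interior_a = [n for n in path_a if n not in shared]
    interior_b = [n for n in path_b if n not in shared]
    for u, v in product(interior_a, interior_b):
        (x1, y1), (x2, y2) = coords[u], coords[v]
        if hypot(x1 - x2, y1 - y2) < min_km:
            return False
    return True

coords = {"src": (0, 0), "n1": (50, 150), "n2": (150, 150),
          "m1": (50, -150), "m2": (150, -150), "dst": (200, 0)}
print(paths_geodiverse(["src", "n1", "n2", "dst"],
                       ["src", "m1", "m2", "dst"], coords))  # True: 300 km apart
```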

Thus, we are beginning research on how to optimally and rapidly deploy infrastructure after a disaster, in particular, to restore services outside the disaster area, to rapidly deploy assets to permit assessment of the damage to the environment and network, and to rapidly and optimally deploy infrastructure to restore critical network infrastructure to the affected area. This is joint work with Chinese institutions (Jiannong Cao at Hong Kong Poly and Jinyao Yan at CUC Beijing) as a result of our participation in the US NSF / China NSFC Workshop on Environmental Monitoring or Public Health and Disaster Recovery.

References:

[1] James P. G. Sterbenz, Rajesh Krishnan, Regina Rosales Hain, Alden W. Jackson, David Levin, Ram Ramanathan, and John Zao. Survivable mobile wireless networks: issues, challenges, and research directions. In Proceedings of the 3rd ACM workshop on Wireless Security (WiSE), pages 31-40, Atlanta, GA, 2002.

[2] James P. G. Sterbenz, David Hutchison, Egemen K. Cetinkaya, Abdul Jabbar, Justin P. Rohrer, Marcus Schöller, and Paul Smith. Resilience and survivability in communication networks: Strategies, principles, and survey of disciplines. Computer Networks, 54(8):1245-1265, 2010.

[3] James P. G. Sterbenz, David Hutchison, Egemen K. Cetinkaya, Abdul Jabbar, Justin P. Rohrer, Marcus Schöller, and Paul Smith. Redundancy, Diversity, and Connectivity to Achieve Multilevel Network Resilience, Survivability, and Disruption Tolerance (invited paper). Springer Telecommunication Systems, 2012. (accepted April 2012).

[4] Justin P. Rohrer, Abdul Jabbar, and James P.G. Sterbenz. Path diversification for future internet end-to-end resilience and survivability. Springer Telecommunication Systems, 2012. (accepted April 2012).

[5] James P.G. Sterbenz, Egemen K. Cetinkaya, Mahmood A. Hameed, Abdul Jabbar, Qian Shi, and Justin P. Rohrer. Evaluation of Network Resilience, Survivability, and Disruption Tolerance: Analysis, Topology Generation, Simulation, and Experimentation (invited paper). Springer Telecommunication Systems, pages 1-32, 2011. Published online: 7 December 2011.

Presenter bio: James P.G. Sterbenz is Associate Professor of Electrical Engineering & Computer Science and a member of technical staff at the Information & Telecommunication Technology Center at the University of Kansas, and is a Visiting Professor of Computing in InfoLab 21 at Lancaster University in the UK. He has previously held senior staff and research management positions at BBN Technologies, GTE Laboratories, and IBM Research. His research interests include resilient, survivable, and disruption tolerant networking, future Internet architectures, active and programmable networks, and high-speed networking and components. He is director of the ResiliNets Research Group, PI in the NSF-funded FIND and GENI programs and the EU-funded FIRE ResumeNet project, leads the GpENI international programmable network testbed project, and has led a US DoD project in highly-mobile ad hoc disruption-tolerant networking. He received a doctorate in computer science from Washington University in 1991. He has been program chair for IEEE NGNI, GI, GBN, and HotI; IFIP IWSOS, PfHSN, and IWAN; is on the editorial board of IEEE Network, and is chair of IEEE ComSoc TCCC and formerly TCGN. He is principal author of the book High-Speed Networking: A Systematic Approach to High-Bandwidth Low-Latency Communication.

Building Robust Cellular Networks
Shivendra Panwar (New York University & Tandon School of Engineering, USA)

The convergence of the cellular industry on a common standard, 4G LTE, offers an opportunity, heretofore unavailable, to build robustness across competing carriers during man-made and natural disasters. Aggregating resources, be they on the airlink or the backhaul, would allow emergency communications to be provided to customers. On the airlink, we will show that if two identical carriers pool their resources, they can double the capacity delivered to each customer, even as their combined pool of customers doubles. Equivalently, this means near normal service can be maintained even if half the cell towers are lost during a disaster. For backhaul, opening residential femtocells to all cellular users would allow islands of service even if all macrocells and/or their backhaul networks failed. These are two examples of using the diversity across wireless and wireline carriers to provide inherent robustness in service. Clearly a regulatory and policy environment that encourages cooperation between competing service providers would be needed to bring this vision to fruition. An advantage of this approach is the relatively modest incremental cost.
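
One generic way to see a pooling benefit, offered only as a classical trunking-gain illustration and not as the capacity analysis the talk presents (the channel counts and offered loads are assumed numbers): a single pool of 2C channels carrying both carriers' combined load blocks far fewer call attempts than two separate pools of C channels each.

```python
# Generic Erlang-B illustration of trunking gain from pooling two carriers'
# channels (not the speaker's LTE analysis; the numbers are assumptions).
def erlang_b(channels, offered_erlangs):
    """Blocking probability via the standard Erlang-B recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

C, A = 50, 40.0                      # channels and offered load per carrier
separate = erlang_b(C, A)            # each carrier operating alone
pooled = erlang_b(2 * C, 2 * A)      # shared spectrum, combined customer base
print(f"blocking alone: {separate:.3f}, pooled: {pooled:.3f}")
```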

Presenter bio: Shivendra S. Panwar is a Professor in the Electrical and Computer Engineering Department at New York University. He is currently the Director of the New York State Center for Advanced Technology in Telecommunications (CATT). His research interests include the performance analysis and design of networks. Current work includes cooperative wireless networks, switch performance and multimedia transport over networks. He is an IEEE Fellow and has served as the Secretary of the Technical Affairs Council of the IEEE Communications Society. He has co-authored TCP/IP Essentials: A Lab based Approach, published by the Cambridge University Press. He was awarded, along with Shiwen Mao, Shunan Lin and Yao Wang, the IEEE Communication Society's Leonard G. Abraham Prize in the Field of Communication Systems for 2004. He was also awarded the 2011 IEEE Multimedia Communications Award.
FTTH technology in the Aftermath of Sandy
Peter Vetter (Nokia, USA)

This talk will discuss the benefits of fiber to the home (FTTH) as a reliable and energy efficient solution for broadband access. Fiber optic cables are more robust against floods than copper cables. The replacement of damaged copper cables and considerations to install utility cables underground in the aftermath of super storm Sandy provide an opportunity to roll out a future proof wireline infrastructure. A passive optical network (PON) does not require active equipment in the outside plant, which avoids the need for powering remote nodes and reduces the risk of failures. PON is the most energy efficient broadband access technology thanks to the low signal attenuation of the fiber transmission medium and the sharing of a single interface in the central office by multiple subscribers.

We will discuss research directions pursued in the GreenTouch research consortium to drastically reduce the power consumption even further. This will extend the service availability for a given power back-up capacity at the central office and customer premises in case of a power outage. An improved energy efficiency of the customer premises equipment (CPE) enables new power back-up approaches, such as easy-to-replace consumer batteries or small solar cells.
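
A hedged back-of-envelope of why lower CPE power draw extends back-up time (the wattages and battery capacity below are assumptions, not GreenTouch figures): runtime scales inversely with the CPE's power consumption.

```python
# Back-of-envelope battery runtime for an FTTH CPE (assumed numbers only).
battery_wh = 40.0                      # e.g., a small consumer battery pack
for cpe_watts in (10.0, 2.0, 0.5):     # today's ONT vs. progressively leaner designs
    hours = battery_wh / cpe_watts
    print(f"{cpe_watts:>4} W CPE -> about {hours:.0f} hours of standby service")
```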

Presenter bio: Peter Vetter is Department Head for Access Systems in Bell Labs Murray Hill. He received a PhD from Gent University in 1991 and worked as post-doc at Tohoku University, before joining the research centre of Alcatel (now Alcatel-Lucent) in Antwerp in 1993. In 2000, he became R&D lead for BPON in an Internal Venture that produced the first FTTH product in Alcatel. He also managed various European Research Projects, including the integrated project IST-MUSE. During his career, he has been interested in liquid crystal displays, optical interconnections, optical access, access platforms, access architectures, netcomputing for residentials, and energy efficient access.

Wednesday, February 6 3:00 - 4:30

Wireless networks, first responders, social networks

Managing Interoperability: What Happens When We Succeed?
Art Botterell (Carnegie Mellon University Silicon Valley, USA)

As interoperability tools and new technologies such as LTE and FirstNet deploy, responders gain great flexibility. But with that flexibility comes new responsibility.

As we move public safety communications away from the traditional hard-wiring of fixed channel and talkgroup assignments, we also remove a great deal of the implicit structure and context that made network management and use easier. Things like namespace management (i.e., callsigns) and network protocols (procedures and codes) can no longer be treated as merely local concerns.

Suddenly--when we can connect, for example, the FBI and local police instantly and seamlessly by radio--we must address a variety of new issues. Do the two groups of users know each other's callsigns and assignments? Do they understand each other's radio codes and procedures? For that matter, how do they find each other in the first place? Where's the phone book? Or is it more like Google? And who gets access to what, and who says so?

While this is a relatively new class of problems in the Land Mobile Radio domain, it's familiar ground in the Information Technology discipline of Embedded Systems and Web Services. Challenges of "service discovery" and "directory services," for example, have been addressed in a variety of ways in data systems, some of which may serve as examples for needed support services in our emerging emergency communication networks.
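
As a toy sketch of what such a "phone book" might look like (an invented, minimal registry for illustration only, not any FirstNet, LMR, or web-services standard):

```python
# Minimal, invented directory-service sketch for interoperable talkgroups:
# agencies register callsigns and capabilities; others discover them by need.
from collections import defaultdict

class TalkgroupDirectory:
    def __init__(self):
        self._by_capability = defaultdict(list)

    def register(self, agency, callsign, capabilities, talkgroup):
        entry = {"agency": agency, "callsign": callsign, "talkgroup": talkgroup}
        for cap in capabilities:
            self._by_capability[cap].append(entry)

    def discover(self, capability):
        """Return registered entries offering a capability, e.g. 'search-rescue'."""
        return list(self._by_capability.get(capability, []))

directory = TalkgroupDirectory()
directory.register("FBI Field Office", "F-1", ["investigation"], "TG-201")
directory.register("City PD", "P-7", ["search-rescue", "traffic"], "TG-114")
print(directory.discover("search-rescue"))   # -> City PD entry with its talkgroup
```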

Presenter bio: Art Botterell is an expert practitioner in the field of emergency information and public warning systems, with more than four decades' experience in government public safety and disaster response and who also has extensive private sector and media experience. He conceived and led the development of the international Common Alerting Protocol interoperability standard (ITU recommendation x.1303). He was a member of the U.S. Federal Communications Commission's design panel for wireless emergency alerts and has served on a number of study panels for the National Academies of Science. Mr. Botterell has designed, deployed and operated a variety of advanced public warning and emergency public information systems for the U.S. Federal Emergency Management Agency and Department of Defense as well as for state and local agencies and the United Nations Development Program. He has consulted on national-level disaster management systems in the U.S., Asia, Australia, Europe and the Caribbean. In addition Mr. Botterell has experience as a journalist, a radio and television producer, and an award-winning Internet content producer.
Survivable Social Network
Bob Iannucci (Carnegie Mellon University & RAI Laboratory LLC, USA)

As a direct consequence of the Loma Prieta earthquake in 1989, 154 of 160 telephone central offices in Northern California lost primary power, and some of these also lost their backup power. The impacted communities had an expectation of rapid restoration of communication services - in that day and age, this meant voice calling over POTS circuits. Amateur radio operators were able to provide supplementary voice communications during the recovery efforts because individuals had taken steps to become licensed, build skills and prepare equipment that would operate in the absence of communications infrastructure.

But since that time, both the expectations and the infrastructure have changed in ways that have reduced communications resilience for communities. The rise of the internet and, more recently, of smartphones and social networks has shifted expectations toward mobile IP-based data services (e.g., exchanging health and welfare information from phones via text and images; vesting the storage of critical banking and health information in online services). And the nature of this new infrastructure is inherently less robust than central-office-based POTS. Cell sites are constructed with limited backup power (reportedly, more than 25% of the cell sites in the affected area went down during Sandy). Wireless technologies have unique vulnerabilities. Cellular signaling is particularly prone to overloading. Consumer-grade internet services are not engineered to pre-divestiture AT&T standards.

While full restoration of internet access will be important, it stands to reason that restoring communication infrastructure within an impacted community using familiar-feeling communications tools will be among its top priorities in an emergency situation.

The Survivable Social Network (SSN) project at Carnegie Mellon University is developing a solution to this problem. Our approach addresses, among other things, (a) rapidly-deployed replacement infrastructure that can meet basic communication needs and (b) a rich and flexible means for authorities to communicate with citizens.

Presenter bio: Dr. Bob Iannucci is Director of the CyLab Mobility Research Center at Carnegie Mellon University Silicon Valley and is known for leading both software and systems research in scalable and mobile computing. Most recently, he served as Chief Technology Officer of Nokia and Head of Nokia Research Center (NRC). Bob spearheaded the effort to transform NRC into an Open Innovation center, creating "lablets" at MIT, Stanford, Tsinghua University, the University of Cambridge, and École Polytechnique Fédérale de Lausanne (EPFL). Under his leadership, NRC's previously established labs and the new lablets delivered fundamental contributions to the worldwide Long Term Evolution for 3G (LTE) standard; created and promulgated what is now the MIPI UniPro interface for high-speed, in-phone interconnectivity; created and commercialized Bluetooth Low Energy - extending wireless connectivity to coin-cell-powered sensors and other devices; and delivered new technology initiatives including TrafficWorks (using mobile phones to crowd-source traffic patterns), Point and Find (augmented reality using the mobile phone’s camera for image recognition and “zero click” search) and the Morph Concept (opening new directions for using nanotechnology to significantly improve mobile phone functionality and usability). Previously, Bob led engineering teams at startup companies focused on virtualized networking and computational fluid dynamics, creating systems that offered order-of-magnitude improvements over alternatives. He also served as Director of Digital Equipment Corporation's Cambridge Research Laboratory (CRL) and became VP of Research for Compaq. CRL created some of the earliest multimedia indexing technologies, and these became part of Alta Vista. In addition, the CRL team and Dan Siewiorek's team at CMU created MoCCA - a mobile communication and computing architecture - that prefigured and anticipated (by more than a decade) much of what has become today's smartphone technology. MoCCA won the IDEA Gold award for its innovative approach to facilitating real-time interaction within teams. Bob spent the earliest days of his career at IBM studying and developing highly scalable computing systems. Bob remains active as a hands-on systems builder. His most recent iPhone app for radio direction finding is in use in over 70 countries, and he is actively engaged in building WiFi-based "internet of things" devices and the cloud services behind them. He serves as advisor to companies developing new technologies for ultra-low-power computing, mobile video systems, and cloud-connected mobile apps. Bob earned his Ph.D. from MIT in 1988, and his dissertation was on the hybridization of dataflow and traditional von Neumann architectures, offering advantages over both. He has served on a number of scientific and engineering advisory boards and was on the program committees for the 3rd and 4th International Symposia on Wearable Computing. Bob also served as a member of the selection committee for the Millennium Technology Prize in 2008.
Loss of power and communication: A first-hand account
Theodore Rappaport (New York University & NYU WIRELESS, USA)

The presentation will give an eye-witness account of the loss of power and connectivity in downtown Manhattan, as well as an observation of social behavior to find wireless connectivity in Manhattan the morning after.

Presenter bio: Theodore (Ted) S. Rappaport is the David Lee/Ernst Weber Professor of Electrical and Computer Engineering at the Polytechnic Institute of New York University (NYU-Poly) and is a professor of computer science at New York University's Courant Institute of Mathematical Sciences. He is also a professor of radiology at the New York University School of Medicine. Rappaport serves as director of the National Science Foundation (NSF) Industrial/University Collaborative Research Center for Wireless Internet Communications and Advanced Technology (WICAT), a national research center that involves five major universities and is headquartered at NYU-Poly. He is also the founder and director of NYU WIRELESS, the world's first academic research center to combine wireless engineering, computer science, and medicine. Earlier in his career, he founded two of the world's largest academic wireless research centers: The Wireless Networking and Communications Group (WNCG) at the University of Texas at Austin in 2002, and the Mobile and Portable Radio Research Group (MPRG), now known as Wireless@Virginia Tech, in 1990. Rappaport has over 100 U.S. or international patents issued or pending and has authored, co-authored, and co-edited 18 books in the wireless field, including Wireless Communications: Principles & Practice (translated into 6 languages), Principles of Communication Systems Simulation with Wireless Applications, and Smart Antennas for Wireless Communications: IS-95 and Third Generation CDMA Applications. He has received three prize paper awards, including the 1999 Stephen O. Rice Prize Paper Award from the IEEE Communications Society for his work on site-specific propagation.
Minimizing the Risk of Communication Failure
John Thomas (Sprint, USA)

Weather-related natural disasters occurring over the past two years have created an opportunity to examine assumptions regarding communications networks, interdependencies of communications platforms, and risk probability at a regional level. Sprint Nextel Corporation's ("Sprint") planning for network continuity and recovery is a constant process. Sprint maintains internal teams of personnel dedicated to ensuring continuity of service and expeditious network recovery. These expert teams adapt their forward-looking readiness and disaster response plans based on lessons learned from past events, including the weather events of 2011 and 2012.

Hurricanes Irene and Sandy were unique due to their extremely large footprints and severe impacts. Hurricane Irene made landfall in North Carolina and impacted every state from there through New England, while Hurricane Sandy impacted every state from the mid-Atlantic through New England. The frequency of such storms impacting the northeast is unprecedented. The scale, volume and frequency of these storms present a tremendous opportunity to analyze and plan for future events. Sprint's readiness, mobilization and deployment protocols were adapted between these storms, and Sprint continues to adapt its approach based on lessons from Hurricane Sandy. Sprint's presentation will focus on how wireless networks are affected differently by each disaster, and the vulnerabilities, risks and mitigation strategies that are involved in preparation and response.

Presenter bio: John is the Director of Sprint's Network Service Management Department. He is responsible for network performance analytics, network risk planning/mitigation and market-centric network communications. He previously served as the Director of Network Center Operations. John has also held leadership roles in transmission planning & engineering, network design, access management and traffic engineering. John began his telecommunications career with the Federal Communications Commission as an Information Specialist in the Field Operations Bureau, where he conducted FCC license examinations and assisted in radio inspection and enforcement activities. This led to a 25-year career at Sprint. John earned a bachelor's degree in communications from the University of Missouri.
Leveraging Diversity for Resiliency
Roch Guérin (Washington University in St. Louis, USA)

Diversity, or as it is sometimes called, redundancy, is a well-known approach for designing resilient systems. In the context of communications networks, it is typically synonymous with the availability of multiple, disjoint paths to a destination. In wireless settings, the availability of multiple paths arises naturally, and the challenge is in properly exploiting them. In large-scale wired networks such as the Internet, the distributed design decisions behind their deployment often make it difficult to even ascertain the presence of path diversity, and the many policy constraints under which the Internet operates can further limit it.
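
A small worked example of why disjoint paths help (the failure probabilities are assumed and treated as independent, which the correlated failures discussed at this workshop would violate):

```python
# If k disjoint paths fail independently with probability p each, the
# destination is unreachable only when all of them fail (assumed numbers).
p = 0.1
for k in (1, 2, 3):
    print(f"{k} disjoint path(s): outage probability {p ** k:.3f}")
# 1 path: 0.100, 2 paths: 0.010, 3 paths: 0.001
```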

In this talk, I will first briefly report on an investigation seeking to establish the level of path diversity available in the Internet, and review simple solutions to improve it. The results hint at the fact that Internet resiliency could be enhanced by modifying its protocols to better leverage existing diversity.

Next, I will outline a simple multipath protocol aimed at wireless mesh networks with variable link conditions, and show how its reliance on multiple paths can significantly improve transmission stability even under adverse link conditions.

Finally, I will outline why in spite of the significant benefits that multipath solutions can afford in both wired and wireless settings, fully realizing those benefits may call for coordinated changes across networks and end-systems.

Presenter bio: Roch Guérin received an engineering degree from ENST, Paris, France, and M.S. and Ph.D. degrees in Electrical Engineering from Caltech. He joined the Computer Science and Engineering department of Washington University in St. Louis as the Harold B. and Adelaide G. Welge Professor and department chair. Prior to joining Washington University, he was the Alfred Fitler Moore Professor of Telecommunications Networks in the Electrical and Systems Engineering department of the University of Pennsylvania. Before that, he spent many years at the IBM T. J. Watson Research Center in a variety of technical and management positions. From 2001 to 2004 he was on leave from Penn, starting Ipsum Networks, a company that pioneered the concept of route analytics for managing IP networks. Dr. Guérin has published extensively in international journals and conferences, and holds more than 30 patents. He has also been active in standards organizations such as the IETF, where he has co-authored a number of RFCs. His research is in the general area of networked systems and applications, from wired and wireless networks to social networks, and encompasses both technical and economic factors that affect network evolution. Dr. Guérin has been an editor for several ACM and IEEE publications and was the Editor-in-Chief of the IEEE/ACM Transactions on Networking from 2009 to 2012. He also served as General Chair or Program co-Chair for a number of ACM- and IEEE-sponsored conferences. Dr. Guérin is an ACM (2006) and IEEE (2001) Fellow. In 1994 he received an IBM Outstanding Innovation Award for his work on traffic management. He received the IEEE TCCC Outstanding Service Award in 2009, and was the recipient of the 2010 INFOCOM Achievement Award for “Pioneering Contributions to the Theory and Practice of QoS in Networks.” He was also the co-recipient of the 2010 INFOCOM Best Paper Award for the paper entitled “On the Feasibility and Efficacy of Protection Routing in IP Networks.” He was on the Technical Advisory Board of France Telecom for two consecutive terms from 2001 to 2006 and on the Technical Advisory Board of Samsung Electronics in 2003-2004. He joined the Scientific Advisory Board of Simula Research in 2010.
Climate Projections
Klaus Jacob (Columbia University, USA)

A brief summary of the climate projections for the NYC metropolitan region for the 2020s, 2050s and 2080s will be given, with particular emphasis on hurricane frequencies and on how sea level rise will drastically increase the frequency of severe coastal storm surges. These updated findings will be discussed in the context of the general findings in the New York State-issued, pre-Sandy climate change adaptation report ClimAID, Chapter 10: Telecommunications, accessible at http://www.nyserda.ny.gov/climaid.

Presenter bio: Klaus Jacob has worked at Columbia University for over forty years. He started as a research associate in geophysics and seismology at the Lamont-Doherty Earth Observatory of Columbia University (1968-73). From 1973 he was a Senior Research Scientist at LDEO, a position from which he retired in 2001. He is presently a part-time Special Research Scientist at LDEO and holds an Adjunct Professor position at SIPA (2000-present). He has also taught at the Department of Environmental Sciences, Barnard College (1999-2005), and the Graduate School for Architecture, Planning, and Preservation (2001-2003). Dr. Jacob's research career evolved from basic Earth sciences to disaster risk management, regulatory policies and infrastructure/urban development; he bridges the interface of Earth science, engineering and public affairs. His focus is climate change and earthquakes. His basic research in seismology and tectonics stretched over five continents. Dr. Jacob cofounded the National Center for Earthquake Engineering Research and contributed to the U.S. National Earthquake Hazard Reduction Program's National Seismic Hazard Maps. He coauthored the U.S. national model and the New York City seismic building codes. He worked with the emergency management communities at the federal, state, and local levels on risk mitigation strategies, including the recovery phase of the WTC disaster in NYC. He specializes in multi-hazard risk assessment, quantitative disaster loss estimation, and disaster mitigation research. Recent research includes risks from global climate change, sea level rise, coastal storm surges, flooding and inundation, primarily of infrastructure systems in global megacities, and the sustainability of cities vis-à-vis natural hazards. Dr. Jacob has (co-)authored more than 150 scientific and technical publications and book chapters. Sample publications are: “Responding to Climate Change in NY State”, Chapter 9: Transportation and Chapter 10: Telecommunications (NY Acad. Sc., 2011); “Climate Change Adaptation in NYC”, Chapter 7: Indicators and Monitoring (NYAS, 2010); “Potential Impacts of Climate Change on US Transportation” (NRC/NAS, 2008); “Vulnerability of the NYC Metropolitan Area to Coastal Hazards, Including Sea-Level Rise: Inferences for Urban Coastal Risk Management and Adaptation Policies” (Elsevier, 2006); “Multihazard Risks in Caracas, Venezuela”, Chapter 5 in ‘Natural Disaster Hotspots – Case Studies’ (World Bank, 2006); and “Futuristic Hazard and Risk Assessment: How do We Learn to Look Ahead (Invited Commentary)” in Natural Hazard Observer (July 2000). Past projects include “Urban Planning of a Disaster-Resilient Mega-City”, with a regional focus on Caracas, Venezuela (2001), Istanbul, Turkey (2002), and Accra, Ghana (2003), and a seismic hazard analysis for the Republic of Singapore. Dr. Jacob has testified before U.S. Congressional Committees. He works with professional organizations and the media. He is a member of the American Geophysical Union, the Seismological Society of America, the Earthquake Engineering Research Institute and the American Geological Institute. Mayor Bloomberg appointed him to the New York Panel on Climate Change (NPCC); he also served on NY State's Sea Level Rise Task Force. Professor Jacob earned a BS in Mathematics and Physics at the Technical University, Darmstadt (1960), an MS in Geophysics (1963) from Gutenberg University, Mainz, and a PhD (1968) from Goethe University, Frankfurt, all in Germany. He was a research associate in geophysics at the University of Frankfurt from 1964 to 1968, and a visiting scientist at the BP Research Center, Sunbury-on-Thames, U.K. (1963-64).
Case Study: Red Hook Initiative WiFi & Tidepools
Georgia Bullen (New America Foundation, USA)

Red Hook Initiative WiFi is a collaboratively designed community mesh network. It provides Internet access to a small area in the Red Hook section of Brooklyn, NY, and serves as a platform for developing local applications and services. Red Hook Initiative built the network in partnership with the Open Technology Institute, putting human-centered design and community engagement at the core of the project. Following Superstorm Sandy in November 2012, the community significantly expanded the network with help from recovery volunteers and FEMA. This case study shows how lightweight, community-built infrastructure can facilitate rapid emergency response because the social and technical infrastructure is already in place.

Presenter bio: Georgia Bullen is a field operations technologist with the Open Technology Institute at the New America Foundation. Based in NAF's New York office, Bullen provides usability, planning and geospatial analytical support, as well as data visualization skills, to the OTI team and its community partnerships. Previously, Bullen worked on data visualization projects in the areas of social media, transportation logistics, economic geography, urban flows, and other large-scale urban issues. Her work focuses on the intersection of human-centered design, urban space, and technology – specifically, how applied technologies can improve and facilitate the urban planning process, citizen access to technology resources and data, and the information systems that people use to interact with urban environments. Bullen holds a Master of Science in urban planning from Columbia University's Graduate School of Architecture, Planning and Preservation, and a Bachelor of Science in psychology and human-computer interaction from Carnegie Mellon. Her previous work and projects are available at http://georgiabullen.com.