Program for IT Performance & Capacity 2015 by CMG

Time Anacacho Peraux Draper Cavalier Naylor Jones Morrison

Monday, November 2

07:00 am-08:00 am Breakfast - Workshop Attendees Only
08:00 am-11:30 am WS1: Workshop: How to Do Performance Analytics with R WS2: Application Profiling - Telling a Story with Your Data WS3: VMware vSphere Capacity and Performance Essentials WS4: IT-Based Operational Risk Modeling Workshop WS5: Applying Analytics to Data Center Performance
11:45 am-12:45 pm Lunch - Workshop Attendees Only
01:15 pm-02:00 pm CMG 2015 Opening Session
02:00 pm-03:00 pm 271: Plenary Session: Integrating Software and Systems Performance Engineering Processes into Software Development Processes
03:00 pm-03:15 pm Break
03:15 pm-04:15 pm 281: I Feel the Need for Speed 282: Capacity Management Maturity: Assessing and Improving the Effectiveness 283: Historical Value-at-Risk Estimation: Performance Optimization on Multicore CPUs and GPUs 284: CMG-T: Analytics for Performance Analysis & Capacity Planning - Part 1 285: A Performance Model for SWIFT Store-and-Forward Platform 286: TBD  
04:15 pm-04:30 pm Break
04:30 pm-05:30 pm 291: Invited: Understanding the Performance and Management Implications of FICON Dynamic Routing 292: Managing the Datacenter as the Computer 293: Capacity Planning Model and Simulation (CAPSIM) for the Cloud 294: CMG-T: Analytics for Performance Analysis & Capacity Planning - Part 2 295: Multiple Dimensions of Load Testing 296: TBD  
05:30 pm-05:45 pm Break
05:45 pm-06:45 pm CMG Annual Business Meeting
07:00 pm-07:30 pm   Session Chair & Monitor Training First Timers Meeting/Orientation
07:30 pm-10:00 pm Welcome Reception

Tuesday, November 3

07:00 am-08:00 am Breakfast
08:00 am-09:00 am 301: Plenary Session: The Business Justification for Application Performance Management
09:00 am-09:15 am Break
09:15 am-10:15 am 311: Invited: Containers and Microservices Create New Performance Challenges 312: CMG-T: Modeling and Forecasting - Part 1 313: Capacity Planning with an Eye on Business Risk 314: CMG-T: z/OS Storage Performance: Tutorial Part 1 - The Basics 315: Invited: Tackling Big Data 316: Invited: Optimal Design Principles for Better Performance of Next generation Systems 317: Vendor Tools: IBM Performance Monitoring Introductory Workshop - Part 1
10:15 am-10:30 am Break
10:30 am-11:30 am 321: Invited: IBM z13 and I/O Enhancements 322: CMG-T: Modeling and Forecasting - Part 2 323: Top Interviewing Tips You Need to Know Now 324: CMG-T: z/OS Storage Performance: Tutorial Part 2 - All Flash or Autotiering Storage Systems? 325: Data Analytics: The Key to Successful Storage Management in Complex Virtualized Data Centers 326: Invited: Using R to Discover True Web System Performance 327: Vendor Tools: IBM Performance Monitoring Introductory Workshop - Part 2
11:45 am-12:45 pm Lunch
01:00 pm-02:00 pm 331: Resource Optimization for IaaS and SaaS Providers 332: CMG-T: Modeling and Forecasting - Part 3 333: Understanding VMware Capacity 334: CMG-T: z/OS Storage Performance: Tutorial Part 3 - Instrumentation in the Black Box 335: An Affordable Care Act Web Site Load Testing Experience 336: Invited: Testing the Performance of Mobile Apps 337: Vendor Tools: AppEnsure
02:00 pm-02:15 pm Break
02:15 pm-03:15 pm 341: Invited: Creating a Performance Testing Framework that Fits a Development Workflow 342: Invited: Performance Assurance for Big Data World 343: Invited: Software Memories, Simulated Machines 344: CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 1 345: Automatic Workload Characterization Using System Log Analysis 346: What Performance and Capacity People Need to Know About Intelligent BPM 347: Vendor Tools: What's New in ASG PERFMAN 2020
03:15 pm-03:45 pm Break
03:45 pm-04:45 pm 351: Rethinking Randomness: What you need to know 352: Developing Predictive and Prescriptive Business Analytics: A Case Study 353: Getting Performance Information from Oracle Infrastructure 354: CMG-T: Performance Engineering Guidelines for tuning multi-tier Applications - Part 2 355: Modeling the Tradeoffs Between System Performance and CPU Power Consumption 356: Performance Prediction for Enterprise Application Migration 357: Vendor Tools:z/OS Capping and Automation: What's in your Tool Box?
04:45 pm-05:00 pm Break
05:00 pm-06:00 pm 361: IT Capacity Management 101 362: PANEL: How Do You Manage Hybrid Applications in the Cloud 363: Invited: z/OS Performance HOT Topics 364: CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 3 365: Invited: Capacity Planning for Java Application Performance 366: Sense and Respond? Why Not Predict and Prevent? 367: Vendor Tools
06:00 pm-06:15 pm Break
06:15 pm-07:15 pm   BOFs / Exhibitor Presentation BOFs / Exhibitor Presentation BOFs / Exhibitor Presentation BOFs / Exhibitor Presentation    
07:30 pm-09:30 pm PARS

Wednesday, November 4

07:00 am-08:00 am Breakfast
08:00 am-09:00 am 401: Plenary Session: Network Performance Analysis Using Open Source - The Evolution of WireShark
09:00 am-09:15 am Break
09:15 am-10:15 am 411: Invited: Let's Put the "e" back in Testing 412: CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 1 413: Dealing with Fat Tailed Utilization Distributions and Long Term Correlation 414: CMG-T: Introduction to the Storage Performance Management Life Cycle - Part 1 415: Invited: Perfkit - Benchmarking the Cloud 416: Invited: Architecture and Design for Performance of a Large European Bank Payment System 417: Vendor Tools: Truesight Capacity Optimization
10:15 am-10:30 am Break
10:30 am-11:30 am 421: How to Gain Support for Your IT Performance Initiatives from Your Finance Partner 422: CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 2 423: Demystifying Mobile App and Browser Performance Testing 424: CMG-T: Enterprise Storage System Architecture Overview - Part 2 425: Invited: Performance Analysis of Big Data Analytics on Lustre and HDFS File Systems 426: How to Integrate Performance Tests in Each Sprint of an Agile Development Process 427: Vendor Tools: Truesight Capacity Optimization
11:45 am-12:45 pm Lunch
01:00 pm-02:00 pm 431: Invited: Why is this Web App Running Slowly? 432: CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 3 433: Invited: Let's Turn Real User Data into a Science 434: CMG-T: Enterprise Disk and SAN Data Collection and Measurement - Part 3 435: Performance Evaluation of an Electronic Point of Sale System for a Retail Client -CANCELED 436: Invited: Incremental Risk Charge Calculation: A Case Study of Performance Optimization on Many/Multi Core Platforms 437: Vendor Tools: Performance Tuning for DB2
02:00 pm-02:15 pm Break
02:15 pm-03:15 pm 441: Lessons from Capacity Planning a Java Enterprise Application: How to Keep Capacity Predictions on Target and Cut CPU Usage by 5x 442: PANEL: zEnterprise Performance and Capacity Management Q and A 443: Invited: Performance Considerations for Public Cloud 444: CMG-T: Windows System Performance Measurement and Analysis - Part 1 445: You Test Where? Performance Testing in DR and Prod! 446: Maximum User Concurrency and Blocking Probability for Managed and Open Access Applications 447: Vendor Tools: Teemstone OnTune:Performance Analysis and Tuning at Intersection of System and Applications - A Shared Tool
03:15 pm-03:45 pm Break
03:45 pm-04:45 pm 451: HTTP/2: Implications for Web Application Performance 452: PANEL: Mobile Performance Testing and Management 453: Invited: z/OS Central Storage Management 454: CMG-T: Windows System Performance Measurement and Analysis - Part 2 455: TBD 456: Monitoring and Remediation of Cloud Services Based on 4R Approach 457: Vendor Tools
04:45 pm-05:00 pm Break
05:00 pm-06:00 pm 461: Invited: Hadoop Super Scaling 462: Social Media and Analytics: What Performance and Capacity Engineers Need to Know 463: Invited: WSC Experiences with the z13 and SMT: What the Numbers Mean 464: CMG-T: Windows System Performance Measurement and Analysis - Part 3 465: Invited: Performance Measurement of Deduplication Applied to Block Storage 466: TBD 467: Vendor Tools: SOASTA
06:00 pm-06:15 pm Break
06:15 pm-07:15 pm   Exhibitor Presentation: HPE: High Volume Performance Testing in a Mobile World Exhibitor Presentation: Metron-Athene: athene® ES/1: Delivering More with Less BOFs / Exhibitor Presentation CMG 2016 Kick-Off Meeting    
07:30 pm-09:30 pm PARS

Thursday, November 5

07:00 am-08:00 am Breakfast
08:00 am-09:00 am 501: Plenary Session: Five Trends in Computing Leading to Multi-Cloud Applications and Their Management
09:00 am-09:15 am Break
09:15 am-10:15 am 511: Invited: The Languages of Capacity Planning: Business, Infrastructure & Facilities 512: CMG-T: Network Performance Engineering - Part 1 513: CMP (Cloud Management Platform) - Performance Workload Analysis 514: CMG-T: Capacity and Performance for Newbs and Nerds - Part 1 515: Invited: Memory Management in the TB Age 516: Establishing Better Governance for IT Service Management through ISO20K Accreditation and ITIL Capacity Management 517: Vendor Tools
10:15 am-10:30 am Break
10:30 am-11:30 am 521: Performance Monitoring vs. Capacity Management: Does it Matter? 522: CMG-T: Network Performance Engineering - Part 2 523: Spinning Your Wheels: CPU Time vs Instructions 524: CMG-T: Capacity and Performance for Newbs and Nerds - Part 2 525: Percentile-Based Approach to Forecasting Workload Growth 526: TBD 527: Vendor Tools: IntelliMagic Vision Overview with the Founder
11:45 am-12:45 pm Lunch
01:00 pm-02:00 pm 531: Invited: Performance Engineering for the Internet of Things and Other Real-Time Embedded Systems 532: PANEL: Advancing in Performance Careers 533: Identifying the Causes of High Latencies in Storage Traces Using Workload Decomposition and Feature Selection 534: CMG-T: Java - Part 1 535: Invited: Network Visibility in the Cloud 536: Invited: Beyond RMF/SMF Reporting - Using Availability Intelligence to Protect Availability at the Production Site 537: Vendor Tools: Nimble Storage InfoSight: Defining a New Storage Experience
02:00 pm-02:15 pm Break
02:15 pm-03:15 pm 541: Developing Our Intuition About Queuing Network Models 542: CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 1 543: Turning Performance Data into Actions 544: CMG-T: Java - Part 2 545: Invited: Lessons Learned from Implementing an IDAA 546: Data Correlation for Capacity Management 547: Vendor Tools: IBM Proof of Technology Labs - Part 1
03:15 pm-03:45 pm Break
03:45 pm-04:45 pm 551: Maturing the Capacity Management Process 552: CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 2 553: Four Steps to Performance Risk Mitigation 554: CMG-T: Java - Part 3 555: Invited: Determination of Web Performance Envelope 556: There's Something Happening Here, but What It is Ain't Exactly Clear - Capturing the Real User Experience 557: Vendor Tools: IBM Proof of Technology Labs - Part 2
04:45 pm-05:00 pm Break
05:00 pm-06:00 pm 561: No Session Scheduled 562: CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 3 563: TBD 564: Invited: Summary of 4-Part Series Published in CMG Journal 565: Essential Reporting for Capacity and Performance Management 566: High Performance Computing Tutorial 567: Vendor Tools
06:00 pm-06:15 pm Break
06:15 pm-07:15 pm   BOFs / Exhibitor Presentation BOFs / Exhibitor Presentation BOFs / Exhibitor Presentation BOFs / Exhibitor Presentation    
07:30 pm-10:30 pm Gala Reception

Monday, November 2

Monday, November 2, 07:00 - 08:00

Workshops: Breakfast - Workshop Attendees Only

Peacock Alley

Monday, November 2, 08:00 - 11:30

WS1 (Workshops): Workshop: How to Do Performance Analytics with R

Dr. Neil Gunther
Room: Anacacho

You've collected cubic light-years of performance monitoring data, now whaddya gonna do? Raw performance data is not the same thing as information, and the typical time-series representation is almost the worst way to glean information. Neither your brain nor that of your audience is built for that (blame it on Darwin). To extract pertinent information, you need to transform your data, and that's what the R statistical computing environment can help you do, including doing it automatically. Topics covered include:
• Introduction to R using RStudio
• Descriptive statistics
• Performance visualization
• Data reduction techniques
• Multivariate analysis
• Machine learning techniques
• Power law methods
• Forecasting with R
• Scalability analysis
Case studies are also covered in this workshop. R is an open-source application freely available from http://www.r-project.org. Attendees who wish to participate more directly in the workshop should bring their laptops already loaded with Adobe Acrobat, R and RStudio software. The specific software to pre-load is available at:
• The R framework and language (R-3.2.2): https://cran.rstudio.com
• RStudio IDE 0.99.484: https://www.rstudio.com/products/rstudio/download/
• Adobe Acrobat Reader: https://get.adobe.com/reader/
Alternately, people are welcome to just sit back and watch. Please note that the workshop room will be using "theater seating" so work tables and electrical outlets will not be available.

WS2 (Workshops): Application Profiling - Telling a Story with Your Data

Richard Gimarc
Room: Peraux

An Application Profile is a description of application behavior, performance, and resource consumption. As an example, a Profile can be developed using load test results to quantify the resource requirements and response time components of individual business functions. In a production environment, a Profile can describe the mix of transaction types that are processed by an application. Both examples illustrate the quantitative focus of an Application Profile: a precise description of the workloads processed by an application, their performance characteristics and resource usage. An Application Profile is a prerequisite for application performance analysis and capacity planning.
Application-level performance analysis is generally focused on improving the performance or capacity of an application. Creating an Application Profile should be the first step: determining what an application is doing, where it is spending its time, and quantifying its resource consumption. The Profile enables the performance analyst to focus his/her efforts.
Data center capacity planning is most effective when it is developed from a set of application-level capacity plans. The application-level capacity plans are simply another form of an Application Profile that relates workload volume to resource requirements. If the capacity planner has a complete set of Application Profiles, then the task of developing a data center capacity plan is simplified by leveraging the set of Application Profile building blocks.
This workshop will explore the concept, development, presentation and utility of an Application Profile. The following topics will be addressed:
• Terminology for decomposing an application into components suitable for profiling
• Different Profile types based on the profiling goals and available data sources
• Techniques for Application Profile development
• Methods for Profile development across the application development lifecycle: design, test and production
• Sample Application Profiles
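As a rough illustration of the quantitative focus described above, the sketch below derives a minimal Application Profile from load-test samples: per-transaction CPU cost, mean response time, and workload mix by business function. The column names and figures are hypothetical, not from the workshop material.

# Minimal sketch: deriving an Application Profile from load-test samples.
# All column names and numbers are hypothetical illustrations.
import pandas as pd

# One row per sampled interval: business function, transactions completed,
# CPU seconds consumed, and total response time observed.
samples = pd.DataFrame({
    "function":     ["login", "login", "search", "search", "checkout"],
    "transactions": [1200,    1150,    4300,     4450,     310],
    "cpu_seconds":  [14.4,    13.2,    95.6,     97.1,     11.5],
    "resp_seconds": [310.0,   298.0,   2150.0,   2230.0,   186.0],
})

profile = samples.groupby("function").sum()
# Per-transaction service demand and mean response time per business function.
profile["cpu_per_txn_ms"] = profile["cpu_seconds"] / profile["transactions"] * 1000
profile["resp_per_txn_s"] = profile["resp_seconds"] / profile["transactions"]
# Workload mix: each function's share of total traffic.
profile["mix_pct"] = profile["transactions"] / profile["transactions"].sum() * 100

print(profile[["cpu_per_txn_ms", "resp_per_txn_s", "mix_pct"]].round(2))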

WS3 (Workshops): VMware vSphere Capacity and Performance Essentials

Charles Johnson, Jamie Baker
Room: Draper

The workshop provides a focus on Capacity Management and VMware performance, so will be of interest to:
• Capacity management personnel
• VMware administration staff
• Support personnel
Contents:
• Introduction to virtualization with VMware
  o Overview of the workshop
  o Virtualization concepts
  o Introduction to vSphere
• VMware Technology and Terminology
  o Overview of VMware architecture
  o VMware functionality and concepts
  o Hosts, guests, clusters, resource pools and data centers
  o DRS and vMotion
  o Networking
  o Disks and datastores
  o Memory and CPU management
• Capacity Planning for VMware
  o Capacity planning at the right VMware level
  o Reporting options
  o Capacity modelling and queuing
  o Case study
• VMware monitoring and metrics
  o Key CPU, memory, disk, network and storage metrics
  o Performance assurance recommendations
  o Monitoring the service
  o Key Performance Indicators (KPIs)
• Optimization and Tuning for VMware
  o Optimization and tuning cycle
  o Workload consolidation
  o Usage normalization
  o Resource tuning
  o Filtering
• Wrap up

WS4 (Workshops): IT-Based Operational Risk Modeling Workshop

Dennis Wenk
Room: Cavalier

Many organizations, along with stakeholders and regulatory authorities, are focusing on operational risk. One key area of focus is information technology and data protection, because the likelihood that an organization will experience a catastrophic loss from a service interruption due to an IT problem is far greater than that of a service interruption caused by a disaster or 'black swan' event. In fact, it has been estimated that IT failures are currently costing businesses $6.18 trillion per year worldwide. While the losses may be well known and obvious, it is not obvious what the 'real' risks are or where the 'real' exposure is located. Nor is it obvious which solution provides the best answer. Controlling these IT-related losses can provide an incredible payback opportunity for any organization. While the magnitude of a service interruption is quite large, the all-important question is: how much should an organization invest to mitigate the risk of a service interruption? There is a wide variety of solutions and alternatives (such as backups, fail-over clusters, remote data replication, virtualization, storage arrays, and converged networks) that prevent or mitigate service interruptions, and the solutions can often be just as confusing as identifying the 'real' risks. In addition, these high-availability alternatives are often extremely costly. Given the complexities and uncertainty of today's IT infrastructure, an organization could quickly exhaust all of its capital attempting to implement every possible high-availability solution. The only rational reason to spend money on reducing operational risk is the expectation that solution benefits outweigh their costs. An organization may not know the answers to all of its IT operational-risk questions or how to address the risk challenges. However, with a sound, quantitative operational risk model and framework, an organization can ask the right questions, identify the 'serious' risks, choose the cost-effective solutions, and make better, more informed decisions concerning investments that reduce IT operational risk. This model and framework must answer two fundamental questions: "Which risks are the serious ones?" (because it is impossible to mitigate all of them) and "What are the optimal risk-reduction actions?" (because there are limited resources).
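As a minimal sketch of the kind of quantitative framing the workshop argues for (not the presenter's actual model), the example below ranks hypothetical risks by expected annual loss and funds a mitigation only when its expected loss reduction exceeds its annualized cost. All names and figures are invented for illustration.

# Minimal sketch: rank risks by expected annual loss (ALE = frequency x impact)
# and compare each mitigation's expected benefit to its cost. Hypothetical data.

risks = [
    # (name, annual frequency of occurrence, loss per event in $)
    ("storage array failure",     0.20, 2_000_000),
    ("regional power outage",     0.05, 8_000_000),
    ("application defect outage", 1.50,   150_000),
]

mitigations = {
    # name -> (annualized cost in $, fraction of expected loss removed)
    "storage array failure":     (120_000, 0.90),
    "regional power outage":     (700_000, 0.80),
    "application defect outage": ( 90_000, 0.50),
}

# Sort by expected annual loss so the 'serious' risks surface first.
for name, freq, impact in sorted(risks, key=lambda r: -r[1] * r[2]):
    ale = freq * impact                  # expected annual loss
    cost, reduction = mitigations[name]
    benefit = ale * reduction            # expected loss avoided per year
    verdict = "fund" if benefit > cost else "defer"
    print(f"{name:26s} ALE=${ale:>10,.0f} benefit=${benefit:>10,.0f} "
          f"cost=${cost:>9,.0f} -> {verdict}")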

WS5 (Workshops): Applying Analytics to Data Center Performance

Kingsum Chow, Li Chen, Pooja Jain
Room: Naylor

CMG attendees can take advantage of this tutorial to learn how two adjacent fields, namely design of experiments and analytics, can help them dissect performance engineering problems. While neither field involves very complicated techniques, the two are rarely put together. This tutorial weaves these techniques together and provides a new way to approach performance engineering problems. The hands-on experience will enable attendees to readily apply what they learn when they get back to work.

Monday, November 2, 11:45 - 12:45

Workshops: Lunch - Workshop Attendees Only

Peacock Alley

Monday, November 2, 13:15 - 14:00

CONF: CMG 2015 Opening Session

Anacacho Room

Monday, November 2, 14:00 - 15:00

271 (Featured Speaker): Plenary Session: Integrating Software and Systems Performance Engineering Processes into Software Development Processes

Anacacho Room
Integrating Software and Systems Performance Engineering Processes into Software Development Processes
Andre Bondi (Software Performance and Scalability Consulting LLC, USA)
Featured Speaker
Performance is a significant factor in the success of any software product or system. Therefore, it also poses a significant risk to the product's success and its ability to meet functional needs. A great deal of the effort in performance evaluation and capacity planning occurs after a system has been placed in production. By that time, it is often too late to remedy disabling performance problems. Early attention to performance concerns and early planning of performance requirements and performance testing are needed to prevent debacles like the early rollout of healthcare.gov. In this talk, we shall discuss how performance engineering may be integrated into all phases of the software lifecycle, from the conception of a system to requirements specification, architecture, testing, and finally to deployment. By specifying performance requirements and linking them to functional requirements before the architecture of a system is planned, we establish a performance baseline that an architecture should meet, one justified by the nature of the services the system is supposed to provide and the capacity it is supposed to serve. By reviewing the architecture of a system before design and implementation take place, we reduce the risk of designing and developing a system that contains inherent performance vices. At this stage performance modeling can be used to justify architectural and scheduling decisions, such as the use of scheduling rules. The outputs of performance tests planned with reference to performance models enable us to identify concurrent programming issues and other issues that would not be apparent in unit testing. The use of performance models also enables us to plan performance tests with reference to the performance requirements. These performance engineering methods have been used in waterfall and agile processes. In the case of a service-oriented architecture, timely performance tests prevented the delivery of a service with poor performance characteristics and ensured its replacement was healthy from a performance standpoint. They have also been used to ensure that efforts to improve system performance were carefully targeted and based on informed analysis of measurement data and a clear, unambiguous specification of the performance requirements.
Presenter bio: Andre B. Bondi is the founder of Software Performance and Scalability Consulting LLC. During the fall of 2016, he was a visiting professor at the University of L'Aquila. He is a recipient of CMG's A. A. Michelson Award. Until 2015, he was a Senior Staff Engineer at Siemens Corp., Corporate Technologies in Princeton. His book, Foundations of Software and Systems Performance Engineering, was published by Addison-Wesley in August 2014. He has worked on performance issues in many domains, including telecommunications and train control. Prior to joining Siemens, he held senior performance positions at two startup companies. He spent more than ten years working performance and operational issues at AT&T Labs and Bell Labs. He taught courses in performance, simulation, operating systems, and architecture at UCSB for three years. He holds a Ph.D. in computer science from Purdue University, and an M.Sc. in statistics from University College London. He holds nine US patents.
Andre Bondi

Monday, November 2, 15:00 - 15:15

CONF: Break

Monday, November 2, 15:15 - 16:15

281 (Featured Speaker): I Feel the Need for Speed

Room: Anacacho
I Feel the Need for Speed
H. Pat Artis (Performance Associates & Virginia Tech, USA)
Featured Speaker
Five decades after it was first postulated in an article in the April 1965 issue of Electronics Magazine, Moore's Law may no longer be reliable for predicting the characteristics of future systems. Whether it is the notion of processor price performance doubling every eighteen months or a doubling of circuit density every twenty-four months, fundamental limits in the economical manufacturing of integrated circuits with ever smaller feature sizes may herald a sunset for Moore's vision. As a result, applications must be designed to exploit parallelism rather than being serialized processes that can only be rescued by improved CPU speed. This session will explore questions that may redefine the practice of performance management over future decades.
Presenter bio: Dr. H. Pat Artis graduated with a degree in Engineering Science and Mechanics from Virginia Tech in 1971. He also holds a Masters Degree in Computer Science, a Doctorate in Informatics, and has also attended the National Test Pilot School. Dr. Artis' career has included a decade at Bell Labs, start-up through initial public offering at a startup company in northern Virginia during the 1980s, and 30 years as the head of his own company. He now serves as a Professor of Practice in the Biomedical Engineering and Mechanics Department at Virginia Tech. In addition to his other honors, he received the A.A. Michelson Award for his fundamental contributions to computer metrics, is a Distinguished Graduate of the Virginia Tech College of Engineering, and is a member of the Academy of Engineering Excellence.

282 (ITSM): Capacity Management Maturity: Assessing and Improving the Effectiveness

Room: Peraux
Capacity Management Maturity: Assessing and Improving the Effectiveness
Rich Fronheiser (Metron-Athene, Inc., USA)
ITSM
Many organizations have a Capacity Management process or function in place, but no practical way to assess the effectiveness or even the strengths and weaknesses of the process or function. This led to the development and refinement of a Capacity Management Maturity Assessment, consisting of 20 carefully chosen questions that help an organization assess maturity and effectiveness. Once completed, the results will allow the Capacity Manager to better communicate the importance of Capacity Management and also create a plan to fill identified gaps going forward. Applying this assessment to multiple organizations allows comparisons to be made — between organizations and between an organization and others sharing characteristics such as type of business, geographical location, organizational size, among others. This presentation will discuss the development of the Capacity Management Maturity Assessment, will walk attendees through the assessment, and will also present some findings gathered over the last year. At the completion of the presentation, attendees will be better equipped to evaluate the maturity of their organizations' Capacity Management process or function.
Presenter bio: Rich has been working in Capacity Management for 20 years, the last 13 with Metron, holding ITIL v2 Manager and v3 Expert certification. He's worked in a variety of presales and postsales consulting roles within Metron, turning his attention in recent years to product and strategic marketing. Rich earned a BS in Mathematics from Juniata College (PA) and an MBA from the University of Wisconsin-Whitewater and is responsible for Metron's global marketing efforts.
Rich Fronheiser

283 (APM): Historical Value-at-Risk Estimation: Performance Optimization on Multicore CPUs and GPUs

Room: Draper
Historical Value-at-Risk Estimation: Performance Optimization on Multicore CPUs and GPUs
Amit Kalele (Tata Consultancy Services, India); Manoj Nambiar (Tata Consultancy Services & Institute of Electrical and Electronics Engineers, India)
APM
Risk management is a classical problem in finance. Value at Risk (VaR) is used as an important measure to quantify market risk. A common approach for estimating VaR is referred to as Historical Value-at-Risk (HVaR). The HVaR algorithm is simple in nature; however, a large number of instruments or assets and their frequent revaluations make it a significant computational task. In this paper, we show that with the advent of multicore CPUs and GPUs and with parallel computing, manyfold speedups can be achieved in HVaR estimation for large portfolios. HVaR computations are repeated many times in tasks like back testing, deal synthesis and batch jobs, which run overnight or for days, so a significant reduction in turnaround time can be achieved. These state-of-the-art platforms not only enable fast computations but also reduce the computational cost in terms of energy requirements. We present our approach to optimization and parallelization for HVaR estimation and report a significant reduction in overall application time.
Presenter bio: Manoj Nambiar is currently working with TCS as a Principal Scientist, heading the Performance Engineering Research Center (PERC). He also leads the Parallelization and Optimization Centre of excellence as a part of the company's HPC Initiative. Until 2011, Manoj has been working as a research lead in High Performance Messaging, Networking and Operating Systems in PERC. Prior to this has executed several consulting assignments in the performance engineering area specializing in network and systems performance. Manoj has a B.E from the University of Bombay, and a post graduate diploma in VLSI design from C-DAC, India.
Manoj Nambiar
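For readers unfamiliar with the technique, a minimal historical-VaR computation looks like the sketch below. The returns here are random stand-ins for real market history, and the scenario revaluation shown in one line is the step that dominates the cost for large portfolios (and is what work like this parallelizes across cores and GPUs).

# Minimal sketch of historical VaR with synthetic data (not the paper's code).
import numpy as np

rng = np.random.default_rng(42)
n_days, n_assets = 500, 8
returns = rng.normal(0.0, 0.01, size=(n_days, n_assets))  # historical daily returns
positions = rng.uniform(1e5, 1e6, size=n_assets)          # current $ exposure per asset

# Revalue the portfolio under every historical scenario: the computational
# hot spot when portfolios hold many instruments requiring full revaluation.
pnl = returns @ positions

# 99% one-day HVaR: the loss exceeded on only 1% of historical scenarios.
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day historical VaR: ${var_99:,.0f}")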

284 (CMG-T): CMG-T: Analytics for Performance Analysis & Capacity Planning - Part 1

Room: Cavalier
CMG-T: Analytics for Performance Analysis & Capacity Planning - Part 1
Ray Wicks (Retired, USA)
CMG-T
This is a two-part session which reviews some of the statistical techniques that can be useful in performance analysis and capacity planning. Part 1 reviews some of the analytic (statistical) concepts, and their psychology, used in all of statistics. The process of seeing and describing reality in terms of numbers and graphs is foremost; this analysis is essential to grasping the statistical concepts that follow. Emphasis in this part will provide the underpinning of more complex statistical ideas: average, distribution, standard deviation, coefficient of variation.
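A minimal illustration of the Part 1 building blocks, computed over hypothetical CPU-busy samples:

# Average, standard deviation, and coefficient of variation for
# hypothetical CPU-utilization samples (percent busy per interval).
import statistics

cpu_busy_pct = [62, 58, 71, 65, 90, 55, 60, 68, 84, 59]

mean = statistics.mean(cpu_busy_pct)
stdev = statistics.stdev(cpu_busy_pct)   # sample standard deviation
cv = stdev / mean                        # coefficient of variation (dimensionless)

print(f"mean={mean:.1f}%  stdev={stdev:.1f}  CV={cv:.2f}")
# A low CV suggests a steady workload; a high CV flags bursty behavior
# that the average alone would hide.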

285 (ARCAP): A Performance Model for SWIFT Store-and-Forward Platform

Room: Naylor
A Performance Model for SWIFT Store-and-Forward Platform
Nicolas Schul (SWIFT scrl, Belgium)
ARCAP
At SWIFT, the store-and-forward platform allows customers to store or retrieve financial information exchanged between them in an asynchronous mode. In order to manage those systems' capacity over time, we have built a performance model for the different active machines, linking the measured CPU activity to the volumes of multiple sources of traffic. A novel and original methodology has been set up, looking at the impact of application tasks across various sub-processes and using a robust regression to discriminate periods of non-MMF activity. The various steps resulting in this complex model will be detailed, including the choice of the time-range sample, the modelling of the main message flow, the search for outliers, as well as some derived methods to identify message properties with a significant impact.
Presenter bio: Nicolas Schul studied Physics at the Université Catholique de Louvain, Belgium. He got his PhD in 2011 following a thesis on the analysis of exclusive events collected by the CMS experiment at the CERN Large Hadron Collider in Geneva. He is now working as a capacity planner for SWIFT, a secure global carrier of financial messages. He is in charge of building system performance models from actuals and of forecasting the system load over time by statistical means.
Nicolas Schul
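As a hedged sketch of the general idea (not SWIFT's actual methodology), the example below fits CPU utilization to two synthetic traffic volumes with a robust (Huber) regression, so intervals dominated by unrelated activity behave as outliers rather than biasing the per-message cost estimates. It assumes scikit-learn is available; all data are synthetic.

# Robust regression of CPU% on traffic volumes, with a few contaminated
# intervals standing in for periods of non-message activity.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(7)
n = 200
store_msgs = rng.uniform(1e4, 5e4, n)     # store traffic per interval
retrieve_msgs = rng.uniform(5e3, 2e4, n)  # retrieval traffic per interval

# Hidden "true" cost model: base load + per-message CPU costs, plus noise.
cpu_pct = 5 + 8e-4 * store_msgs + 4e-4 * retrieve_msgs + rng.normal(0, 1, n)
cpu_pct[:10] += 25                        # a few intervals of unrelated batch work

X = np.column_stack([store_msgs, retrieve_msgs])
model = HuberRegressor().fit(X, cpu_pct)
print(f"CPU% per 1,000 store msgs:    {model.coef_[0] * 1000:.2f}")
print(f"CPU% per 1,000 retrieve msgs: {model.coef_[1] * 1000:.2f}")
print(f"base CPU%:                    {model.intercept_:.2f}")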

286 (NETCAP): TBD

Room: Jones

Monday, November 2, 16:15 - 16:30

CONF: Break

Monday, November 2, 16:30 - 17:30

291 (zOS): Invited: Understanding the Performance and Management Implications of FICON Dynamic Routing

Room: Anacacho
Invited: Understanding the Performance and Management Implications of FICON Dynamic Routing
Stephen Guendert (Computer Measurement Group & IEEE, USA)
zOS
As part of the z13 announcement in January 2015, IBM announced support for a new FICON routing technique for FICON interswitch links (ISLs) called FICON Dynamic Routing (FIDR). This new technique became generally available in late September 2015. This paper will explain how the predecessor technology (static routing) works and compare it with the new FICON Dynamic Routing mechanism. The paper will conclude with a discussion of the possible use cases where implementing FICON Dynamic Routing will be of the most benefit to z Systems end users.
Presenter bio: Dr. Steve Guendert is z Systems Technology CTO for Brocade Communications, where he leads the mainframe-related business efforts. He was inducted into the IBM Mainframe Hall of Fame in 2017 for his contributions in the area of I/O technology. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), and a member of the Computer Measurement Group (CMG). He is a former member of both the SHARE and CMG boards of directors. Steve has authored over 50 papers on the subject of mainframe I/O and storage networking, as well as two books.
Stephen Guendert

292 (SERV): Managing the Datacenter as the Computer

Room: Peraux
Managing the Datacenter as the Computer
Jay Vincent (Intel Corporation, USA)
SERV
With the increasing focus on creating cloud-oriented services and enabling a dynamic environment, an increasing need for Datacenter Infrastructure Management (DCIM) has become apparent. One of the goals of DCIM is to create a framework and interfaces for comprehensive management of the resources within the datacenter. Accomplishing this requires a robust set of telemetry in order to visualize resource consumption that is continually changing within a cloud environment. This paper reviews the core telemetry of the compute platform that is required to assess the utilization of the compute environment and balance it with the power and cooling resources within the infrastructure. A methodology for applying it, along with the associated proof from a practical implementation of the method, will be presented.
The continued focus on implementing cloud services for both public and private datacenter operators is driving demand for a dynamic data center. The need to have datacenter infrastructure provide a supply of resources that can dynamically change allocation based on the dynamic nature of compute demand in the cloud is essential to keep costs down. The DCIM efforts in the industry are an attempt to address this problem by creating tools and utilities to visualize and respond to compute demand for resources in the infrastructure. The one common thread needed to enable a reliable DCIM solution is a common, low-cost data source that can show how compute systems are consuming resources. In order to have a successful DCIM solution and obtain cost-effective use of resources while maintaining a high level of service, an integration of compute platform telemetry with infrastructure management tools is required. The gap between Facilities and IT must be bridged in order to reach the goal of a cost-effective and dynamic datacenter.
A number of datacenter resource optimization techniques have been proposed that balance the demand for resources by the compute systems with the supply of the resources through the facility. The resources in play are Power, Space, Cooling and Compute. Compute is a category that encompasses Processors, Memory, Storage and Network. One approach to reducing cost is the optimization of cooling resources in the data center by increasing the ambient operating temperature. Another widely adopted method of optimization and cost reduction is the implementation of free-air cooling. [4] Datacenter optimization can be applied in a progressive method of implementing various techniques. These strategies range from simple best practices, such as removing old equipment that is inefficient due to technology advances, to advanced techniques like dynamic control of cooling and minimizing compute energy. In this paper, a method for collecting datacenter resource consumption data from the compute node and applying a policy-based control process to rebalance load will be applied in order to optimize resources with the intent of reducing operating costs.
The core function of the data center is to process information through the use of CPUs, memory and I/O devices. The core component to accomplish the task is the compute platform with its associated periphery of storage and network devices. These components are considered the compute devices and represent the demand for resources. The supply of resources is delivered through the facility infrastructure: Space, Power and Cooling. The space consists of the floor and equipment racks that hold the compute devices. The power consists of the utility feed and power distribution components that deliver power to the compute devices. The cooling consists of the transport of a media, whether liquid or air, that extracts the heat generated by the compute devices out of the datacenter.
In order to effectively optimize datacenter resources, a complete understanding of how the compute devices consume those resources is required. This requires extracting power, thermal and compute utilization data at the compute node. The data required to assess resource impacts on the datacenter are:
1. Computer power consumption
2. Computer inlet temperature
3. Computer outlet temperature
4. Computer air volume
5. Computer CPU, memory and I/O utilization
All of these data points can be extracted from enabled computers using hardware sensors instrumented on Intel® Xeon® Processor E5-2600 v3 platforms by using a standard interface called the Intelligent Platform Management Interface (IPMI). A simplified interface is also available from Intel called the Node Manager Programmers Resource Kit (NMPRK). The Intel® Xeon® Processor E5-2600 v3 platform is equipped with core datacenter resource sensors. These sensors provide the critical data required to measure the impact a standard rack-mount server is having on data center resources. Combining, aggregating and analyzing these data sets allows the datacenter operator to clearly understand the impact the server base is having on the datacenter, and the IT administrator to better understand the costs associated with the server systems. Ultimately, with the application of software-defined network, storage and infrastructure, the facility and the servers must be managed as a whole. This paper will cover the essential data telemetry and the methods of aggregation needed to create a holistic performance, availability and reliability analysis and response system.
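The aggregation step the abstract describes can be sketched generically. The example below (not the paper's method, and not the IPMI/NMPRK API) rolls hypothetical per-node telemetry up to rack level and flags racks whose power draw is high while compute utilization is low, i.e. rebalance candidates for a policy-based control process.

# Minimal sketch: aggregate per-node telemetry to rack level and flag
# rebalance candidates. All samples and thresholds are hypothetical.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class NodeSample:
    rack: str
    power_w: float   # platform power consumption
    inlet_c: float   # inlet air temperature
    cpu_util: float  # CPU utilization, 0..1

samples = [
    NodeSample("R1", 310, 22.5, 0.85), NodeSample("R1", 295, 23.0, 0.78),
    NodeSample("R2", 340, 24.0, 0.15), NodeSample("R2", 355, 24.5, 0.10),
]

by_rack = defaultdict(list)
for s in samples:
    by_rack[s.rack].append(s)

for rack, nodes in sorted(by_rack.items()):
    power = sum(n.power_w for n in nodes)
    util = sum(n.cpu_util for n in nodes) / len(nodes)
    # High supply consumed, little demand served: a rebalance candidate.
    flag = "  <- rebalance candidate" if power > 600 and util < 0.3 else ""
    print(f"{rack}: {power:.0f} W, mean CPU {util:.0%}{flag}")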

293 (C&M): Capacity Planning Model and Simulation (CAPSIM) for the Cloud

Room: Draper
Capacity Planning Model and Simulation (CAPSIM) for the Cloud
Uriel Carrasquilla (Akamai Technologies, USA)
C&M
The success of Content Delivery Networks (CDNs) and Cloud providers depends on their ability to deploy servers, storage devices and applications near their end-users. The mapping between end-users and servers is based on quality of service estimates. The benefit for the CDN customer is reduced bandwidth and processing load on their origin site infrastructure, plus elastic computational power. The capacity planner's goal is to size server clusters to meet traffic demands while minimizing both cost and network latencies. Modeling this sizing problem in a large environment is difficult due to the computational complexity burden. This research investigates the application of a heuristic using a genetic algorithm (GA) to solve the sizing problem for large deployments after taking into account predicted future end-user demands, network latencies, server failures and denial of service attacks. The GA searches the space of possible solutions with a linear computational complexity to find a near-optimal solution to the problem. The employed GA technique is inspired by concepts in evolutionary theory such as inheritance, mutation, selection, and crossover. The contribution of this research is a methodology for developing and implementing the CAPSIM prototype for solving medium- to large-scale deployment problems for CDN or cloud service providers.
Presenter bio: CMG member since 1987 and speaker at many CMG meetings since 1991. Currently holds a PhD in Computer Information System from Nova Southeastern and an MBA from McGill University. Extensive capacity planning experience that includes the Vancouver Stock Exchange in Canada, NCCI in Boca Raton and since November 2012 at Akamai Technologies in Boston.
Uriel Carrasquilla
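To make the GA mechanics named in the abstract (selection, crossover, mutation, inheritance) concrete, here is a toy sketch applied to a cluster-sizing fitness function. The demand figures, capacities, and penalty weights are invented; a real CAPSIM model would also encode latencies, failures, and attack scenarios.

# Toy genetic algorithm: evolve per-region server counts that cover
# forecast demand at minimum cost. All parameters are hypothetical.
import random

random.seed(1)
DEMAND = [120, 80, 200, 50]  # forecast load per region
CAPACITY = 30                # load one server can carry
SERVER_COST = 1.0

def fitness(sizing):
    # Penalize unmet demand heavily; otherwise prefer fewer servers.
    penalty = sum(max(0, d - s * CAPACITY) * 100 for d, s in zip(DEMAND, sizing))
    return -(penalty + SERVER_COST * sum(sizing))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(s):
    s = list(s)
    i = random.randrange(len(s))
    s[i] = max(0, s[i] + random.choice([-1, 1]))
    return s

pop = [[random.randint(0, 10) for _ in DEMAND] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                               # selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(30)]                    # inheritance + mutation
    pop = survivors + children

best = max(pop, key=fitness)
print("servers per region:", best, " fitness:", fitness(best))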

294 (CMG-T): CMG-T: Analytics for Performance Analysis & Capacity Planning - Part 2

Room: Cavalier
CMG-T: Analytics for Performance Analysis & Capacity Planning - Part 2
Ray Wicks (Retired, USA)
CMG-T
This is a two-part session which reviews some of the statistical techniques that can be useful in performance analysis and capacity planning. In Part 2, the basic techniques will be expanded to cover the comparison of measurement results (t-test) and techniques particularly useful in performance analysis and capacity planning, namely regression analysis and time series analysis (a.k.a. trending).
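A minimal illustration of the Part 2 techniques on hypothetical data: a t-test comparing before/after response-time samples, and a regression-based trend used for forecasting. It assumes SciPy is available.

# t-test and linear trending on hypothetical measurements.
from scipy import stats

# Did a tuning change move response time? Compare before/after samples.
before = [1.92, 2.05, 1.88, 2.10, 1.97, 2.01]
after  = [1.71, 1.80, 1.75, 1.69, 1.82, 1.77]
t, p = stats.ttest_ind(before, after)
print(f"t={t:.2f}, p={p:.4f}  (small p: the difference is unlikely to be noise)")

# Trending: fit monthly CPU utilization and extrapolate two months ahead.
months = [1, 2, 3, 4, 5, 6]
cpu    = [52, 55, 59, 61, 66, 70]
fit = stats.linregress(months, cpu)
print(f"growth {fit.slope:.1f}%/month; month 8 forecast: "
      f"{fit.intercept + fit.slope * 8:.0f}%")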

295 (APM): Multiple Dimensions of Load Testing

Room: Naylor
Multiple Dimensions of Load Testing
Alexander Podelko (Oracle, USA)
APM
Load testing is an important part of the performance engineering process. It remains the main way to ensure appropriate performance and reliability in production. It is important to see a bigger picture beyond stereotypical, last-moment load testing. There are multiple dimensions of load testing: environment, load generation, testing approach, life-cycle integration, feedback and analysis. This paper discusses these dimensions and how load testing tools support them.
Presenter bio: Alex Podelko has specialized in performance since 1997, working as a performance engineer and architect for several companies. Currently he is a Consulting Member of the Technical Staff at Oracle, responsible for performance testing and optimization of Enterprise Performance Management and Business Intelligence (a.k.a. Hyperion) products. Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. His collection of performance-related links and documents (including his recent papers and presentations) can be found at www.alexanderpodelko.com. He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko. Alex currently serves as a director for the Computer Measurement Group (CMG, http://cmg.org), an organization of performance and capacity planning professionals.
Alexander Podelko

296 (zOS): TBD

Room: Jones

Monday, November 2, 17:30 - 17:45

CONF: Break

Monday, November 2, 17:45 - 18:45

CONF: CMG Annual Business Meeting

Anacacho Room

Monday, November 2, 19:00 - 19:30

CONF: Session Chair & Monitor Training

Room: Peraux

CONF: First Timers Meeting/Orientation

Room: Draper

Monday, November 2, 19:30 - 22:00

CONF: Welcome Reception

Starlight Lounge {10th Floor}

Tuesday, November 3

Tuesday, November 3, 07:00 - 08:00

CONF: Breakfast

Tuesday, November 3, 08:00 - 09:00

301 (Featured Speaker): Plenary Session: The Business Justification for Application Performance Management

Anacacho Room
The Business Justification for Application Performance Management
Jonah Kowall (AppDynamics, USA)
Featured Speaker
Everyone is talking about and investing in APM, but how does it fit into an enterprise monitoring strategy, and how does APM deliver value to the business? APM can be a costly and often difficult journey. Creating a proper business case, one which encompasses the other elements of monitoring and root-cause isolation, is critical when looking at funding and creating APM capabilities.
Presenter bio: Jonah Kowall is the Vice President of Market Development and Insights at AppDynamics. Jonah has a diverse background including 15 years as an IT practitioner at several startups and larger enterprises with a focus on infrastructure and operations, security, and performance engineering. His experience includes running tactical and strategic operational initiatives and monitoring of infrastructure and application components. Jonah previously worked at Gartner as a research Vice President, specializing in availability and performance monitoring and IT operations management. His research focused on IT leaders and CIOs and he has spoken at many conferences on these topics. Jonah led Gartner's influential application performance monitoring and network performance monitoring and diagnostics magic quadrants.
Jonah Kowall

Tuesday, November 3, 09:00 - 09:15

CONF: Break

Tuesday, November 3, 09:15 - 10:15

311 (APM): Invited: Containers and Microservices Create New Performance Challenges

Room: Anacacho
Invited: Containers and Microservices Create New Performance Challenges
Jonah Kowall (AppDynamics, USA)
APM
Changes in software have been extensive over the last half decade, beginning with agile, which has led to rampant use of microservices and is beginning to change the data center with containers. Providing assurance and managing these operationally is a major challenge; the technologies are still evolving. Overgrown applications have given way to modular applications, driven by the need to break larger problems into smaller problems. Similarly, large monolithic development processes have been forced to be broken into smaller agile development cycles. Looking at trends in software development, microservices architectures meet the same demands. Additional benefits of microservices architectures are compartmentalization and a limited impact of service failure versus a complete software malfunction. The problem is that there are a lot of moving parts in these designs, which makes assuring performance complex, especially if the services are geographically distributed or provided by multiple third parties. Similarly, the use of containers disrupts the traditional operating system instances used in physical or virtual servers. The requirements to manage containers are still evolving, along with the added layers of abstraction containers bring to environments. Most open source monitoring tools do not handle end-to-end transactional monitoring, but focus on component microservice and container instances. These tools are evolving to handle distributed environments, but still lag commercial solutions. We will outline what needs to be built in terms of data extraction, analytics, and other open source technologies. Finally, we'll also discuss commercial alternatives and what features and functions are critical when monitoring microservices-based applications. Attendees of this session will walk away with a clear understanding of: what is changing with software, and why; what challenges are faced with these changes; and how to overcome these challenges.
Presenter bio: Jonah Kowall is the Vice President of Market Development and Insights at AppDynamics. Jonah has a diverse background including 15 years as an IT practitioner at several startups and larger enterprises with a focus on infrastructure and operations, security, and performance engineering. His experience includes running tactical and strategic operational initiatives and monitoring of infrastructure and application components. Jonah previously worked at Gartner as a research Vice President, specializing in availability and performance monitoring and IT operations management. His research focused on IT leaders and CIOs and he has spoken at many conferences on these topics. Jonah led Gartner's influential application performance monitoring and network performance monitoring and diagnostics magic quadrants.
Jonah Kowall

312 (CMG-T): CMG-T: Modeling and Forecasting - Part 1

Room: Peraux
CMG-T: Modeling and Forecasting - Part 1
Michael Salsburg (Independent Consultant, USA)
CMG-T
Although most computing environments are heterogeneous, computer system modeling is, in most ways, platform neutral. The same techniques and tools can be used to model zSeries, Unix / Linux, and Windows. At the heart of these models is the essential queueing network. This course provides the details of the essential queueing network, including the necessary statistics that need to be collected from the system, as well as various modeling techniques that yield insights that cannot be gleaned from observing the actual computer system. Once the model is validated, it can be used to explore "what-if" scenarios where either the workload or the underlying configuration can be changed in the model so that the resulting service levels can be observed. If time permits, an additional section on the subject of time series estimation and forecasting will be presented. This course will not teach you everything you need, but it will give you a full survey of the various approaches with a full bibliography for future reference. This is the first of three sessions. Computer performance modeling is mainly focused on understanding how business activity and the infrastructure can be analyzed to understand the impact on IT services. The key to this activity is to understand how requests for service queue for usage of resources. This first session provides basic definitions, the history of modeling queueing systems and some basic analytical queueing models.
Presenter bio: Dr. Salsburg is an independent consultant. Previously, Dr. Salsburg was a Distinguished Engineer and Chief Architect for Unisys Technology Products. He was founder and president of Performance & Modeling, Inc. Dr. Salsburg has been awarded three international patents in the area of infrastructure performance modeling algorithms and software. In addition, he has published over 70 papers and has lectured world-wide on the topics of Real-Time Infrastructure, Cloud Computing and Infrastructure Optimization. In 2010, the Computer Measurement Group awarded Dr. Salsburg the A. A. Michelson Award.
Michael Salsburg
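The essential queueing behavior such a course builds on can be shown with the classic open single-server (M/M/1) result, R = S / (1 - U), where R is response time, S service time, and U utilization. The numbers below are illustrative.

# Response time vs. utilization for an M/M/1 queue: R = S / (1 - U).
service_time_s = 0.020  # 20 ms per request

for util in (0.50, 0.80, 0.90, 0.95, 0.99):
    r = service_time_s / (1.0 - util)  # service time plus queueing delay
    print(f"U={util:.0%}: R={r * 1000:6.1f} ms  "
          f"({r / service_time_s:5.1f}x service time)")
# The sharp nonlinearity near saturation is exactly what a validated model
# exposes before a "what-if" workload change is tried on the real system.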

313 (ARCAP): Capacity Planning with an Eye on Business Risk

Room: Draper
Capacity Planning with an Eye on Business Risk
Todd Evans (IBM, USA)
ARCAP
Using business metrics is an established practice, but do you understand the risks associated with the metrics being used and the resulting forecast? This presentation will cover some techniques for evaluating and answering these and other questions: What is the risk of being wrong in the forecast/prediction? Do you have adequate aces in the hole to play if needed? How much detail is needed to support the decisions being made at the business level? How does your company's hardware and software acquisition strategy affect the way you forecast?

314 (CMG-T): CMG-T: z/OS Storage Performance: Tutorial Part 1 - The Basics

Room: Cavalier
CMG-T: z/OS Storage Performance: Tutorial Part 1 - The Basics
Gilbert Houtekamer (IntelliMagic, The Netherlands)
CMG-T
This tutorial will provide an introduction to the z/OS storage architecture from the zSeries, z/OS and Storage System perspectives. The tutorial will cover the measurements that are available from RMF, including recent new additions like I/O interrupt delays and Channel Processing time. We will also discuss the interpretation of the RMF measurements, so that you can assess whether values are good or bad. Finally, we will show you what other tools are available when the RMF averages cannot explain your performance issues; that is a not-so-basic topic.
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer

315 (C&M): Invited: Tackling Big Data

Room: Naylor
Invited: Tackling Big Data
Rohith Bakkannagari (The MathWorks, USA)
C&M
Big data represents an opportunity for analysts and data scientists to gain greater insight and to make more informed decisions, but it also presents a number of challenges. Big data sets may not fit into available memory, may take too long to process, or may stream too quickly to store. Standard algorithms are usually not designed to process big data sets in reasonable amounts of time or memory. Big data sources include streaming data from instrumentation sensors, satellite and medical imagery, video from security cameras, as well as data derived from financial markets and retail operations. Big data sets from these sources can contain gigabytes or terabytes of data, and may grow on the order of megabytes or gigabytes per day. There is no single approach to big data. Therefore, MATLAB provides a number of tools to tackle these challenges, including:
1. Datastore: Use the datastore function to access data that doesn't fit into memory. This includes data from files, collections of files or, in conjunction with Database Toolbox, database tables. The datastore function allows you to define the data you want to import from your files or database tables, define the format to apply to your imported data, and manage the incremental import of your data, providing a means to iterate over big data sets using only a while loop.
2. Parallel Computing: Parallel Computing Toolbox provides a parallel for-loop that runs your MATLAB code and algorithms in parallel on multicore computers. If you use MATLAB Distributed Computing Server, you can execute in parallel on clusters of machines that can scale up to thousands of computers.
3. MapReduce: Use the MapReduce functionality built into MATLAB to analyze data that does not fit into memory. This is a powerful and established programming technique that can be used to analyze data on your desktop, as well as run MATLAB analytics on the big data platform Hadoop.
4. Hadoop: With the MapReduce and Datastore functionality built into MATLAB, you can develop algorithms on your desktop and directly execute them on Hadoop. To get started, access a portion of your big data stored in HDFS with the MATLAB datastore function, and use this data to develop MapReduce-based algorithms in MATLAB on your desktop. Then use MATLAB Distributed Computing Server to execute your algorithms within the Hadoop MapReduce framework against the full data set stored in HDFS. To integrate MATLAB analytics with production Hadoop systems, use MATLAB Compiler to create applications or libraries from MATLAB MapReduce-based algorithms.
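The datastore pattern above is MATLAB-specific, but the underlying incremental-import idea is language-neutral. As a hedged illustration, the Python sketch below iterates over a hypothetical CSV too large for memory in fixed-size chunks, reducing each chunk as it goes; the file name and column name are invented.

# Chunked iteration over an out-of-memory file: the same incremental-import
# idea as MATLAB's datastore, sketched with pandas. File/columns hypothetical.
import pandas as pd

total, count = 0.0, 0
for chunk in pd.read_csv("huge_metrics.csv", chunksize=1_000_000):
    # Reduce each chunk immediately; never hold the full data set in memory.
    total += chunk["response_ms"].sum()
    count += len(chunk)

print(f"mean response over {count:,} rows: {total / count:.2f} ms")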

316 (APM): Invited: Optimal Design Principles for Better Performance of Next generation Systems

Room: Jones
Invited: Optimal Design Principles for Better Performance of Next generation Systems
Balachandar Gurusamy (INFOSYS LIMITED, USA); Maheshgopinath Mariappan and Indranil Dharap (INFOSYS LIMITED, India)
APM
Design plays a vital role in the software engineering methodology. Proper design ensures that the software will serve its intended functionality. The design of a system should cover both functional and nonfunctional requirements. Designing for the nonfunctional requirements is very difficult in the early stages of the SDLC, because the actual requirements are less clear and primary focus is given to functional requirements. Design-related errors are really difficult to address and might cost millions to fix at a later stage. This paper describes various real-life performance issues and the design aspects to be taken care of for better performance.
Balachandar Gurusamy

317 (CONF): Vendor Tools: IBM Performance Monitoring Introductory Workshop - Part 1

Wayne Bucek
Room: Morrison

This session provides a hands-on lab environment for you to explore new innovations for z Systems management and cost optimization. Three key objectives for this session:
• Explore top common performance monitoring techniques for z/OS®, CICS, IMS™, MQ, DB2®, Storage, and TCPIP & VTAM® Networks using IBM OMEGAMON® V5 features
• Gain deeper insights to drive efficiency. Transform IT operational big data with advanced Log Analytics to detect problems and avoid outages proactively.
• Manage and track usage to drive cost savings. Leverage IT Asset management tooling to enable compliance and mitigate risk.
Through multiple hands-on lab exercises which are hosted on live z Systems environments, you will explore new ways to best monitor performance, discover a deeper level of insights from SYSLOG, and review your software asset usage in the enterprise. Each unit takes about 20-30 minutes, self-paced, at your choice. Instructors and hand-outs are available.
Lab #A - Real-time Log Analytics
• Advanced searches and usage scenarios with LOG streaming
• Dashboards - the power of summarization out of multiple log data sources
• Gain operational insight - quick glance of anomalies for security and performance issues
Lab #B - Performance monitoring
• OMEGAMON XE V5 Service Management Suite for z/OS, CICS, IMS, DB2, MQ, Storage and Mainframe Networks
• Explore Tivoli Enterprise Portal with alert and workspace customization
• Integration with IBM z Log Analytics

Tuesday, November 3, 10:15 - 10:30

CONF: Break

Tuesday, November 3, 10:30 - 11:30

321 (zOS): Invited: IBM z13 and I/O Enhancements

Room: Anacacho
Invited: IBM z13 and I/O Enhancements
Stephen Guendert (Computer Measurement Group & IEEE, USA)
zOS
This session will discuss all of the IBM z13-related I/O technology enhancements from 2015, including but not limited to: FICON Express 16S channels, Forward Error Correction (FEC), SAN Fabric I/O Priority, FICON CUP Diagnostics, FICON Dynamic Routing, and zHPF Extended Distance II.
Presenter bio: Dr. Steve Guendert is z Systems Technology CTO for Brocade Communications, where he leads the mainframe-related business efforts. He was inducted into the IBM Mainframe Hall of Fame in 2017 for his contributions in the area of I/O technology. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), and a member of the Computer Measurement Group (CMG). He is a former member of both the SHARE and CMG boards of directors. Steve has authored over 50 papers on the subject of mainframe I/O and storage networking, as well as two books.
Stephen Guendert

322 (CMG-T): CMG-T: Modeling and Forecasting - Part 2

Room: Peraux
CMG-T: Modeling and Forecasting - Part 2
Michael Salsburg (Independent Consultant, USA)
CMG-T
Although most computing environments are heterogeneous, computer system modeling is, in most ways, platform neutral. The same techniques and tools can be used to model zSeries, Unix / Linux, and Windows. At the heart of these models is the essential queueing network. This course provides the details of the essential queueing network, including the necessary statistics that need to be collected from the system, as well as various modeling techniques that yield insights that cannot be gleaned from observing the actual computer system. Once the model is validated, it can be used to explore "what-if" scenarios where either the workload or the underlying configuration can be changed in the model so that the resulting service levels can be observed. If time permits, an additional section on the subject of time series estimation and forecasting will be presented. This course will not teach you everything you need, but it will give you a full survey of the various approaches with a full bibliography for future reference. This is the second of three sessions. Building on session 1, the discussion turns to understanding the distribution of requests for service as well as the distribution for the service times for each request. It is shown how the understanding of these distributions contributes to developing accurate models that predict IT service end-to-end times. During this session, another approach to computer performance modeling, simulation modeling, is introduced. The basics of simulation modeling to predict computer performance are presented.
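
As a taste of the material, here is a minimal "what-if" for a single queueing station (an M/M/1 approximation; the service time and arrival rates below are invented for illustration):

    # Response time R = S / (1 - U) for one open server,
    # where U = X * S (arrival rate times service time).
    def mm1_response_time(service_time, arrival_rate):
        utilization = arrival_rate * service_time
        assert utilization < 1, "server saturates at U >= 1"
        return service_time / (1 - utilization)

    for rate in (50, 80, 95):   # requests/sec against a 10 ms service time
        r_ms = mm1_response_time(0.010, rate) * 1000
        print(f"{rate}/s -> {r_ms:.0f} ms")

Even this toy model shows the nonlinearity the course builds on: response time climbs from 20 ms at 50/s to 50 ms at 80/s, then to 200 ms at 95/s as the server approaches saturation.
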
Presenter bio: Dr. Salsburg is an independent consultant. Previously, Dr. Salsburg was a Distinguished Engineer and Chief Architect for Unisys Technology Products. He was founder and president of Performance & Modeling, Inc. Dr. Salsburg has been awarded three international patents in the area of infrastructure performance modeling algorithms and software. In addition, he has published over 70 papers and has lectured world-wide on the topics of Real-Time Infrastructure, Cloud Computing and Infrastructure Optimization. In 2010, the Computer Measurement Group awarded Dr. Salsburg the A. A. Michelson Award.
Michael Salsburg

323 (ORG): Top Interviewing Tips You Need to Know Now

Room: Draper
Top Interviewing Tips You Need to Know Now
Denise P Kalm (Kalm Kreative, Inc, USA)
ORG
Times have changed. Where jobs were once plentiful and it was easy to keep the job you already had, now most people will find themselves laid off at least once in their career. Even if you survive that, dramatic changes in the industry mean that the job you have may no longer work for you. With unprecedented competition for jobs, having an 'edge' in the interview can mean the difference between landing the job you want at the pay you desire and being an also-ran. The skills you will learn here will also help you negotiate terms at your present job, such as raises and promotions.
Presenter bio: Denise P. Kalm is the Chief Innovator at Kalm Kreative, Inc., a marketing services organization. Her experience as a performance analyst/capacity planner, software consultant, and then marketing maven at various software companies grounds her work providing contract writing, editing, marketing and speaking services. She is a frequently published author in both the IT world and outside and has 3 books: Lifestorm; Career Savvy - Keeping & Transforming Your Job; and Tech Grief - Survive & Thrive Thru Career Losses (with L. Donovan). Kalm is a requested speaker at such venues as SHARE, CMG and ITFMA and has enhanced her skills through Toastmasters, where she has earned her ACG/ALB. She is also a personal coach at DPK Coaching.
Denise P Kalm

324 (CMG-T): CMG-T: z/OS Storage Performance: Tutorial Part 2 - All Flash or Autotiering Storage Systems?

Room: Cavalier
CMG-T: z/OS Storage Performance: Tutorial Part 2 - All Flash or Autotiering Storage Systems?
Gilbert Houtekamer (IntelliMagic, The Netherlands)
CMG-T
Flash storage is making big inroads in almost all installations because of its attractive performance and density. In this tutorial we will focus on the use of Flash and SSD in high-end storage systems such as those used for z/OS storage environments - storage systems that are selected not only on cost and performance, but also on replication capabilities and resilience in general. First, we will discuss the strengths of both Flash and HDDs, to understand where each technology might work best. Second, we will review the general architecture of z/OS storage systems, and where Flash and SSD fit in these architectures. Third, the capabilities and risks of automatic tiering will be discussed. Finally, we will compare all-flash and hybrid arrays from a performance and capacity perspective.
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer

325 (Featured Speaker): Data Analytics: The Key to Successful Storage Management in Complex Virtualized Data Centers

Room: Naylor
Data Analytics: The Key to Successful Storage Management in Complex Virtualized Data Centers
Mark Cooke (Nimble Storage, USA)
Featured Speaker
Today's heavily virtualized data centers have the propensity to become massive battlegrounds over precious resources among virtual machines. Some VMs become bullies; they consume more than their fair share of resources, causing latency and performance bottlenecks that threaten other workloads by starving out neighboring VMs. These types of issues often confound even the best virtualization admins, application specialists, and storage teams alike without the proper tools and methodology. To solve these issues before they can disrupt the business, clear visibility up the entire stack is essential. IT organizations need look no further than data-science-based tools to help them successfully resolve these resource contention issues without resorting to expensive infrastructure upgrades. Key takeaways: by the end of the session, you will understand: • How VM-level resource contention issues manifest themselves in virtual environments • How data analytics is the key to establishing visibility into performance and latency issues through the entire stack • What actions to take for rapid resolution of performance bottlenecks without resorting to expensive infrastructure upgrades

326 (APM): Invited: Using R to Discover True Web System Performance

Room: Jones
Invited: Using R to Discover True Web System Performance
Benjamin Mao (CMG and Randstad Technologies)
APM
R is a free software environment and statistical language for statistical analysis and graphics. The sophisticated statistical functions of R can help maximize the benefit of performance testing on web applications tremendously. Given the n-tier nature of today's internet world, true system performance behavior under load is hard to discover because of system resource hiccups and outliers. Performance test result analysis using R enables us to mine performance test data, APM monitoring data, logs, and system resource data for the true system performance under load. This session will illustrate three benefits R brings to performance engineers: 1) discovering and calibrating an accurate real-user access workload model; 2) building webpage performance trend analyses, with outliers removed, for prediction; 3) developing free web-based performance analytics services for enterprise teams with Shiny Server.
Presenter bio: Benjamin Mao is currently a Performance Test Architect at Ulta.com, where he leads continuous performance improvement efforts for high-performing web applications, frontend and backend. He applies web performance best practices to identify web app performance bottlenecks accurately and quickly, and works closely with project teams to provide web app performance assessment, framework architecture validation, code profiling, and capacity planning based on an accurate performance testing approach. Besides being a web performance thinker, he is also a poet.

327 (CONF): Vendor Tools: IBM Performance Monitoring Introductory Workshop - Part 2

Wayne Bucek
Room: Morrison

This session provides a hands-on lab environment for you to explore new innovations for z Systems management and cost optimization. Three key objectives for this session: • Explore top common performance monitoring techniques for z/OS®, CICS, IMS™, MQ, DB2®, Storage, and TCPIP & VTAM® Networks using IBM OMEGAMON® V5 features • Gain deeper insights to drive efficiency: transform IT operational big data with advanced Log Analytics to detect problems and avoid outages proactively • Manage and track usage to drive cost savings: leverage the IT Asset management tool to enable compliance and mitigate risk. Through multiple hands-on lab exercises hosted on live z Systems environments, you will explore new ways to best monitor performance, discover a deeper level of insight from SYSLOG, and review your software asset usage in the enterprise. Each unit is self-paced and takes about 20-30 minutes, at your choice. Instructors and hand-outs are available. Lab #A - Real-time Log Analytics: • Advanced searches and usage scenarios with LOG streaming • Dashboards - the power of summarization from multiple log data sources • Gain operational insight - a quick glance at anomalies for security and performance issues. Lab #B - Performance monitoring: • OMEGAMON XE V5 Service Management Suite for z/OS, CICS, IMS, DB2, MQ, Storage and Mainframe Networks • Explore Tivoli Enterprise Portal with alert and workspace customization • Integration with IBM z Log Analytics

Tuesday, November 3, 11:45 - 12:45

CONF: Lunch

Peacock Alley

Tuesday, November 3, 13:00 - 14:00

331 (Featured Speaker): Resource Optimization for IaaS and SaaS Providers

Room: Anacacho
Resource Optimization for IaaS and SaaS Providers
Daniel A Menasce (George Mason University, USA)
Featured Speaker
Cloud computing has gained significant traction in recent years. Providers of Infrastructure as a Service (IaaS) lease virtual machines (VMs) of different capacities at different costs to their customers. These VMs are deployed in hierarchical infrastructures composed of geographically distributed data centers, each consisting of clusters of racks, with each rack holding several servers. The communication bandwidth and delay between two VMs depend on their relative location within the infrastructure. This talk addresses the problem of optimally allocating VMs for a customer who indicates the communication strength between each pair of VMs, subject to availability constraints. The goal of the optimization for the IaaS provider is to maximize its revenue, given that its charges depend on the relative proximity of the VMs allocated. A second type of problem is that faced by Software as a Service (SaaS) providers, which offer software applications to their customers. We assume that the SaaS provider leases VMs from an IaaS provider. We discuss the problem of optimally deciding how many and what type of VMs should be leased by the SaaS provider to satisfy the requests from its customers subject to constraints on the application's response time. The goal of the SaaS provider is to minimize its cost. Both problems are NP-hard. The talk discusses heuristic techniques used to solve these problems as well as experimental results.
Presenter bio: Daniel Menasce is a University Professor of Computer Science at George Mason University and was the Senior Associate Dean of its School of Engineering from 2005-2012. Menasce holds a PhD in Computer Science from the University of California at Los Angeles. He is the recipient of the 2001 A.A. Michelson Award from CMG, a Fellow of the ACM and of the IEEE, a recipient of the 2017 Outstanding Faculty Award from the State Council of Higher Education of Virginia, and the author of over 250 technical papers that received over 10,500 citations. He is also the author of five books published by Prentice Hall and translated into several languages.
Daniel A Menasce

332 (CMG-T): CMG-T: Modeling and Forecasting - Part 3

Room: Peraux
CMG-T: Modeling and Forecasting - Part 3
Michael Salsburg (Independent Consultant, USA)
CMG-T
Although most computing environments are heterogeneous, computer system modeling is, in most ways, platform neutral. The same techniques and tools can be used to model zSeries, Unix / Linux, and Windows. At the heart of these models is the essential queueing network. This course provides the details of the essential queueing network, including the necessary statistics that need to be collected from the system, as well as various modeling techniques that yield insights that cannot be gleaned from observing the actual computer system. Once the model is validated, it can be used to explore "what-if" scenarios where either the workload or the underlying configuration can be changed in the model so that the resulting service levels can be observed. If time permits, an additional section on the subject of time series estimation and forecasting will be presented. This course will not teach you everything you need, but it will give you a full survey of the various approaches with a full bibliography for future reference. This is the third of three sessions. It builds on the previous two sessions. Using the simulation concepts from the second session, the attendees are introduced to a simulation model that simulates a hypervisor that is used for server virtualization. The last portion of the session is focused on analytical methods to forecast trends. This includes the basics of linear regression as well as the basics of time series forecasting.
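
A flavor of the forecasting portion, as a minimal sketch (the utilization figures are invented; a real study would also examine residuals and confidence intervals):

    # Fit a straight line to monthly CPU utilization and extrapolate.
    import numpy as np

    months = np.arange(12)
    cpu_util = np.array([41, 43, 44, 47, 48, 51, 53, 54, 57, 59, 60, 63])

    slope, intercept = np.polyfit(months, cpu_util, 1)
    for m in (12, 13, 14):
        print(f"month {m}: {slope * m + intercept:.1f}% predicted CPU")
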
Presenter bio: Dr. Salsburg is an independent consultant. Previously, Dr. Salsburg was a Distinguished Engineer and Chief Architect for Unisys Technology Products. He was founder and president of Performance & Modeling, Inc. Dr. Salsburg has been awarded three international patents in the area of infrastructure performance modeling algorithms and software. In addition, he has published over 70 papers and has lectured world-wide on the topics of Real-Time Infrastructure, Cloud Computing and Infrastructure Optimization. In 2010, the Computer Measurement Group awarded Dr. Salsburg the A. A. Michelson Award.
Michael Salsburg

333 (VIRT): Understanding VMware Capacity

Room: Draper
Understanding VMware Capacity
Phillip Bell (Metron Technology, United Kingdom (Great Britain))
VIRT
VMware is the go-to option for virtualization for many organizations, and has been for some time. The longer it's been around, the more focus there is on making efficiency savings for the organization. This is where the Capacity Manager really needs to understand the technology, how to monitor it, and how to decide what headroom exists. In this presentation, we'll take a look at some of the key topics in understanding VMware capacity: • Why OS monitoring can be misleading • 5 Key Metrics • Measuring Processor Capacity • Measuring Memory Capacity • Calculating Headroom in VMs
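
The headroom arithmetic the talk covers can be as simple as the following sketch (all figures, including the 20% failover buffer, are illustrative assumptions, not recommendations):

    # Compare usable cluster capacity with measured VM demand.
    cluster_ghz = 4 * 16 * 2.6        # 4 hosts, 16 cores each, 2.6 GHz
    usable_ghz = cluster_ghz * 0.8    # hold back 20% for HA/failover
    demand_ghz = 92.0                 # measured VM CPU demand
    avg_vm_ghz = 1.3                  # demand of a typical VM

    headroom_vms = (usable_ghz - demand_ghz) / avg_vm_ghz
    print(f"room for about {headroom_vms:.0f} more average VMs")
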
Presenter bio: I first started working in the capacity management field in 2000. Initially I was involved with a product for Unisys Mainframes, and through various roles as both a user and vendor of capacity planning and management software, I have spread my experience over everything from AS400 to VMware. For the past 10 years I've been working for Metron as a Consultant. This role continues to bring me into contact with capacity management teams from all sectors, and using a wide variety of technology.
Phillip Bell

334 (CMG-T): CMG-T: z/OS Storage Performance: Tutorial Part 3 - Instrumentation in the Black Box

Room: Cavalier
CMG-T: z/OS Storage Performance: Tutorial Part 3 - Instrumentation in the Black Box
Gilbert Houtekamer (IntelliMagic, The Netherlands)
CMG-T
Storage systems are complicated processing systems in their own right that do far more than simply store data. Local copies and synchronous and asynchronous replication are some of the obvious tasks done in the background, but there is a lot more housekeeping going on, and as more advanced functions are introduced, these internal tasks multiply. Because of all the advanced functions in the storage systems, host-based data like RMF does not provide the full picture. To some extent this was addressed long ago with cache counters, and more recently with link and RAID group metrics, but this just scratches the surface. In this advanced tutorial we will show how you can use RMF-based metrics to estimate internal metrics such as host adapter utilization. We will also present instrumentation provided by the hardware vendors in their own SMF records: both IBM and EMC provide SMF records to monitor the status of remote replication.
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer

335 (APM): An Affordable Care Act Web Site Load Testing Experience

Room: Naylor
An Affordable Care Act Web Site Load Testing Experience
James Brady (State of Nevada, USA)
APM
Much has been written about performance problems with Affordable Care Act web sites and the fact that in many instances little or no performance testing was done prior to going online. The State of Nevada made performance testing a priority with its second round Nevada Health Link web site and this author was asked to load test its user account creation functionality. The effort revealed a mismatch between the hardware configuration and software design that caused a portion of the accounts to be partially created under load. A reconfiguration of the hardware prior to the November 15, 2014 go live date solved the problem. This paper walks through that experience from the challenges faced developing the load testing script to the insights gained in the performance testing process.
Presenter bio: Jim has worked 40 years in the telecommunications and computer industries for GTE, Tandem Computers, Siemens, and currently is the Capacity Planner for the State Of Nevada. At GTE he worked in both Data Center Capacity Planning and Digital Switching Traffic Capacity determination. While at Siemens he obtained EU and US patents for a traffic overload control mechanism used in multiple products including a VoIP Switch. He holds BS and MS degrees in Operations Research from The Ohio State University.
James Brady

336 (C&M): Invited: Testing the Performance of Mobile Apps

Room: Jones
Invited: Testing the Performance of Mobile Apps
Bill Nicholson (Neotys, USA)
C&M
Today's remarkable mix of cloud computing, ever-smarter mobile devices, and prolific application development has changed the way we develop and test applications. Now, deployed applications deliver different content and functionality depending on whether the user is accessing it via a browser, a cell phone, a tablet, etc. Moreover, applications are accessed over a myriad of network configurations, including wireless and mobile networks. These new approaches create a previously unseen set of testing challenges. Every user has a different kind of network constraint (mobile network, Wi-Fi, Ethernet...etc.) that directly impacts the behavior and performance of applications. Emulating these constraints by introducing not only connection speeds, but also parameters such as packet loss and network latency is a crucial part of the load testing process. Bill Nicholson presents an in-depth look at the ramifications of these new technologies and testing constraints for mobile application performance. Discover some of the best approaches testers are taking to ensure high mobile application performance delivery to all end-users, at all times, regardless of device.
Presenter bio: I run Professional Services and Support for Neotys USA where I help organizations looking to solve load and performance testing problems for their web and mobile applications. I have more than 20 years of experience managing software quality assurance projects for industry-leading organizations including Fidelity Investments and the Commonwealth of Massachusetts. I am a performance testing tool expert with a number of Trainer/Teacher certifications from Neotys, IBM Rational, and HP. I have previously presented at STAREAST.
Bill Nicholson

337 (CONF): Vendor Tools: AppEnsure

Sri Chaganty
Room: Morrison

Tuesday, November 3, 14:00 - 14:15

CONF: Break

Tuesday, November 3, 14:15 - 15:15

341 (APM): Invited: Creating a Performance Testing Framework that Fits a Development Workflow

Room: Anacacho
Invited: Creating a Performance Testing Framework that Fits a Development Workflow
Anoush Najarian (MathWorks, USA)
APM
Automated performance testing of an evolving product presents many challenges, including the need to account for run-to-run variance, the test environment, and the application's previous performance. In order to face these challenges we developed PerftestRunner, a suite of general purpose tools that simplify performance and scalability testing in both interactive and automated environments. These tools provide a flexible, language-agnostic interface that allows almost any type of test to be run, simplifying testing by providing data visualization, data management, and built-in tracking of performance over time. This talk will go over how we apply these tools to MATLAB Production Server and other products to test performance within developer workflow and to do automated performance testing in our build-and-test environment.
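
The talk describes PerftestRunner itself; as a generic sketch of the variance-aware gating idea (not MathWorks code; the data and 3-sigma tolerance are invented):

    # Flag a regression only when the new run leaves the
    # baseline's run-to-run noise band.
    import statistics

    baseline_runs = [102, 98, 101, 99, 100, 103]   # ms, prior builds
    new_run = 109.0                                # ms, current build

    mean = statistics.mean(baseline_runs)
    noise = 3 * statistics.stdev(baseline_runs)    # 3-sigma tolerance

    if new_run > mean + noise:
        print(f"regression: {new_run} ms vs {mean:.1f} +/- {noise:.1f} ms")
    else:
        print("within normal run-to-run variance")
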
Presenter bio: Anoush Najarian is a Software Engineering Manager at MathWorks in Natick, MA where she leads the MATLAB Performance Team. She holds Master's degrees in Computer Science and Mathematics from the University of Illinois at Urbana-Champaign, and an undergraduate degree in CS and applied math from the Yerevan State University in her native Armenia. Anoush has been serving on the CMG Board of Directors since 2014, and has served as the Social Media chair for CMG2015.
Anoush Najarian

342 (C&M): Invited: Performance Assurance for Big Data World

Room: Peraux
Invited: Performance Assurance for Big Data World
Boris Zibitsker (BEZNext, USA)
C&M
Today's fast-paced businesses have to make decisions in real time. That creates pressure on IT leaders to develop Big Data applications incorporating advanced analytics capable of processing large volumes of data and delivering recommendations fast. Building these types of applications efficiently and cost-effectively presents significant challenges. Decisions made during the application life cycle, including proof of concept, design, selection of algorithms and architecture, application development, testing, implementation, and performance management, affect performance, scalability, and cost. Many companies are attracted to Big Data because the hardware is cheap and open source software is free. However, many users realize that managing complex infrastructures with dynamic workloads is not easy. In this presentation we will review several case studies applying descriptive, diagnostic, predictive, prescriptive, and control analytics, based on machine learning, for performance assurance of Big Data applications across the application life cycle.
Presenter bio: Boris Zibitsker is a specialist in predictive analytics. As CEO of BEZNext, he manages the development of new technologies and consults with companies on applying predictive and prescriptive analytics to the optimization of business and IT. As founder, CTO, and Chairman of BEZ Systems, he managed development of capacity management tools for Teradata, Oracle, DB2, and SQL Server until the company was sold to Compuware. As CTO of Modeling and Optimization at Compuware he developed algorithms for detecting and predicting performance and availability issues. As an Adjunct Associate Professor, Boris taught graduate courses at DePaul University in Chicago and seminars at Northwestern University, the University of Chicago, and the Relational Institute; he has also taught seminars in the USA, South America, Europe, Asia, and Africa. He is the author of many papers and the organizer of Big Data Predictive Analytics training and certification.
Boris Zibitsker

343 (VIRT): Invited: Software Memories, Simulated Machines

Room: Draper
Invited: Software Memories, Simulated Machines
William Louth (CMG & Autoletics BV, The Netherlands)
VIRT
Your software has memory but no memories. But what if software had the ability to recall and with it the ability to play out episodic (behavioral) memories time and time again in a different space - a simulated mirror world? What if software machines could see each other act, much like humans do, without the machine code needing to send a message or make a call? What if we created a matrix for the machine that allowed us to extend and augment software post-execution, irrespective of language, runtime and platform? Today it is common to replicate data across machine boundaries but what of execution behavior? Whilst distributed middleware has allowed us to move execution across process and machine boundaries, at a very coarse granularity, these calls do not necessarily represent the replication of inherent software behavior, but merely a form of service delegation. The type of mirroring referred to here is the simulated playback, online or offline, of a software's execution behavior in which a thread performs a local function or procedure call that is near-simultaneously mirrored in one or more "paired" runtimes. In this talk a vision is presented for the future of large scale distributed software development, deployment and monitoring that is based on mirrored simulation of software execution behaviour (motion) and its environment (state) for reinterpretation and augmentation across space and time. When fully realized across multiple languages and platforms this vision has the potential to be one of the most significant advances in the engineering of software systems. The talk will touch on the following topics which have inspired this approach: • activity theory • mirror neurons and simulated embodiment • simulation theory (and the matrix) • multiverses • episodic memories and dreams as well as indirectly: • discrete event simulation • actor programming model • supervision and control • signals and boundaries This talk offers a model of human and software understanding based on activities actioned by actors within an environment supporting observation and perception of such acts including the situational context surrounding them, both before and after. The model is used to capture software behavior that is then streamed and mirrored into a Machine Matrix in which extensions, adaptations and augmentations are applied post execution as playback of behavior is simulated across 1000s of threads and processes. The future of software will be simulated, as will the past and present...eventually.
Presenter bio: A renowned software engineer with particular expertise in self adaptive software runtimes, adaptive control, self-regulation, resilience engineering, information visualization, software simulation & mirroring as well as performance measurement and optimization.
William Louth

344 (CMG-T): CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 1

Best Tutorial - CMG India
Room: Cavalier
CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 1
Sundarraj Kaushik (Tata Consultancy Services, India); Prajakta Bhatt (Infosys Technologies Limited, India)
CMG-T
Web-based applications have been popular for more than a decade, and there is an abundance of performance tuning techniques available on the internet. Yet IT projects repeatedly run into performance issues. The irony of the situation is that most of the time the fixes are obvious and pretty much common sense; unfortunately, quality control within IT delivery is just a process-oriented checklist with no technical meat on how to avoid and rectify commonly occurring problems. This tutorial puts forth that common sense in an explicit form, with the intention that IT project managers can use it as a checklist during design, implementation, and firefighting. We cover very simple performance tuning techniques across the Web, App, and DB tiers, and also provide references to useful performance analysis tools. The set of anti-patterns and tuning techniques presented is a must-read not only for performance engineers but also for IT application designers, developers, and administrators. Following the techniques presented will help you avoid more than 90% of the performance issues typically seen in web-based OLTP applications. While we cover examples using J2EE and RDBMS, many of the principles remain the same for other technologies as well.
Presenter bio: Sundarraj started his career on the IBM 370 and was part of a team that worked on Stratus systems to set up automated trading for what is now the largest stock exchange in India. He has spent the last 14+ years developing web applications and is closely associated with performance tuning of web and other applications.

345 (APM): Automatic Workload Characterization Using System Log Analysis

Room: Naylor
Automatic Workload Characterization Using System Log Analysis
Mahmoud Awad and Daniel Menascé (George Mason University, USA)
APM
In a previous related work, the authors presented a framework for the dynamic derivation of analytical performance models in autonomic systems. The framework automates the process of deriving and parameterizing Queuing Network (QN) performance models even when detailed knowledge of the system characteristics and user behavior are not readily available. In this paper, we begin to explore and implement the various components of the framework, where we focus on automated workload characterization using system log analysis. In particular, we show an automated technique for generating a Customer Behavior Model Graph (CBMG) by reverse engineering high-level application workflows at the user interface level given the availability of system logs. We ran a number of experiments on the Apache OFBiz ERP system and used Logstash to parse the embedded Apache Tomcat access logs. The results show that our approach for deriving CBMGs is accurate and can be used to estimate the various system workloads.
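
The core CBMG step can be pictured as follows - a toy sketch, not the paper's implementation (the per-session page sequences would come from the parsed access logs; the state names are hypothetical):

    # Turn per-session page sequences into transition probabilities.
    from collections import defaultdict

    sessions = [
        ["home", "browse", "cart", "checkout"],
        ["home", "browse", "browse", "exit"],
        ["home", "search", "browse", "cart", "exit"],
    ]

    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1

    for state, nexts in counts.items():
        total = sum(nexts.values())
        print(state, {n: round(c / total, 2) for n, c in nexts.items()})
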
Presenter bio: Mahmoud Awad is a Ph.D. candidate in Information Technology at George Mason University in Fairfax, Virginia. He is a systems architect with over twenty years of experience in information technology, software engineering and system architecture. He is currently working as a contractor for the U.S. Forest Service (USFS - USDA) . Prior to that, he worked as a software engineer at Oracle Corporation, McDonald Bradley, Inc. and SAIC-GSC on various information systems projects for the Environmental Protection Agency, Internal Revenue Service, Food and Drug Administration, National Institutes of Health and Federal Aviation Administration. He earned his Bachelor of Computer Science from Yarmouk University in Jordan and his Master of Computer Science from the University of Nebraska at Omaha.

346 (APM): What Performance and Capacity People Need to Know About Intelligent BPM

Room: Jones
What Performance and Capacity People Need to Know About Intelligent BPM
Denise P Kalm (Kalm Kreative, Inc, USA); John Rhodes (CM First Group, USA)
APM
Business Process Management (BPM) used to be a heavy-weight way to manage development, integrating business processes and IT workloads. Many didn't use it, despite its ability to offer a way to more easily craft applications; it just didn't offer what was required for smart development. Now, intelligent, lighter-weight BPM solutions offer a better way. iBPM links business performance and cost to the underlying hardware and operating system. But what's in it for performance and capacity people? The biggest impact on performance and resource demand was baked into the application as it was coded; parameters and performance tricks can only do so much. The only way you can really tell whether your business processes are more efficient is from iBPM's hard, tangible performance metrics - for example, measuring how efficiently loans are being approved or insurance policies issued. When you understand the benefits that can be achieved with the continuous process improvement aspects of BPM, you can team with developers so that both sides benefit from better-performing, lighter-footprint applications. Learn how BPM can be a great tool in your toolkit.
Presenter bio: Denise P. Kalm is the Chief Innovator at Kalm Kreative, Inc., a marketing services organization. Her experience as a performance analyst/capacity planner, software consultant, and then marketing maven at various software companies grounds her work providing contract writing, editing, marketing and speaking services. She is a frequently published author in both the IT world and outside and has 3 books: Lifestorm; Career Savvy - Keeping & Transforming Your Job; and Tech Grief - Survive & Thrive Thru Career Losses (with L. Donovan). Kalm is a requested speaker at such venues as SHARE, CMG and ITFMA and has enhanced her skills through Toastmasters, where she has earned her ACG/ALB. She is also a personal coach at DPK Coaching.
Denise P Kalm

347 (CONF): Vendor Tools: What's New in ASG PERFMAN 2020

Peter Weilnau
Room: Morrison

ASG PERFMAN is used by some of the largest IT organizations to manage complex multiplatform environments, typically made up of UNIX, Linux, Windows, VMware, and z/OS. This session will demonstrate some of the newest capabilities added to PERFMAN 2020, which are specifically designed to simplify life for capacity planners and performance analysts. For current users, this will serve as a "what's new"; for potential users, it will be a rapid-fire introduction to "what is possible".

Tuesday, November 3, 15:15 - 15:45

CONF: Break

Tuesday, November 3, 15:45 - 16:45

351 (Featured Speaker): Rethinking Randomness: What You Need to Know

Room: Anacacho
Rethinking Randomness: What You Need to Know
Jeff Buzen (Independent Consultant, USA)
Featured Speaker
Models of computer performance - both analytic and simulation - are often based on the assumption that the detailed behavior of the system being analyzed is driven by random forces: in other words, each step-by-step change in a system's state is determined by samples drawn at random from associated probability distributions. Models that employ this approach can work very well in practice, even when the assumption of randomness is difficult to justify on an intuitive level and impossible to verify with certainty. Operational Analysis was developed in the 1970s to provide an alternative framework for specifying and analyzing models of computer performance without introducing the assumption of randomness. This presentation provides an overview of Observational Stochastics, the successor to Operational Analysis. Observational Stochastics, which is based upon structures known as loosely constrained deterministic (LCD) models, offers new insights into the relationship between randomness, uncertainty and observability while also resolving some misunderstandings about Operational Analysis that have arisen in the past.
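
A small example of the operational flavor: the relationships below are computed purely from measured quantities over an observation window, with no distributional assumptions (the numbers are invented):

    # Utilization Law from directly observable counts.
    completions = 36_000          # jobs completed in the window
    window = 3_600                # seconds observed
    busy_time = 2_520             # seconds the server was busy

    X = completions / window      # throughput (jobs/sec)
    S = busy_time / completions   # mean service time per job
    U = X * S                     # Utilization Law: U = X * S

    print(f"X = {X:.1f} jobs/s, S = {S:.3f} s, U = {U:.0%}")
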
Presenter bio: Co-founder and Chief Scientist, BGS Systems, 1975 - 1998
Jeff Buzen

352 (ARCAP): Developing Predictive and Prescriptive Business Analytics: A Case Study

Room: Peraux
Developing Predictive and Prescriptive Business Analytics: A Case Study
Chiranjoy Das (Co-Author, USA); Armin Roeseler (DirectBuy, USA); Gayatri Thakkar (Co-Author, USA)
ARCAP
In this paper, we show how a Business Intelligence (BI) function in a Corporate environment can be elevated to provide forward-looking analytics that enable decision makers to better position themselves vis-à-vis an uncertain future. Through the deployment of a suitable BI architecture, coupled with the selection of appropriate analysis tools, we provide a framework for advanced analytics. This framework not only helps to explain the operational past, but also enables analysts to predict future business outcomes. The elements of the BI architecture are introduced, and key characteristics of analysis tools are presented. Finally, a case study shows how an elevated BI function contributes to budget allocation decisions for marketing channels, and how customer retention is improved through targeted membership outreach campaigns in a Retail environment.
Presenter bio: CJ has 25 years of experience in IT as a software architect, IT manager, and director. He currently works for DirectBuy (E-Commerce) as a Director of Applications Development and is involved in managing and leading multiple high-visibility, high-demand global projects. He has been entrusted with making the company fully digital. He has rich experience not only in managing clients and projects but also in dealing with all business units, vendors, and varied technologies. He has been instrumental in defining and implementing IT strategies with regard to E-Commerce, BI, Mobile, Big Data, and Sentiment Analysis. His forte is providing innovative technical solutions to the business.

353 (Featured Speaker): Getting Performance Information from Oracle Infrastructure

Room: Draper
Getting Performance Information from Oracle Infrastructure
Kellyn Pot'Vin-Gorman (Oracle, USA)
Featured Speaker
For Oracle Database Administrators, Enterprise Manager (EM) is the de facto monitoring and performance management tool in the product arsenal. This presentation will discuss what performance and capacity professionals should know about Oracle DB performance. Attendees will leave with a clear view of how much has changed in recent releases to give everyone in IT the information on performance, monitoring, and management that empowers them to be proactive in a reactive world. Come join Kellyn Pot'Vin-Gorman, the EM12c Goth Girl, and learn why Enterprise Manager is becoming the center of the IT universe.
Presenter bio: Kellyn Pot'Vin-Gorman is a member of the Oak Table Network and was an Oracle ACE Director until joining Oracle as the Consulting Member for the Strategic Customer Program, a specialized group of Enterprise Manager Specialists. Kellyn is known for her extensive work with Enterprise Manager 12c, its command line interface, environment optimization tuning, automation and architecture design. Her blog, http://dbakevlar.com and social media activity under her handle, DBAKevlar is well respected for her insight and content. She is the lead author on a number of technical books, hosts webinars for numerous technical groups, including All Things Oracle and has presented at Oracle Open World, HotSos, Collaborate, KSCOPE, along with other US and European conferences. Kellyn is a strong advocate for Women in Technology, (WIT) citing education on topics regarding stereotypes and presenting opportunities early as the path to overcoming challenges.
Kellyn Pot'Vin-Gorman

354 (CMG-T): CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 2

Best Tutorial - CMG India
Room: Cavalier
CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 2
Sundarraj Kaushik (Tata Consultancy Services, India); Prajakta Bhatt (Infosys Technologies Limited, India)
CMG-T
Web-based applications have been popular for more than a decade, and there is an abundance of performance tuning techniques available on the internet. Yet IT projects repeatedly run into performance issues. The irony of the situation is that most of the time the fixes are obvious and pretty much common sense; unfortunately, quality control within IT delivery is just a process-oriented checklist with no technical meat on how to avoid and rectify commonly occurring problems. This tutorial puts forth that common sense in an explicit form, with the intention that IT project managers can use it as a checklist during design, implementation, and firefighting. We cover very simple performance tuning techniques across the Web, App, and DB tiers, and also provide references to useful performance analysis tools. The set of anti-patterns and tuning techniques presented is a must-read not only for performance engineers but also for IT application designers, developers, and administrators. Following the techniques presented will help you avoid more than 90% of the performance issues typically seen in web-based OLTP applications. While we cover examples using J2EE and RDBMS, many of the principles remain the same for other technologies as well.
Presenter bio: Sundarraj started his career on the IBM 370 and was part of a team that worked on Stratus systems to set up automated trading for what is now the largest stock exchange in India. He has spent the last 14+ years developing web applications and is closely associated with performance tuning of web and other applications.

355 (APM): Modeling the Tradeoffs Between System Performance and CPU Power Consumption

Room: Naylor
Modeling the Tradeoffs Between System Performance and CPU Power Consumption
Daniel A Menasce (George Mason University, USA)
APM
Power consumption at modern data centers is now a significant component of the total cost of ownership. There are many components that contribute to server energy consumption. CPU, memory, and disks are among the most important ones. Most modern CPUs provide Dynamic Voltage and Frequency Scaling (DVFS), which allows the processor to operate at different levels of voltage and clock frequency values. The dynamic power consumed by a CPU is proportional to the product of the square of the voltage and the CPU clock frequency. Lower CPU clock frequencies increase the CPU execution time of a job. This paper examines the tradeoffs between system performance and CPU clock frequency. A multiclass analytic queuing network model is used to determine the optimal CPU clock frequency that minimizes the relative dynamic power while not exceeding user-established SLAs on response times. The paper also presents an autonomic DVFS framework that automatically adjusts the CPU clock frequency in response to the variation of workload intensities. Numerical examples illustrate the approach presented in the paper.
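
The tradeoff at the heart of the paper can be sketched numerically; the voltage/frequency pairs and workload below are invented, and a single M/M/1 queue stands in for the paper's multiclass network:

    # Dynamic power ~ V^2 * f; a slower clock stretches service time,
    # which stretches response time - until the queue saturates.
    pairs = [(1.00, 1.0), (0.90, 0.8), (0.80, 0.6)]  # (volts, freq fraction)
    base_service, arrival_rate = 0.010, 60.0         # 10 ms/job at full clock

    for v, f in pairs:
        power = v**2 * f                 # relative dynamic power
        s = base_service / f             # service time at reduced clock
        u = arrival_rate * s
        r = s / (1 - u) if u < 1 else float("inf")
        print(f"f={f:.1f}: power={power:.2f}, response={r * 1000:.0f} ms")

In this toy setting, dropping to 80% clock cuts relative dynamic power to about 0.65 but doubles response time, and 60% clock saturates the workload entirely - exactly the kind of SLA boundary the paper's optimization must respect.
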
Presenter bio: Daniel Menasce is a University Professor of Computer Science at George Mason University and was the Senior Associate Dean of its School of Engineering from 2005-2012. Menasce holds a PhD in Computer Science from the University of California at Los Angeles. He is the recipient of the 2001 A.A. Michelson Award from CMG, a Fellow of the ACM and of the IEEE, a recipient of the 2017 Outstanding Faculty Award from the State Council of Higher Education of Virginia, and the author of over 250 technical papers that received over 10,500 citations. He is also the author of five books published by Prentice Hall and translated into several languages.
Daniel A Menasce

356 (APM): Performance Prediction for Enterprise Application Migration

Room: Jones
Performance Prediction for Enterprise Application Migration
Dheeraj Chahal (TCS, India); Subhasri Duttagupta (Tata Consultancy Services, India); Manoj Nambiar (Tata Consultancy Services & Institute of Electrical and Electronics Engineers, India)
APM
Performance prediction of an enterprise application under a given workload, when migrating from a test to a production environment or to a different target architecture, is a challenging task. Prior works have attempted to use analytical modeling, simulation techniques, or intuitive and empirical approaches to predict the performance of an application. These techniques require detailed knowledge of the application and exact resource demand information, which may not be feasible to acquire in a production environment. In this paper, we propose strategies for predicting the performance of an application of interest based on its similarity to previously profiled applications in a benchmark suite. Our approach uses performance metrics like CPU%, disk%, and throughput, and employs a statistical technique called principal component analysis (PCA) to select a suitable proxy application (or applications) that can be used to predict performance on a target architecture. Further, using our previously developed extrapolation tool PerfExt, the proposed techniques can predict the maximum throughput, maximum concurrency (users), and the bottleneck resource on the target platform. We evaluate our approaches using an auction site prototype and an online shopping application. Performance metrics are predicted to within a 15% error bound, and resource bottlenecks are predicted correctly.
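
A compressed sketch of the proxy-selection idea (the numbers are invented; the paper's pipeline, including PerfExt, is considerably richer):

    # Project apps onto principal components; the nearest
    # benchmark in PC space becomes the proxy.
    import numpy as np

    # rows: benchmark apps; cols: cpu%, disk%, throughput (normalized)
    bench = np.array([[0.9, 0.2, 0.8],
                      [0.3, 0.8, 0.4],
                      [0.6, 0.5, 0.7]])
    target = np.array([0.8, 0.3, 0.75])   # application of interest

    mu = bench.mean(axis=0)
    _, _, vt = np.linalg.svd(bench - mu, full_matrices=False)
    pcs = vt[:2]                           # top two components

    b_proj = (bench - mu) @ pcs.T
    t_proj = (target - mu) @ pcs.T
    proxy = int(np.argmin(np.linalg.norm(b_proj - t_proj, axis=1)))
    print("closest proxy benchmark:", proxy)
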
Presenter bio: Dheeraj Chahal is a Consultant and Sr. Scientist with Performance Engineering group at TCS innovations lab, Mumbai, India. Prior to joining TCS, he worked as Staff Software Engineer with HPC team at IBM, Bangalore, India. Dheeraj holds a PhD degree in Computer Science from Clemson University, SC, USA.

357 (CONF): Vendor Tools: z/OS Capping and Automation: What's in Your Tool Box?

John Baker, MVS Solutions
Room: Morrison

We all have our go-to tools. In z/OS, products associated with managing the R4HA and with automation are becoming more and more common. Managing the R4HA offers the most effective method to control software costs; automation matters because even your best analyst can't balance workloads against unpredictable loads driven by digital drivers like mobile, Internet, and analytics. With z/OS, IBM includes a number of features and tools to assist in these areas, such as Defined and Group Capacity (DC/GC) and Capacity Provisioning Manager (CPM). The question is, what more can be done? This presentation will explore these free capabilities from IBM and provide details on their use, functionality, and limitations. We'll explain how ThruPut Manager provides additional capabilities to manage the R4HA even without capping, and provides automation to address the inherent limitations of the IBM facilities. Enjoy increased savings, balanced workloads, and improved deliverables by combining ThruPut Manager with your z/OS tool kit.

Tuesday, November 3, 16:45 - 17:00

CONF: Break

Tuesday, November 3, 17:00 - 18:00

361 (ORG): IT Capacity Management 101

Room: Anacacho
IT Capacity Management 101
Phillip Bell (Metron Technology, United Kingdom (Great Britain))
ORG
This is a high level session that will cover the basics of what the aim of IT Capacity Management is, what the capacity manager actually does, and how this fits with other IT functions. Capacity management has recently worked its way up the list of concerns that senior IT managers have. Gartner says, "By 2016, the availability of capacity and performance management skills for horizontally scaled architectures will be a major constraint or risk to growth for 80 percent of major businesses." This renewed focus on Capacity Management has come about from the introduction of new technologies such as virtualization and cloud computing. To achieve the promised cost objectives of virtualization and cloud infrastructure, Capacity Management needs to be in place and working properly. With traditional distributed architectures many companies have essentially "winged it" - this now comes with greater risks to both doing business and achieving budgets than it ever did before. If you're going to put all your eggs in one basket, it had better be big enough to hold them... Topics we'll cover include: • Goals of capacity management • How to implement Capacity Management • The Mechanics of Capacity Management • Where Capacity and Other Processes intersect
Presenter bio: I first started working in the capacity management field in 2000. Initially I was involved with a product for Unisys Mainframes, and through various roles as both a user and vendor of capacity planning and management software, I have spread my experience over everything from AS400 to VMware. For the past 10 years I've been working for Metron as a Consultant. This role continues to bring me into contact with capacity management teams from all sectors, and using a wide variety of technology.
Phillip Bell

362 (C&M): PANEL: How Do You Manage Hybrid Applications in the Cloud

Room: Peraux

Panelists include: Michael Salsburg, Unisys; Jonah Kowall, AppDynamics; Amy Spellmann, 451 Research; Elisabeth Stahl, IBM. Moderator: Alexander Podelko

PANEL: How Do You Manage Hybrid Applications in the Cloud
Michael Salsburg (Independent Consultant, USA)
C&M
In the future, every enterprise will leverage cloud resources in some way or another. It is no longer "if", it's just "when". Deploying applications using cloud resources can often result in distributing layers of the application (e.g. web tier, app tier, data tier) to their appropriate platforms, depending on their requirements. For example, the web tier could use a public cloud, the app tier may use a private cloud and the data tier may not use any cloud resources at all. To further accelerate this trend, public clouds today are incorporating tools to simplify application deployment. This panel consists of experts in application management to discuss what is possible given the current state of the art, as well as where they think the technology is going in the future.

363 (zOS): Invited: z/OS Performance HOT Topics

Room: Draper
Invited: z/OS Performance HOT Topics
Kathy Walsh (IBM, USA)
zOS
This fast-paced, always new presentation explores the latest information on z Systems and z/OS performance and capacity planning issues. Recent performance enhancements, gotchas, and recommendations are reviewed. Get the latest information on recent performance APARs and ATS performance offerings. This session examines the newly announced IBM z13 and discusses some of its key performance features, such as SMT and SIMD.
Presenter bio: Kathy is an IBM Distinguished Engineer who is an internationally recognized technical leader in the System z platform, covering both hardware and software, with a focus on z/OS performance and System z capacity planning. Kathy provides technical and project leadership within IBM and to customers on the use, deployment and benefits of System z technology. Extensive experience consulting with IBM clients and IBM account teams on the performance and management of their z/OS environments, often in support of customer critical situations. Areas of focus include support for System z processors, LPAR configuration and management, Parallel Sysplex performance, z/OS Workload Manager, RMF, Batch Window issues, Processor Sizing, and support for software pricing. Currently, Kathy is the team leader for the Performance and Capacity Planning team at the IBM Washington Systems Center within Advanced Technical Support.
Kathy Walsh

364 (CMG-T): CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 3

Best Tutorial - CMG India
Room: Cavalier
CMG-T: Performance Engineering Guidelines for Tuning Multi-Tier Applications - Part 3
Sundarraj Kaushik (Tata Consultancy Services, India); Prajakta Bhatt (Infosys Technologies Limited, India)
CMG-T
Web-based applications have been popular for more than a decade, and there is an abundance of performance tuning techniques available on the internet. Yet IT projects repeatedly run into performance issues. The irony of the situation is that most of the time the fixes are obvious and pretty much common sense; unfortunately, quality control within IT delivery is just a process-oriented checklist with no technical meat on how to avoid and rectify commonly occurring problems. This tutorial puts forth that common sense in an explicit form, with the intention that IT project managers can use it as a checklist during design, implementation, and firefighting. We cover very simple performance tuning techniques across the Web, App, and DB tiers, and also provide references to useful performance analysis tools. The set of anti-patterns and tuning techniques presented is a must-read not only for performance engineers but also for IT application designers, developers, and administrators. Following the techniques presented will help you avoid more than 90% of the performance issues typically seen in web-based OLTP applications. While we cover examples using J2EE and RDBMS, many of the principles remain the same for other technologies as well.
Presenter bio: Sundarraj started his career on the IBM 370 and was part of a team that worked on Stratus systems to set up automated trading for what is now the largest stock exchange in India. He has spent the last 14+ years developing web applications and is closely associated with performance tuning of web and other applications.

365 (APM): Invited: Capacity Planning for Java Application Performance

Room: Naylor
Invited: Capacity Planning for Java Application Performance
Kingsum Chow (Alibaba Inc, USA)
APM
Java applications form an important class of applications running in the data center and the cloud. After a quick introduction to Java virtual machines, the characteristics of Java workloads will be described, followed by general performance data collection and Java-specific performance counters. A quick case study of the CPU-memory trade-offs of Java programs will be demonstrated using analytics. This knowledge can aid capacity planning for Java applications.
Presenter bio: Kingsum Chow is currently a Chief Scientist at Alibaba Infrastructure Services. Before joining Alibaba in May 2016, he was a Principal Engineer and Chief Data Scientist in the System Technology and Optimization (STO) division of the Intel Software and Services Group (SSG). He joined Intel in 1996 after receiving his Ph.D. in Computer Science and Engineering from the University of Washington. Since then, he has been working on performance, modeling, and analysis of software applications. At Oracle OpenWorld in October 2015, the Intel and Oracle CEOs announced the joint Cloud lab, called project Apollo and led by Kingsum, in the opening keynote in front of tens of thousands of software developers. He has been issued more than 20 patents and has presented more than 80 technical papers. In his spare time, he volunteers to coach multiple robotics teams to bring the joy of learning Science, Technology, Engineering and Mathematics to the K-12 students in his community.
Kingsum Chow

366 (CONF): Sense and Respond? Why Not Predict and Prevent?

Jacob P. Ukelson
Room: Jones

Mainframe monitoring does a great job of collecting and displaying system performance data. In today's market, however, monitoring is not enough: companies need accurate alerts, especially to handle issues arising from new mainframe usage paradigms driven by real-time end-user transaction systems rather than just traditional backend transaction and batch processing. Also, many people with mainframe problem analysis skills are retiring, making it harder to find people who can analyze mainframe monitor data for triage and problem resolution. This requires augmenting monitors with a "brain" capable of accurate anomaly classification and alerting, and tying those alerts to deep-dive activation; sense-and-respond can then evolve into a more powerful mode of monitoring automation that we call predict-and-prevent. In this talk I'll discuss the capabilities needed by such a brain, which make it possible to decide which anomalies are important and warrant a response, which need to be watched more closely to gather more information, and which can be ignored.
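As a rough illustration of the triage logic such a "brain" might apply, the sketch below scores each new metric sample against a rolling baseline and sorts it into respond / watch / ignore buckets. The window size and thresholds are assumptions for demonstration, not the speaker's design.

from collections import deque
from statistics import mean, stdev

def classify(history: deque, value: float) -> str:
    """Triage one metric sample against a rolling baseline."""
    if len(history) >= 2 and stdev(history) > 0:
        z = (value - mean(history)) / stdev(history)
    else:
        z = 0.0
    history.append(value)
    if abs(z) > 4:
        return "respond"   # clearly anomalous: alert and trigger a deep dive
    if abs(z) > 2:
        return "watch"     # suspicious: gather more information
    return "ignore"

window = deque(maxlen=60)  # e.g. the last 60 one-minute samples
for sample in [5.1, 5.3, 4.9, 5.0, 25.0]:
    print(sample, classify(window, sample))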

367 (CONF): Vendor Tools

Room: Morrison

Tuesday, November 3, 18:00 - 18:15

CONF: Break

Tuesday, November 3, 18:15 - 19:15

CONF: BOFs / Exhibitor Presentation

Room: Peraux

CONF: BOFs / Exhibitor Presentation

Room: Draper

CONF: BOFs / Exhibitor Presentation

Room: Cavalier

CONF: BOFs / Exhibitor Presentation

Room: Naylor

Tuesday, November 3, 19:30 - 21:30

CONF: PARS

Location - TBD

Wednesday, November 4

Wednesday, November 4, 07:00 - 08:00

CONF: Breakfast

Peacock Alley

Wednesday, November 4, 08:00 - 09:00

401 (Featured Speaker): Plenary Session: Network Performance Analysis Using Open Source - The Evolution of WireShark

Anacacho Room
Network Performance Analysis Using Open Source - The Evolution of WireShark
Gerald Combs (Riverbed, USA)
Featured Speaker
Gerald Combs will speak about Wireshark: how he came to develop it, how it has evolved over the years, and how it can be used for performance diagnostics. If you work in networking, you know and use Wireshark. Wireshark is the most widely used packet sniffer in the world. It has won several industry awards over the years, including honors from eWeek, InfoWorld, and PC Magazine. It is also the top-rated packet sniffer in the Insecure.Org network security tools survey and was the SourceForge Project of the Month in August 2010. Gerald Combs continues to maintain the overall code of Wireshark and issue releases of new versions of the software. The product website lists over 800 additional contributing authors.
Presenter bio: Gerald is the original developer of Wireshark. He started the project in 1998 while working at an ISP. Since then many bright and talented people have contributed to the project, making it the world's premier network protocol analyzer. He currently works at Riverbed Technology as the Director of Open Source Projects, and is the lead developer of Wireshark. In the past he has worked as a consultant for firms in a variety of industries, ranging from telecommunications to pharmaceuticals to finance. In 2003 he was the recipient of a UMKC Alumni Achievement Award for his contributions to the field of computer science.
Gerald Combs

Wednesday, November 4, 09:00 - 09:15

CONF: Break

Wednesday, November 4, 09:15 - 10:15

411 (APM): Invited: Let's Put the "e" back in Testing

Room: Anacacho
Invited: Let's Put the "e" back in Testing
Dan Boutin (SOASTA, USA)
APM
Let's put the "E" back in testing! For years, testing has been treated as a second-class citizen in the pecking order of any endeavor. Whether the endeavor is e-Commerce related or any other application, testing has been treated as a commodity. The words "I am a tester" might typically earn a response along the lines of a sympathy card from Hallmark: sorry to hear that; keep plugging away and you'll make the leap up the hierarchy at some point. With today's complexity in all areas of software - not just e-Commerce but the cloud, and just about anywhere software acts as the front end to some user-initiated activity - testing has become a complex role that encompasses far more than an assembly-line mentality of checking a box and moving on to the next item. That's right: today's testing is not your father's Oldsmobile. There is an "e" in testing, but it's not the one you are thinking of. Today's "e" appears in multiple areas that testing now encompasses: Performance Engineering, User Experience, and Data Science. This session will address the transformation of the testing process and the maturation of the tester's role from just testing to today's performance testing architect. We will discuss the role of performance testing across the product lifecycle, including the exploration and analysis of web and mobile user performance data, and how a cloud warehousing model for available user experience data enables test engineers and data scientists to gain instant insight into user experience and performance data collected from performance tests, and to use that data in a continuous improvement model to drive not only the performance testing process but also architecture and application design and, in turn, functional test automation. As an example, the data gathered by disparate activities like load testing and monitoring is immensely valuable for establishing patterns and trends, especially when it can be integrated to present a complete picture of online business performance and thus help drive testing architecture. With a performance engineering process that includes this type of data analysis, immediate answers can be placed in front of Digital Development, Operations, and Business teams by pulling together real-user performance data with business and other sources like marketing analytics and hardware or network monitors, while eliminating the need for complex data mining and consolidation.
Presenter bio: Based in Gainesville, Florida, Mr. Boutin is Vice President of Digital Strategy at SOASTA. Prior to that, Mr. Boutin has held roles at IBM Rational and Mercury/HP Software, and has worked for IBM Global Services, specializing in the areas of performance management, testing and ITIL. In addition, Mr. Boutin led the corporate SEI initiative at Lockheed Martin and was one of the contributors to ISO 12207, the U.S. commercial software standard. Mr. Boutin has previously presented his work at CMGimpact in 2015, the Big Data TechCon in Boston, STAREast, Atlantic Test Workshop (ATW) in Corsica, France, and Durham, New Hampshire. Mr. Boutin has also presented in 2015 at South Florida Agile, StarEAST, MobileWeek 2015, Jenkins User Conference(East) and at the itSMF National Conference and multiple Gartner Conferences. You can find him at dboutin@soasta.com, @DanBoutinGNV or at a conference or meet-up near you.
Dan Boutin

412 (CMG-T): CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 1

Room: Peraux
CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 1
Ivan L Gelb (GIS Corporation, USA)
CMG-T
This CMG training session will provide clear and concise fundamental concepts, methodologies, and recommendations for the zEnterprise performance and capacity management practitioner. The topics are built on a set of recommended sample RMF and SMF data analysis reports, which illustrate how to build a report set that enables the analyst to be most effective. Also included are specific systems tuning parameter recommendations. Audience participation will be encouraged; bring your questions. You will return home from this CMG 2015 session with answers.
Presenter bio: Details provided at www.gelbis.com, or call 732-303-1333.
Ivan L Gelb

413 (ARCAP): Dealing with Fat Tailed Utilization Distributions and Long Term Correlation

Room: Draper
Dealing with Fat Tailed Utilization Distributions and Long Term Correlation
Joseph Temple, III (Low Country North Shore Consulting LLC, USA)
ARCAP
Last year I started looking into the nature of CPU utilization. This is an important subject because the business value of infrastructure is closely tied to how busy or idle the infrastructure is. I wrote a paper asserting that utilization distributions are "not normal" and often fat tailed. Recently I looked at utilization data from 27 servers, including the 6 examined in the paper. The results show that utilization distributions are indeed fat tailed. This implies that the "normal" statistics we use to understand server consolidations will be optimistic, at least when we design for typical "2-3 sigma" service levels expecting to cover 95 to 97.7% of the peaks. In many cases the appearance of results beyond "3 sigma" in measured data indicates an undefined underlying variance. When considering consolidations, long-term correlation of data due to the workday cycle is also common, and this affects performance modeling as well. This paper examines these effects and suggests methods to deal with the results.
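A quick numerical sketch of the core point, using a lognormal distribution as a stand-in for fat-tailed utilization data; the distribution and parameters are illustrative assumptions, not the paper's measurements.

import numpy as np

rng = np.random.default_rng(42)
# Hypothetical fat-tailed CPU utilization samples, capped at 100% busy.
util = np.minimum(rng.lognormal(mean=3.0, sigma=0.7, size=10_000), 100.0)

normal_2sigma = util.mean() + 2 * util.std()   # "covers 97.7%" if normal
coverage = (util <= normal_2sigma).mean()      # what it actually covers
print(f"mean + 2 sigma = {normal_2sigma:.1f}%, covers {coverage:.1%}")
print(f"true 97.7th percentile = {np.percentile(util, 97.7):.1f}%")

On data like this, the normal-theory threshold covers less than the promised 97.7% of samples, which is exactly the optimism the paper warns about.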
Presenter bio: Joe Temple worked for IBM for nearly four decades and retired at the end of 3Q2013 as an IBM Distinguished Engineer. After a 15-year career in hardware design and 15 years in pre- and post-sales client technical support, he spent his last decade leading IBM efforts to determine the relative capacity of servers and compare server architectures, developing and deploying sizing and "Fit for Purpose" platform selection methods. Joe continues to work in this area and started Low Country North Shore Consulting shortly after his retirement from IBM; it is so named because he lives both in the Low Country of South Carolina and on the North Shore of Long Island. He spends his spare time walking beaches with his wife Rae, making frequent attempts to play golf, and accumulating hours toward a USCG "Operator of Uninspected Passenger Vessels" license. He makes a batch of hard cider every year.

414 (CMG-T): CMG-T: Introduction to the Storage Performance Management Life Cycle - Part 1

Room: Cavalier
CMG-T: Introduction to the Storage Performance Management Life Cycle - Part 1
Brett Allison (IntelliMagic, USA)
CMG-T
An introduction to storage performance management and its effect on SLAs/SLOs, reduction in MTTR, financial rationale, and organizational challenges.
Presenter bio: Brett Allison is the Director of Technical Services with IntelliMagic focusing on storage performance management for distributed systems environments. Prior to joining IntelliMagic he spent 12 years in IBM Global services where he architected and implemented storage performance management and capacity planning tools and services. He has extensive performance analysis experience in SAN fabric, operating systems such as AIX, Windows, Solaris, and Linux and application environments including J2EE. He co-authored 'DS8000 Performance Monitoring and Tuning' and has spoken numerous times at conferences including CMG and the IBM Storage Symposium.
Brett Allison

415 (C&M): Invited: Perfkit - Benchmarking the Cloud

Room: Naylor
Invited: Perfkit - Benchmarking the Cloud
Eric Hankland (Google, USA)
C&M
The Google Cloud Performance team is responsible for the competitive analysis of Google Cloud products. This talk will cover the problems the team faces benchmarking Google Cloud Platform, some of the solutions we adopted, and two of our tools, PerfKit Benchmarker and PerfKit Explorer, both recently open-sourced.

416 (APM): Invited: Architecture and Design for Performance of a Large European Bank Payment System

Best Paper - CMG India
Room: Jones
Invited: Architecture and Design for Performance of a Large European Bank Payment System
Harikumar Ramasastry and Nityan Gulati (CMG India & Tata Consultancy Services, India)
APM
A large software system is typically characterized by a large volume of transactions, considerable infrastructure, and a high number of concurrent users. It usually also involves integration with a large number of upstream and downstream interfacing systems with varying processing requirements and constraints. These parameters on their own may not pose a challenge when they are static, but things get tricky when the inputs keep changing and continuously evolving. In such conditions, how do we keep system performance and resilience under control? This paper explains the key design aspects that need to be considered across the various architectural layers to ensure smooth post-production performance.
Presenter bio: Harikumar has been with Tata Consultancy Services for the past 15 years. He holds an MS in Software Systems from BITS Pilani, an MS in Information Management from SUNY Buffalo, and an eMBA from Amrita University. He has pursued a technical track and is currently in the role of an Enterprise Architect. He has worked throughout in the banking domain, gaining significant experience in core banking, payments, cash management, and reconciliation. He has taken up several consulting assignments for customers evaluating system design and architecture, and specializes in conducting system performance reviews.

417 (CONF): Vendor Tools: Truesight Capacity Optimization

Renato Bonomini
Room: Morrison

Wednesday, November 4, 10:15 - 10:30

CONF: Break

Wednesday, November 4, 10:30 - 11:30

421 (ORG): How to Gain Support for Your IT Performance Initiatives from Your Finance Partner

Room: Anacacho
How to Gain Support for Your IT Performance Initiatives from Your Finance Partner
Randy McCoy (DataKinetics, Canada)
ORG
As an IT professional, managing the challenges of your most important IT assets is imperative for the ongoing success of the company, its customers, and its shareholders. Whether it is reducing I/O, providing CPU savings, or reducing MSU consumption during your billing peak or R4HA, all of these will ultimately reduce costs and drastically reduce the need for upgrades. It is for this reason that all technological approaches are researched, explored, and vetted to ensure problems are solved or, better still, avoided altogether. You may even have found the right product from the right vendor to ensure success. For many IT professionals, however, the struggle then moves to the actual acquisition of these solutions, when the finance department and/or senior management are not willing to approve the expenditure. There is a way to improve this dynamic and ensure that your finance partner and the associated executives commit to and support your ongoing IT initiatives. In this seminar, you will learn: how to present an IT business case to a finance partner to increase the likelihood of getting project funding approved; and practical frameworks for presenting business cases, defining key finance terms, and creating the appropriate illustrations.
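To make the finance vocabulary concrete, the sketch below computes two figures a finance partner will almost always ask for, net present value and payback period. All amounts and rates are hypothetical.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the upfront (negative) cost."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None

# Hypothetical tuning project: $200K cost, $90K/year in deferred upgrades.
flows = [-200_000, 90_000, 90_000, 90_000, 90_000]
print(f"NPV at an 8% discount rate: ${npv(0.08, flows):,.0f}")
print(f"Payback in year: {payback_year(flows)}")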
Presenter bio: Randy McCoy joined Ottawa-based DataKinetics as CFO in June 2011. He is a Chartered Professional Accountant (CPA, CA) with over twenty years of experience in financial reporting and management for software, manufacturing, and professional services businesses. Prior to joining DataKinetics, Randy served as Finance Director at Kinaxis and as Sr. Finance Manager at Autodesk.
Randy McCoy

422 (CMG-T): CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 2

Room: Peraux
CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 2
Ivan L Gelb (GIS Corporation, USA)
CMG-T
This CMG training session will provide clear and concise fundamental concepts, methodologies, and recommendations for the zEnterprise performance and capacity management practitioner. The topics are built on a set of recommended sample RMF and SMF data analysis reports, which illustrate how to build a report set that enables the analyst to be most effective. Also included are specific systems tuning parameter recommendations. Audience participation will be encouraged; bring your questions. You will return home from this CMG 2015 session with answers.
Presenter bio: Details provided at www.gelbis.com, or call 732-303-1333.
Ivan L Gelb

423 (APM): Demystifying Mobile App and Browser Performance Testing

Room: Draper
Demystifying Mobile App and Browser Performance Testing
Mohit Verma (Tufts Health Plan, USA)
APM
With more and more mobile apps accessing enterprise systems, and with the huge increase in in-app retail purchases, are you confident in your company's mobile app performance right now? According to analyst research, 50 percent of those interviewed are not.* I will walk through mobile performance testing and analysis approaches applicable to browsers and native apps. Using case study scenarios, we will demonstrate specific tools and techniques to enhance your performance testing and diagnosis arsenal. This will help testers and software performance engineers learn how to: test mobile apps and mobile browsers for performance problems; analyze and diagnose mobile performance issues quickly; apply industry-standard and open source tools to achieve the best mobile performance; and employ WAN emulation tools to simulate network limitations.
Presenter bio: Performance Engineering Architect
Mohit Verma

424 (CMG-T): CMG-T: Enterprise Storage System Architecture Overview - Part 2

Room: Cavalier
CMG-T: Enterprise Storage System Architecture Overview - Part 2
Brett Allison (IntelliMagic, USA)
CMG-T
Introduction to Enterprise Storage system architecture. This session will cover storage architectures including physical components such as SSDs and disk drives, RAID schemes, automated tiering and current trends in the Enterprise architecture space.
Presenter bio: Brett Allison is the Director of Technical Services with IntelliMagic focusing on storage performance management for distributed systems environments. Prior to joining IntelliMagic he spent 12 years in IBM Global services where he architected and implemented storage performance management and capacity planning tools and services. He has extensive performance analysis experience in SAN fabric, operating systems such as AIX, Windows, Solaris, and Linux and application environments including J2EE. He co-authored 'DS8000 Performance Monitoring and Tuning' and has spoken numerous times at conferences including CMG and the IBM Storage Symposium.
Brett Allison

425 (C&M): Invited: Performance Analysis of Big Data Analytics on Lustre and HDFS File Systems

Room: Naylor
Invited: Performance Analysis of Big Data Analytics on Lustre and HDFS File Systems
Rekha Singhal (TCS, India); Chetan Phalak (Tata Consultancy Services, India)
C&M
Big data technology is widely used for large-volume data analysis. Wide acceptance of the open source Hadoop platform encourages its use for real-time analytics as well, which requires high performance from the system. Moreover, many High Performance Computing (HPC) applications may use data analytics to improve execution time by reducing the number of simulation cycles. HDFS is the traditional file system used with Hadoop, while Lustre is one of the file systems popularly used in HPC systems. Can the same HPC setup be used for data analytics as well? This paper addresses that question by comparing the performance of Hive SQL and Map-Reduce jobs executed on the Lustre and HDFS file systems. The systems are evaluated for financial, telecom, and insurance applications on Intel HPDA clusters. The results presented in the paper show that application performance on Lustre is at least twice that on HDFS. The paper also discusses the impact of horizontal and vertical scaling of the cluster on the performance of applications deployed on the Lustre and HDFS file systems.
Presenter bio: Dr. Rekha Singhal has 20 years of research and teaching experience. She has worked with the CDAC and TRDDC research centers. Recently, one of CDAC's products, Revival 2000, developed under her guidance, received a NASSCOM technology award. She has numerous publications in national and international conferences and journals and has filed patents in India. She has taught BE, ME, MCA, and MBA students at prestigious institutes such as TISS and NITIE. Her research interests are query performance prediction, database system optimization, distributed database systems, storage area networks, TCP/IP networks, and health IT. She holds a Ph.D. and M.Tech. from IIT Delhi. Currently she is a Senior Scientist with TCS Innovation Labs, Mumbai.

426 (APM): How to Integrate Performance Tests in Each Sprint of an Agile Development Process

Late Breaking
Bill Nicholson, Neotys
Room: Jones

427 (CONF): Vendor Tools: Truesight Capacity Optimization

Renato Bonomini
Room: Morrison

Wednesday, November 4, 11:45 - 12:45

CONF: Lunch

Peacock Alley

Wednesday, November 4, 13:00 - 14:00

431 (APM): Invited: Why is this Web App Running Slowly?

Room: Anacacho
Invited: Why is this Web App Running Slowly?
Mark Friedman (Demand Technology Software, USA)
APM
This presentation focuses on the YSlow conceptual model of web application performance, named after the YSlow performance tool originally developed at Yahoo and associated with the work of Steve Souders, which has proved extremely influential. The session looks at how the YSlow scalability model influenced the development of other web application performance tooling, culminating in the W3C specification of a navigation and timing API that provides access from JavaScript to web application performance measurements. It then drills into the W3C navigation and timing APIs to demonstrate how to gather and utilize these performance measurements, or Real User Measurements (RUM), as they have become known. The navigation and timing API is a great help to anyone with a need to understand the end-to-end web application response time experience of actual, real-life web site customers. It also casts a critical eye on the YSlow model of web application performance and highlights some areas where the reality of web application performance can depart from expectations raised by the model. In addition, there are some areas where the YSlow model is proving just a little too simple for the burgeoning complexity of networked-enabled applications developed for the web, the cloud, or both. Using an example of a data-rich ASP.NET application that requires extensive processing at the web server and the back-end database to generate Response messages, the presentation will discuss what additional measurements may be required to solve performance and scalability issues that transcend the diagnostic capabilities of YSlow and similar tools.
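For readers unfamiliar with the API, the sketch below shows the kind of derived RUM metrics that can be computed server-side from W3C Navigation Timing attributes. The attribute names come from the W3C specification; the beacon dictionary format is an assumption for illustration.

def rum_metrics(t: dict) -> dict:
    """Derive RUM metrics from a Navigation Timing beacon (epoch ms)."""
    start = t["navigationStart"]
    page_load = t["loadEventEnd"] - start
    return {
        "ttfb_ms": t["responseStart"] - start,   # time to first byte
        "dom_ready_ms": t["domContentLoadedEventEnd"] - start,
        "page_load_ms": page_load,
        # Share of the experience spent before the first byte arrives,
        # i.e. the server-side portion the YSlow model largely ignores.
        "backend_share": (t["responseStart"] - start) / max(page_load, 1),
    }

beacon = {"navigationStart": 0, "responseStart": 420,
          "domContentLoadedEventEnd": 1800, "loadEventEnd": 2600}
print(rum_metrics(beacon))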
Mark Friedman

432 (CMG-T): CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 3

Room: Peraux
CMG-T: zEnterprise Performance and Capacity Management A-to-Z - Part 3
Ivan L Gelb (GIS Corporation, USA)
CMG-T
This CMG training session will provide clear and concise fundamental concepts, methodologies, and recommendations for the zEnterprise performance and capacity management practitioner. The topics are built on a set of recommended sample RMF and SMF data analysis reports, which illustrate how to build a report set that enables the analyst to be most effective. Also included are specific systems tuning parameter recommendations. Audience participation will be encouraged; bring your questions. You will return home from this CMG 2015 session with answers.
Presenter bio: Details provided at www.gelbis.com, or call 732-303-1333.
Ivan L Gelb

433 (C&M): Invited: Let's Turn Real User Data into a Science

Room: Draper
Invited: Let's Turn Real User Data into a Science
Dan Boutin (SOASTA, USA)
C&M
This session will delve deep into the technical trade-offs and the selection process involved in choosing the underlying architecture for a data warehouse that contains the real-user beacon data collected from web and mobile users by the billions, every second of every day. This beacon data can then be used for marketing campaigns, performance enhancements, and any other data analysis a business requires to increase bottom-line revenue. We will walk through the beacon collection architecture for the data collected and stored, and discuss this architecture against the largest and fastest-growing segment of big data analytics: customer experience enhancement. We will also discuss how some of today's advanced technologies, particularly in the open source arena, have opened the doors to building a data science platform, including the required infrastructure, the handling of the data pipeline, and the analysis and workflow. Specific examples will include a deep dive into why Julia rather than R, why Redshift rather than another data warehouse platform, and why we chose the architecture we chose. This session gives attendees a hands-on option to "play along" as the session unfolds: we will create and run function calls written in Julia against the data warehouse in Redshift. (NOTE: We will provide links to download any required software or data prior to the session.) As an example, the data gathered by disparate activities like performance testing and monitoring is immensely valuable for establishing patterns and trends, especially when it can be integrated to present a complete picture of online business performance. Coupling it with real user data gives you a clearer view of the data from all perspectives. Figuring out how to ask the right questions of the data and how to visualize the results takes time that data scientists should be using to generate actionable insights. This session will show attendees how we do this, and how they can too. With this type of data analysis architecture in place, immediate answers can be put in front of Digital Development, Operations, and Business teams by pulling together real-user performance data with business and other sources like marketing analytics and hardware or network monitors, while eliminating the need for complex data mining and consolidation.
Presenter bio: Based in Gainesville, Florida, Mr. Boutin is Vice President of Digital Strategy at SOASTA. Prior to that, Mr. Boutin has held roles at IBM Rational and Mercury/HP Software, and has worked for IBM Global Services, specializing in the areas of performance management, testing and ITIL. In addition, Mr. Boutin led the corporate SEI initiative at Lockheed Martin and was one of the contributors to ISO 12207, the U.S. commercial software standard. Mr. Boutin has previously presented his work at CMGimpact in 2015, the Big Data TechCon in Boston, STAREast, Atlantic Test Workshop (ATW) in Corsica, France, and Durham, New Hampshire. Mr. Boutin has also presented in 2015 at South Florida Agile, StarEAST, MobileWeek 2015, Jenkins User Conference(East) and at the itSMF National Conference and multiple Gartner Conferences. You can find him at dboutin@soasta.com, @DanBoutinGNV or at a conference or meet-up near you.
Dan Boutin

434 (CMG-T): CMG-T: Enterprise Disk and SAN Data Collection and Measurement - Part 3

Room: Cavalier
CMG-T: Enterprise Disk and SAN Data Collection and Measurement - Part 3
Brett Allison (IntelliMagic, USA)
CMG-T
Understand what we can measure. Describe data collection options. Understand key performance metrics and configuration data.
Presenter bio: Brett Allison is the Director of Technical Services with IntelliMagic focusing on storage performance management for distributed systems environments. Prior to joining IntelliMagic he spent 12 years in IBM Global services where he architected and implemented storage performance management and capacity planning tools and services. He has extensive performance analysis experience in SAN fabric, operating systems such as AIX, Windows, Solaris, and Linux and application environments including J2EE. He co-authored 'DS8000 Performance Monitoring and Tuning' and has spoken numerous times at conferences including CMG and the IBM Storage Symposium.
Brett Allison

435 (APM): Performance Evaluation of an Electronic Point of Sale System for a Retail Client - CANCELED

CANCELED
Room: Naylor
Performance Evaluation of an Electronic Point of Sale System for a Retail Client
Veera Chava and Anjan Ruj (TCS, United Kingdom)
APM
One of the market-leading retailers in the UK decided to upgrade its EPOS system with a customized, market-leading product - in store, online, and in customers' hands - to engage more closely with end consumers, react quickly to customer needs, wants, and changes in consumer behavior, design market strategies that proactively address them, and keep its competitive advantage over rivals. This white paper presents an approach and best practices for performance testing and tuning of a large retail implementation of the EPOS system, covering the store architecture, centre architecture, and reporting warehouse. After an overview of the EPOS system's software architecture and implementation details, we describe the performance testing and tuning strategies that were used successfully in this implementation.
Veera Chava

436 (Michelson): Invited: Incremental Risk Charge Calculation: A Case Study of Performance Optimization on Many/Multi Core Platforms

Room: Jones
Invited: Incremental Risk Charge Calculation: A Case Study of Performance Optimization on Many/Multi Core Platforms
Amit Kalele (Tata Consultancy Services, India); Manoj Nambiar (Tata Consultancy Services & Institute of Electrical and Electronics Engineers, India); Mahesh Barve (Tata Consultancy Services, India)
APM
Incremental Risk Charge calculation is a crucial part of credit risk estimation. This data-intensive calculation requires huge compute resources, and a large grid of workstations was deployed at a large European bank to carry out these computations. In this paper we show that, with the availability of many-core coprocessors like GPUs and MIC and parallel computing paradigms, an order-of-magnitude speedup can be achieved for the same workload with just a single server. This proof of concept demonstrates that, with the help of performance analysis and tuning, coprocessors can deliver high performance with low energy consumption, making them a "must-have" for financial institutions.

437 (CONF): Vendor Tools: Performance Tuning for DB2

Kai Stroh
Room: Morrison

Improving DB2 performance includes buffer pool tuning and ongoing examination to alert on potential problems. This presentation will cover new metrics and use the UBS buffer pool tuning and alerting tool to show how to find and fix performance issues.

Wednesday, November 4, 14:00 - 14:15

CONF: Break

Wednesday, November 4, 14:15 - 15:15

441 (ARCAP): Lessons from Capacity Planning a Java Enterprise Application: How to Keep Capacity Predictions on Target and Cut CPU Usage by 5x

2015 - Best Paper
Room: Anacacho
Lessons from Capacity Planning a Java Enterprise Application: How to Keep Capacity Predictions on Target and Cut CPU Usage by 5x
Stefano Doni (Moviri S.p.A., Italy)
ARCAP
Java applications are ubiquitous in enterprise and online settings. Surprisingly, how to effectively predict the capacity and manage the efficiency of such environments is seldom discussed. We describe actionable methodologies and key metrics that enabled us to (a) highlight the hidden bottleneck of many Java applications, (b) devise a business-oriented capacity model that represents Java memory bottlenecks, (c) detect unsound memory usage patterns and anticipate memory leaks, (d) uncover a well-kept secret - that the garbage collector, not your business, drives the CPU usage of your servers - and how to fix it, and (e) show how the garbage collector might be your first scalability bottleneck.
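One way to check point (d) against your own servers is to measure the fraction of wall-clock time spent in stop-the-world collections. The sketch below parses classic Java 8-style GC log pause lines; the log format is an assumption, and unified JVM logging (Java 9+) would need a different pattern.

import re

# Matches pause durations such as "..., 0.0250000 secs]" in
# -XX:+PrintGCDetails output (classic, pre-unified log format).
PAUSE = re.compile(r"(\d+\.\d+) secs\]")

def gc_time_fraction(log_lines, wall_clock_secs):
    pauses = [float(m.group(1))
              for line in log_lines
              for m in PAUSE.finditer(line)]
    return sum(pauses) / wall_clock_secs

log = ["[GC (Allocation Failure) 512M->128M(1024M), 0.0250000 secs]",
       "[Full GC (Ergonomics) 900M->300M(1024M), 1.2000000 secs]"]
print(f"GC consumed {gc_time_fraction(log, 60.0):.1%} of the last minute")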
Presenter bio: Stefano is a Moviri Senior Consultant based in Italy and leads the Capacity Management Special Interest Group, focusing on innovative capacity and performance topics. Since 2006, Stefano has worked on capacity management projects for national and international enterprises. His interests include applied performance modeling and benchmarking of applications, server platforms, and entire datacenters, and more lately cloud and container performance and capacity. A CMG speaker since 2014, Stefano received the CMG Best Paper Award in 2015. Stefano holds an MS in Computer Engineering from Politecnico di Milano (Italy); he is a happy father of two and enjoys flying model airplanes.
Stefano Doni

442 (zOS): PANEL: zEnterprise Performance and Capacity Management Q and A

Room: Peraux

Panelists include: Kathy Walsh, IBM; Norman Hollander, IBM. Moderator: Ivan Gelb, GIS Corp.

PANEL: zEnterprise Performance and Capacity Management Q and A
Ivan L Gelb (GIS Corporation, USA)
zOS
Computer-industry-recognized subject matter experts will join Kathy Walsh from IBM's Washington Systems Center to answer questions submitted via email prior to the scheduled time and by onsite attendees. The daily bulletin will provide the names of the panelists. This session has consistently been one of the highest rated at past CMG conferences.
Presenter bio: Details provided at www.gelbis.com, or call 732-303-1333.
Ivan L Gelb

443 (C&M): Invited: Performance Considerations for Public Cloud

Room: Draper
Invited: Performance Considerations for Public Cloud
Jason Read (Gartner Inc. & CloudHarmony Inc., USA)
C&M
Measuring public cloud performance presents unique challenges. Basic principles of benchmarking such as repeatability and reproducibility are problematic due to the often nondeterministic and time-sensitive properties of cloud services. This session provides an example-driven discussion of public cloud performance considerations, including variability, burstability, and throttling. Actual benchmark metrics for common services - Amazon EC2/EBS, Microsoft Azure, and Google Compute Engine - will be presented to illustrate the discussion topics.
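A minimal sketch of how run-to-run variability can be quantified before quoting any single number; the throughput figures are invented for illustration.

from statistics import mean, stdev

def coefficient_of_variation(samples):
    """Relative run-to-run spread; a high value hints at bursting or throttling."""
    return stdev(samples) / mean(samples)

# Hypothetical disk throughput (MB/s) from eight repeats of one benchmark
# on the same instance type at different times of day.
runs = [182, 175, 190, 88, 179, 185, 91, 177]
print(f"mean = {mean(runs):.0f} MB/s, CV = {coefficient_of_variation(runs):.0%}")
# A CV this large argues for reporting percentiles rather than a single mean.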
Jason Read

444 (CMG-T): CMG-T: Windows System Performance Measurement and Analysis - Part 1

Room: Cavalier
CMG-T: Windows System Performance Measurement and Analysis - Part 1
Jeffry Schwartz (SQLRx Division & Integrated Services, Inc., USA)
CMG-T
Windows System Performance Measurement and Analysis, Jeffry A. Schwartz, Integrated Services, Inc. This basic tutorial in the CMG-T foundation curriculum introduces the metrics that are available from the Windows operating system and most prevalent applications. The sheer number of available metrics makes it difficult for anyone, even those analysts who are well versed in performance analysis measurements on other platforms, to discern the most important performance counters. This course will provide the necessary information to enable the Windows performance analyst to ascertain what the most important metrics are, how to interpret them, and the most appropriate collection mechanisms. It will also explain measurements either that are not easily obtainable or must be calculated. Discussion will include performance data collection and analysis issues using commonly available tools. Note: All topics have been updated to include recent production versions of Windows, and the architecture portion has been trimmed to approximately 25 percent of the overall course to provide more time for review of actual analysis examples.

Part 1 covers:
• Windows performance data: how it is maintained within the OS, collected, and secured
• Overview of Windows processor, process, and thread architecture, including hyperthreaded and multi-core processors
• Overview of Windows memory management architecture and behavior
• Overview of Windows I/O subsystem architecture and behavior

Part 2 covers:
• Monitoring Windows performance
• Windows processor, memory, and I/O subsystem performance analysis

Part 3 covers:
• Additional Windows I/O subsystem analysis
• Calculating important missing disk and response time metrics using Windows performance counters
• Obtaining important WMI operating system and system configuration information
• Obtaining important Event Tracing for Windows (ETW) operating system, process, file, interrupt, DPC, and other information
• Using kernrate and Krview to obtain and analyze causes of excessive Windows kernel usage (excluding interrupts and DPCs)
• Summary of other tools, including Xperf and Windows 7 Relog, that can expedite Windows performance analyses
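As a small taste of the "calculating missing metrics" material in Part 3: disk utilization is not reported directly but can be derived from two real PhysicalDisk counters via the utilization law. The sketch below is an illustration, not course material.

def disk_utilization(transfers_per_sec, avg_sec_per_transfer):
    """Utilization law: U = X * S (throughput times service time)."""
    return transfers_per_sec * avg_sec_per_transfer

# Interval values from the PhysicalDisk counters
# "Disk Transfers/sec" and "Avg. Disk sec/Transfer":
print(f"{disk_utilization(450, 0.0012):.0%} busy")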
Presenter bio: Jeff has been conducting performance and capacity planning studies for over 35 years. He began his career in the mainframe world conducting performance and modeling studies, and has focused since 2000 entirely on high-end Windows and SQL Server performance. He has developed and given numerous courses and presentations on performance and capacity planning including several recent webinars on Windows and SQL Server performance. He developed the CMG-T Windows Performance Course in 2006, and continues to update and teach it annually at CMG. He has presented at CMG almost twenty times and continues to participate in CMG as a referee, mentor, and ERB member. During his career, he has received several consulting awards, the most recent of which was the 2008 Microsoft Data Management Solutions Partner of the Year for his 325,000+ line Windows and SQL Server performance analysis software framework for consultants.
Jeffry Schwartz

445 (ORG): You Test Where? Performance Testing in DR and Prod!

Room: Naylor
You Test Where? Performance Testing in DR and Prod!
Kyle Parrish (Southern CMG, USA)
ORG
How do you stress test a brokerage system in production if you can't risk orders processing, trades executing, or violating regulatory obligations? The answer used to be, "we don't." But the flash crash and other market anomalies exposed the risks inherent in not testing production. Hear what we learned as we built a way to do what had been written off as "too big to test." This presentation deals with the challenges and opportunities inherent in using production-class disaster recovery systems and actual production systems to run cloud-based testing, in order to simulate real user activity at larger-than-peak volumes: over 100K users, 300K accounts, and thousands of transactions per second at market open, fully executed and monitored to see where the system will fail.

446 (APM): Maximum User Concurrency and Blocking Probability for Managed and Open Access Applications

Room: Jones
Maximum User Concurrency and Blocking Probability for Managed and Open Access Applications
Xiaosong Lou and Steven Xu (YP Holdings, USA)
APM
The maximum number of concurrent users is one critical factor in application design and capacity planning. It has a direct impact on key system parameters such as connection pool sizes, maximum thread counts, and buffer sizes. Depending on the type of workload, there are a number of theories and models that study user concurrency. Unfortunately, developers are not always aware of the appropriate approaches. Instead of systematically determining these parameters, we often see teams resort to empirical or even trial-and-error methods. This guesswork leads to either excessive resource allocation or unnecessarily high blocking probability for users in production systems. In this paper, we discuss two typical use cases, managed-access and open-access applications, describe the theories and models applicable to each, and present the results of our experiments.
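For the managed-access case, the classic Erlang B model is one of the standard tools for relating pool size to blocking probability. The sketch below illustrates that model, not necessarily the authors' exact formulation, and the workload numbers are hypothetical.

def erlang_b(servers, offered_load):
    """Blocking probability for `servers` slots at `offered_load` Erlangs,
    via the numerically stable Erlang B recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Sizing a connection pool: 1000 req/s at 80 ms mean holding time
# offers 1000 * 0.08 = 80 Erlangs of load.
for pool_size in (80, 90, 100):
    print(f"{pool_size} connections -> {erlang_b(pool_size, 80.0):.2%} blocked")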

447 (CONF): Vendor Tools: Teemstone OnTune: Performance Analysis and Tuning at the Intersection of System and Applications - A Shared Tool

Paul Tanwanteng
Room: Morrison

In this hands-on session, we will take participants through a simulated case of analysis and performance tuning at the intersection of system administration and application performance with the onTune tool, including: (a) analysis from the system administrator's viewpoint, and (b) analysis from the application administrator's viewpoint.

Wednesday, November 4, 15:15 - 15:45

CONF: Break

Wednesday, November 4, 15:45 - 16:45

451 (ARCAP): HTTP/2: Implications for Web Application Performance

Room: Anacacho
HTTP/2: Implications for Web Application Performance
Mark Friedman (Demand Technology Software, USA)
ARCAP
HTTP/2 is the first major revision of the HTTP protocol since HTTP/1.1 was finalized in 1999. HTTP/2 is designed to speed up page load times, mainly through (1) multiplexing, where the web client can make multiple requests in parallel over a single TCP connection, and (2) server push, where the web server can send content the client is expected to need in the near future, based on the current GET request. This paper describes what the HTTP/2 changes will and won't accomplish, based on what we know today about SPDY performance, and tries to give specific recommendations to help you get ready to take advantage of the new capabilities. HTTP/2 is a major change in the web's application processing model, requiring adjustments at both the web server and the web client to support multiplexing and server push. In addition, many web sites currently built to exploit the capabilities of HTTP/1.x may require re-architecting to take better advantage of HTTP/2. Performance tools for web application developers will also need to play catch-up to provide visibility into how multiplexing and server push are operating, in order to assist with these re-architecture projects.
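For experimentation, negotiating HTTP/2 from a script is straightforward; the sketch below uses the Python httpx library (installed with its http2 extra) against hypothetical asset URLs. The synchronous loop already reuses a single connection; truly concurrent multiplexed streams would use the async client instead.

import httpx

# Requires: pip install httpx[http2]
# The asset URLs are hypothetical; substitute your own site.
with httpx.Client(http2=True) as client:
    for i in range(6):
        r = client.get(f"https://example.com/asset{i}.js")
        print(r.http_version, r.url, r.status_code)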
Mark Friedman

452 (C&M): PANEL: Mobile Performance Testing and Management

Room: Peraux

Panelists include: Dan Boutin, SOASTA; Bill Nicholson, Neotys; Alexander Podelko, Oracle; Silvia Siqueira, HP. Moderator: Mohit Verma, Tufts Health Plan

PANEL: Mobile Performance Testing and Management
Mohit Verma (Tufts Health Plan, USA)
CONF
This panel will focus on mobile performance engineering and management, covering existing tools, best practices, and techniques to address the challenges in this area. It will be an interactive discussion from the beginning, built around audience questions and real use-case stories.
Presenter bio: Performance Engineering Architect
Mohit Verma

453 (zOS): Invited: z/OS Central Storage Management

Room: Draper
Invited: z/OS Central Storage Management
Kathy Walsh (IBM, USA)
zOS
How do you evaluate your z/OS Central Storage environment? What are the key metrics to look at and what do they mean? What do you need to know about defining and using 1MB and 2GB pages in z/OS? This session shows you how to assess a z/OS system to ensure there is sufficient processor storage to meet performance and availability concerns.
Presenter bio: Kathy is an IBM Distinguished Engineer and an internationally recognized technical leader on the System z platform, covering both hardware and software, with a focus on z/OS performance and System z capacity planning. Kathy provides technical and project leadership within IBM and to customers on the use, deployment, and benefits of System z technology. She has extensive experience consulting with IBM clients and IBM account teams on the performance and management of their z/OS environments, often in support of customer-critical situations. Areas of focus include support for System z processors, LPAR configuration and management, Parallel Sysplex performance, z/OS Workload Manager, RMF, batch window issues, processor sizing, and support for software pricing. Currently, Kathy is the team leader for the Performance and Capacity Planning team at the IBM Washington Systems Center within Advanced Technical Support.
Kathy Walsh

454 (CMG-T): CMG-T: Windows System Performance Measurement and Analysis - Part 2

Room: Cavalier
CMG-T: Windows System Performance Measurement and Analysis - Part 2
Jeffry Schwartz (SQLRx Division & Integrated Services, Inc., USA)
CMG-T
Windows System Performance Measurement and Analysis, Jeffry A. Schwartz, Integrated Services, Inc. This basic tutorial in the CMG-T foundation curriculum introduces the metrics that are available from the Windows operating system and most prevalent applications. The sheer number of available metrics makes it difficult for anyone, even those analysts who are well versed in performance analysis measurements on other platforms, to discern the most important performance counters. This course will provide the necessary information to enable the Windows performance analyst to ascertain what the most important metrics are, how to interpret them, and the most appropriate collection mechanisms. It will also explain measurements either that are not easily obtainable or must be calculated. Discussion will include performance data collection and analysis issues using commonly available tools. Note: All topics have been updated to include recent production versions of Windows, and the architecture portion has been trimmed to approximately 25 percent of the overall course to provide more time for review of actual analysis examples.

Part 1 covers:
• Windows performance data: how it is maintained within the OS, collected, and secured
• Overview of Windows processor, process, and thread architecture, including hyperthreaded and multi-core processors
• Overview of Windows memory management architecture and behavior
• Overview of Windows I/O subsystem architecture and behavior

Part 2 covers:
• Monitoring Windows performance
• Windows processor, memory, and I/O subsystem performance analysis

Part 3 covers:
• Additional Windows I/O subsystem analysis
• Calculating important missing disk and response time metrics using Windows performance counters
• Obtaining important WMI operating system and system configuration information
• Obtaining important Event Tracing for Windows (ETW) operating system, process, file, interrupt, DPC, and other information
• Using kernrate and Krview to obtain and analyze causes of excessive Windows kernel usage (excluding interrupts and DPCs)
• Summary of other tools, including Xperf and Windows 7 Relog, that can expedite Windows performance analyses
Presenter bio: Jeff has been conducting performance and capacity planning studies for over 35 years. He began his career in the mainframe world conducting performance and modeling studies, and has focused since 2000 entirely on high-end Windows and SQL Server performance. He has developed and given numerous courses and presentations on performance and capacity planning including several recent webinars on Windows and SQL Server performance. He developed the CMG-T Windows Performance Course in 2006, and continues to update and teach it annually at CMG. He has presented at CMG almost twenty times and continues to participate in CMG as a referee, mentor, and ERB member. During his career, he has received several consulting awards, the most recent of which was the 2008 Microsoft Data Management Solutions Partner of the Year for his 325,000+ line Windows and SQL Server performance analysis software framework for consultants.
Jeffry Schwartz

455 (APM): TBD

Room: Naylor

456 (NETCAP): Monitoring and Remediation of Cloud Services Based on 4R Approach

Room: Jones
Monitoring and Remediation of Cloud Services Based on 4R Approach
Yuri Ardulov (RingCentral Inc., USA); Serg Mescheryakov (St. Petersburg Polytechnic University, Russia); Dmitry Shchemelinin (RingCentral Inc, USA)
NETCAP
IT industry experience with capacity and performance monitoring of distributed cloud environments shows that up to 90% of the troubleshooting cases escalated to production support engineering can be recovered simply, by restarting the service, rebooting the OS, or redirecting the workload to a standby unit. This paper describes a new architectural solution called a remediation center, which automatically initiates a predefined remediation process on a remote host when an anomaly is detected, using the 4R approach.
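A skeletal sketch of a remediation-center dispatch table wired to the three recovery actions the abstract names. The anomaly classes are hypothetical and the actions are stubs standing in for real SSH or load-balancer calls.

def restart_service(host):   print(f"restarting service on {host}")
def reboot_os(host):         print(f"rebooting {host}")
def redirect_workload(host): print(f"draining {host} to a standby unit")

PLAYBOOK = {
    "hung_process":  restart_service,
    "kernel_lockup": reboot_os,
    "degraded_node": redirect_workload,
}

def remediate(anomaly, host):
    action = PLAYBOOK.get(anomaly)
    if action is None:
        raise ValueError(f"no automated remediation for {anomaly!r}")
    action(host)

remediate("hung_process", "app-42.example.net")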

457 (CONF): Vendor Tools

Room: Morrison

Wednesday, November 4, 16:45 - 17:00

CONF: Break

Wednesday, November 4, 17:00 - 18:00

461 (C&M): Invited: Hadoop Super Scaling

Room: Anacacho
Invited: Hadoop Super Scaling
Neil J Gunther (Performance Dynamics Company, USA)
C&M
The Hadoop framework is designed to facilitate parallel processing of massive amounts of unstructured data. Originally intended to be the basis of Yahoo's search engine, it is now open sourced at Apache. Since Hadoop now has a broad range of corporate users, a number of companies offer commercial implementations and support. However, certain aspects of Hadoop performance, especially scalability, are not well understood. One such anomaly is the claimed "flat scalability" benefit for developing Hadoop applications; another is that it appears possible to achieve better-than-parallel (superlinear) speedup. In this presentation I will explain the source of these anomalies by presenting a consistent method for analyzing Hadoop application scalability.
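Although the abstract does not name the method, Gunther's own Universal Scalability Law (USL) is the natural lens; in USL fits, superlinear scaling shows up as a negative contention coefficient. The sketch below evaluates the model with purely illustrative parameters.

def usl_capacity(n, sigma, kappa):
    """USL relative capacity at n nodes.
    sigma = contention, kappa = coherency delay; sigma < 0 models
    the superlinear ("faster than parallel") regime."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

for n in (1, 4, 16, 64):
    print(f"{n:3d} nodes: "
          f"linear={usl_capacity(n, 0.0, 0.0):6.1f}  "
          f"typical={usl_capacity(n, 0.05, 0.0001):6.1f}  "
          f"superlinear={usl_capacity(n, -0.05, 0.001):6.1f}")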
Presenter bio: Neil J. Gunther, M.Sc., Ph.D., is an internationally known IT researcher, teacher and author who founded Performance Dynamics Company (www.perfdynamics.com) in 1994. He is well-known to CMG audiences for his conference presentations since 1993. Dr. Gunther was awarded CMG Best Technical Paper in 1996 and received the A.A. Michelson Award in 2008. Prior to founding Performance Dynamics, Dr. Gunther held teaching, research and management positions at San Jose State University, JPL/NASA, Xerox PARC and Pyramid-Siemens Technology. His "Guerrilla" training classes have been presented world wide at corporations and academia such as Boeing, FedEx, Morgan Stanley, Nokia, Stanford, Vodafone and Walmart. He is a member of AMS, APS and a senior member of ACM and IEEE.
Neil J Gunther

462 (ORG): Social Media and Analytics: What Performance and Capacity Engineers Need to Know

Room: Peraux
Social Media and Analytics: What Performance and Capacity Engineers Need to Know
Anoush Najarian (MathWorks, USA)
ORG
To Be Determined
Presenter bio: Anoush Najarian is a Software Engineering Manager at MathWorks in Natick, MA where she leads the MATLAB Performance Team. She holds Master's degrees in Computer Science and Mathematics from the University of Illinois at Urbana-Champaign, and an undergraduate degree in CS and applied math from the Yerevan State University in her native Armenia. Anoush has been serving on the CMG Board of Directors since 2014, and has served as the Social Media chair for CMG2015.
Anoush Najarian

463 (zOS): Invited: WSC Experiences with the z13 and SMT: What the Numbers Mean

Room: Draper
Invited: WSC Experiences with the z13 and SMT: What the Numbers Mean
Kathy Walsh (IBM, USA)
zOS
The IBM Washington Systems Center has run several tests of zIIP enabled workloads to learn the effects of running in both single threaded and multi-threaded environments on the new IBM z13 processor. This presentation starts with an overview of the zIIP SMT implementation introduced with the z13 and also reviews the results of the WSC benchmarks. The session will discuss what was learned about the new RMF SMT metrics when running in an SMT environment.
Presenter bio: Kathy is an IBM Distinguished Engineer and an internationally recognized technical leader on the System z platform, covering both hardware and software, with a focus on z/OS performance and System z capacity planning. Kathy provides technical and project leadership within IBM and to customers on the use, deployment, and benefits of System z technology. She has extensive experience consulting with IBM clients and IBM account teams on the performance and management of their z/OS environments, often in support of customer-critical situations. Areas of focus include support for System z processors, LPAR configuration and management, Parallel Sysplex performance, z/OS Workload Manager, RMF, batch window issues, processor sizing, and support for software pricing. Currently, Kathy is the team leader for the Performance and Capacity Planning team at the IBM Washington Systems Center within Advanced Technical Support.
Kathy Walsh

464 (CMG-T): CMG-T: Windows System Performance Measurement and Analysis - Part 3

Room: Cavalier
CMG-T: Windows System Performance Measurement and Analysis - Part 3
Jeffry Schwartz (SQLRx Division & Integrated Services, Inc., USA)
CMG-T
Windows System Performance Measurement and Analysis, Jeffry A. Schwartz, Integrated Services, Inc. This basic tutorial in the CMG-T foundation curriculum introduces the metrics that are available from the Windows operating system and most prevalent applications. The sheer number of available metrics makes it difficult for anyone, even those analysts who are well versed in performance analysis measurements on other platforms, to discern the most important performance counters. This course will provide the necessary information to enable the Windows performance analyst to ascertain what the most important metrics are, how to interpret them, and the most appropriate collection mechanisms. It will also explain measurements either that are not easily obtainable or must be calculated. Discussion will include performance data collection and analysis issues using commonly available tools. Note: All topics have been updated to include recent production versions of Windows, and the architecture portion has been trimmed to approximately 25 percent of the overall course to provide more time for review of actual analysis examples.

Part 1 covers:
• Windows performance data: how it is maintained within the OS, collected, and secured
• Overview of Windows processor, process, and thread architecture, including hyperthreaded and multi-core processors
• Overview of Windows memory management architecture and behavior
• Overview of Windows I/O subsystem architecture and behavior

Part 2 covers:
• Monitoring Windows performance
• Windows processor, memory, and I/O subsystem performance analysis

Part 3 covers:
• Additional Windows I/O subsystem analysis
• Calculating important missing disk and response time metrics using Windows performance counters
• Obtaining important WMI operating system and system configuration information
• Obtaining important Event Tracing for Windows (ETW) operating system, process, file, interrupt, DPC, and other information
• Using kernrate and Krview to obtain and analyze causes of excessive Windows kernel usage (excluding interrupts and DPCs)
• Summary of other tools, including Xperf and Windows 7 Relog, that can expedite Windows performance analyses
Presenter bio: Jeff has been conducting performance and capacity planning studies for over 35 years. He began his career in the mainframe world conducting performance and modeling studies, and has focused since 2000 entirely on high-end Windows and SQL Server performance. He has developed and given numerous courses and presentations on performance and capacity planning including several recent webinars on Windows and SQL Server performance. He developed the CMG-T Windows Performance Course in 2006, and continues to update and teach it annually at CMG. He has presented at CMG almost twenty times and continues to participate in CMG as a referee, mentor, and ERB member. During his career, he has received several consulting awards, the most recent of which was the 2008 Microsoft Data Management Solutions Partner of the Year for his 325,000+ line Windows and SQL Server performance analysis software framework for consultants.
Jeffry Schwartz

465 (STOR): Invited: Performance Measurement of Deduplication Applied to Block Storage

Room: Naylor
Invited: Performance Measurement of Deduplication Applied to Block Storage
Bruce McNutt (IBM, USA); Steve Daniel (Nimble Storage, USA)
STOR
The past few years have seen growing demand for data deduplication in storage products intended for mainstream commercial use. Many of the most recent all-flash array products feature dedup in some manner. It seems clear that deduplication will rapidly gain in importance, and so will the ability to assess the performance of dedup technology. This paper shows how it is possible to undertake practical performance measurements of storage solutions that incorporate deduplication and examines in detail the method for dedup measurements that has been incorporated into the most recent version of the SPC-1 benchmark.
Presenter bio: Bruce McNutt, CMG's 2009 Michelson Award recipient, is a senior scientist/engineer and master inventor working in the Systems and Technology Group of International Business Machines Corporation. He has specialized in disk storage performance since joining IBM in 1983 and has published one of the key books on that subject. Among the many papers which he has presented to the annual conference of the Computer Measurement Group, as an active participant for more than 25 years, are three that received CMG "best paper" awards.
Bruce McNutt
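As a loose illustration of the measurement problem (the SPC-1 dedup mechanism itself is defined by the benchmark specification), the sketch below generates a synthetic stream of 4 KiB blocks with a controllable duplicate fraction - the kind of input needed to drive a dedup-capable array at a known data-reduction ratio. The file name and the 3:1 target are illustrative assumptions:

    # Hedged sketch: emit n_blocks blocks of which roughly 1/dedup_ratio are
    # unique, so the achieved dedup ratio approximates the requested one.
    import os, random

    BLOCK = 4096

    def block_stream(n_blocks, dedup_ratio, seed=42):
        rng = random.Random(seed)
        n_unique = max(1, int(n_blocks / dedup_ratio))
        uniques = [os.urandom(BLOCK) for _ in range(n_unique)]
        for _ in range(n_blocks):
            yield rng.choice(uniques)

    # Write 10,000 blocks at a nominal 3:1 deduplication ratio.
    with open("testfile.bin", "wb") as f:
        for b in block_stream(10_000, 3.0):
            f.write(b)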

466 (NETCAP): TBD

Room: Jones

467 (CONF): Vendor Tools: SOASTA

Dan Boutin
Room: Morrison

Wednesday, November 4, 18:00 - 18:15

CONF: Break

Wednesday, November 4, 18:15 - 19:15

CONF: Exhibitor Presentation: HPE: High Volume Performance Testing in a Mobile World

Mustali Barma, Hewlett Packard Enterprise
Room: Peraux

Mobile users account for a large percentage of users accessing web applications today. When users come to the system hosting the application through mobile devices, the system behaves differently. Our discussion will focus on the impact that mobile users have on application performance and how we can test and tune our systems in advance.

CONF: Exhibitor Presentation: Metron-Athene: athene® ES/1: Delivering More with Less

Room: Draper

CONF: BOFs / Exhibitor Presentation

Room: Cavalier

CONF: CMG 2016 Kick-Off Meeting

Room: Naylor

Wednesday, November 4, 19:30 - 21:30

CONF: PARS

Location - TBD

Thursday, November 5

Thursday, November 5, 07:00 - 08:00

CONF: Breakfast

Peacock Alley

Thursday, November 5, 08:00 - 09:00

501 (Featured Speaker): Plenary Session: Five Trends in Computing Leading to Multi-Cloud Applications and Their Management

Room: Anacacho

Five Trends in Computing Leading to Multi-Cloud Applications and Their Management
Raj Jain (Washington University in St. Louis, USA)
Featured Speaker
Five trends that are driving the computation currently are: Cloud computing, Software defined networking, Computation in the edge, Function virtualization, and Smart Everything. While cloud computing is now very common, these trends will lead soon to multi-cloud computation. In this talk we talk about each of these trends and then about our project of managing such globally distributed multi-cloud applications.
Presenter bio: Raj Jain is a Fellow of IEEE, a Fellow of ACM, a winner of the ACM SIGCOMM Test of Time award, and ranks among the top 50 in Citeseer's list of Most Cited Authors in Computer Science. Dr. Jain is currently a Professor of Computer Science and Engineering at Washington University in St. Louis. Previously, he was one of the co-founders of Nayna Networks, Inc., a next-generation telecommunications systems company in San Jose, CA. He was a Senior Consulting Engineer at Digital Equipment Corporation in Littleton, Mass., and then a professor of Computer and Information Sciences at Ohio State University in Columbus, Ohio. He is the author of "Art of Computer Systems Performance Analysis," which won the 1991 "Best Advanced How-to Book, Systems" award from the Computer Press Association. His fourth book, "High-Performance TCP/IP: Concepts, Issues, and Solutions," was published by Prentice Hall in November 2003.
Raj Jain

Thursday, November 5, 09:00 - 09:15

CONF: Break

Thursday, November 5, 09:15 - 10:15

511 (ORG): Invited: The Languages of Capacity Planning: Business, Infrastructure & Facilities

Room: Anacacho
Invited: The Languages of Capacity Planning: Business, Infrastructure & Facilities
Amy Spellmann (451 Research, USA); Richard Gimarc (Independent, USA)
ORG
Capacity planning for today's Digital Infrastructure demands collaboration between the business, infrastructure and facilities. In general, these silos have a dysfunctional relationship. What's needed is a more synergistic relationship that supports capacity planning in today's cloud and converged IT service delivery environments. One of the major challenges is language; each silo has its own terminology and vocabulary. The business uses words such as customers, revenue, cost and reputation. Application planners talk about performance, response time and transaction volumes. IT's focus is on utilization, availability and hardware procurement. And finally, facilities' view is in terms of power, space and cooling. How does an organization translate and blend these different views, metrics and languages into a coherent description of IT service delivery? This paper describes a communication plan and common language that promotes capacity planning across the Digital Infrastructure.
Presenter bio: Amy Spellmann is a Global Practice Principal with 451 Research Advisory, where she specializes in cloud and digital infrastructure capacity planning and application performance. Amy's expertise includes modeling IT energy footprint projections and strategies for managing IT capacity to reduce space, power and cooling consumption in the datacenter. Amy's extensive experience in capacity and performance planning guides Fortune 500 companies in optimizing and managing complex IT infrastructures including private/public/hybrid cloud. One of her specialties is coordinating with IT and business partners to ensure cost-effective service delivery through the entire digital infrastructure stack.
Amy Spellmann

512 (CMG-T): CMG-T: Network Performance Engineering - Part 1

Room: Peraux
CMG-T: Network Performance Engineering - Part 1
Sundarraj Kaushik (Tata Consultancy Services, India); Manoj Nambiar (Tata Consultancy Services & Institute of Electrical and Electronics Engineers, India)
CMG-T
Although one may not be conscious of it, networks are an integral part of most enterprise systems and applications. It naturally follows that network performance is crucial to overall system performance. Knowing how networks affect applications helps in optimizing application performance and avoiding application blackouts or brownouts. Participants can expect to learn the following from this session: • Networks, TCP/IP, their characteristics and how they impact performance • How applications can be designed and tuned for best network performance • Tools for network performance analysis • Diagnosing application performance using network sniffers • Network devices available today and their effect on performance • Network monitoring • Network sizing. A basic understanding of networks and their layered architecture is expected of participants.
Presenter bio: Sundarraj started his career on the IBM 370. He was part of a team that worked on Stratus systems to set up automated trading for what is now the largest stock exchange in India. For the last 14+ years he has been developing web applications and has been closely associated with performance tuning of web and other applications.
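One concrete instance of the session's theme that networks bound application performance is the bandwidth-delay product: a single TCP connection cannot move data faster than its window size divided by the round-trip time. A minimal sketch, not from the tutorial materials, with illustrative numbers:

    # Upper bound on one TCP flow's throughput, ignoring loss and slow start.
    def max_tcp_throughput_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

    # A default 64 KiB window over a 40 ms WAN path caps out near 13 Mbps,
    # no matter how fast the underlying link is.
    print(max_tcp_throughput_mbps(64 * 1024, 40))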

513 (C&M): CMP (Cloud Management Platform) - Performance Workload Analysis

Room: Draper
CMP (Cloud Management Platform) - Performance Workload Analysis
Vaibhav Gosavi (BMC Software, India)
C&M
Cloud management platforms (CMPs) are integrated products that provide for the management of public, private and hybrid cloud environments. CMPs incorporate self-service interfaces, provision system images, enable metering and billing, and provide some degree of workload optimization through established policies. (1) With a large number of players in the CMP market, large enterprises looking for a solution to manage their private, public, or hybrid cloud environments struggle to settle on the appropriate CMP. Because implementing a CMP is a strategic and often 'sticky' decision, evaluating CMPs for performance and scalability is critical for the business. This paper illustrates workload characteristics and common use cases for understanding performance and scale benchmarks for CMPs. It covers in detail the key use cases, the workload matrix, and the factors impacting the performance and scale of a CMP. The paper describes performance workload analysis "of" the CMP, not of applications running "from" the cloud.

514 (CMG-T): CMG-T: Capacity and Performance for Newbs and Nerds - Part 1

Room: Cavalier
CMG-T: Capacity and Performance for Newbs and Nerds - Part 1
Neil J Gunther (Performance Dynamics Company, USA)
CMG-T
In this tutorial I will bust some entrenched myths and develop basic capacity and performance concepts from the ground up. In fact, any performance metric can be boiled down to one of just three metrics. Even if you already know metrics like throughput and utilization, that's not the most important thing: it's the relationship *between* those metrics that's vital! For example, there are at least three different definitions of utilization. Can you state them? This level of understanding can make a big difference when it comes to solving performance problems or presenting capacity planning results. Other myths that will get busted along the way include: o There is no response-time knee. o Throughput is not the same as execution rate. o Throughput and latency are not independent metrics. o There is no parallel computing. o All performance measurements are wrong by definition. No particular knowledge about capacity and performance management is assumed.
Presenter bio: Neil J. Gunther, M.Sc., Ph.D., is an internationally known IT researcher, teacher and author who founded Performance Dynamics Company (www.perfdynamics.com) in 1994. He is well-known to CMG audiences for his conference presentations since 1993. Dr. Gunther was awarded CMG Best Technical Paper in 1996 and received the A.A. Michelson Award in 2008. Prior to founding Performance Dynamics, Dr. Gunther held teaching, research and management positions at San Jose State University, JPL/NASA, Xerox PARC and Pyramid-Siemens Technology. His "Guerrilla" training classes have been presented world wide at corporations and academia such as Boeing, FedEx, Morgan Stanley, Nokia, Stanford, Vodafone and Walmart. He is a member of AMS, APS and a senior member of ACM and IEEE.
Neil J Gunther
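For a concrete taste of the "relationship between metrics" point, here is a minimal sketch (our illustration, not Dr. Gunther's material) combining the Utilization Law, U = X x S, with the M/M/1 response-time formula R = S / (1 - U). Note that response time grows smoothly as utilization rises; there is no hard "knee":

    # Utilization Law: utilization = throughput x service time.
    def utilization(throughput_per_sec, service_time_sec):
        return throughput_per_sec * service_time_sec

    # M/M/1 mean response time; the queue saturates as U -> 1.
    def mm1_response_time(service_time_sec, u):
        assert 0 <= u < 1
        return service_time_sec / (1.0 - u)

    S = 0.010  # assumed 10 ms service time
    for X in (20, 50, 80, 95):
        U = utilization(X, S)
        print(f"X={X}/s  U={U:.0%}  R={mm1_response_time(S, U) * 1000:.1f} ms")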

515 (zOS): Invited: Memory Management in the TB Age

Room: Naylor
Invited: Memory Management in the TB Age
Scott Chapman (Enterprise Performance Strategies, USA)
zOS
In January IBM announced that you would be able to order a z13 with 10 TB of memory. What could one do with all that memory? We are poised to see z/OS systems with dramatically larger memory sizes in the near future. This will have a significant impact on the performance of our applications running on z/OS. Come to this session with Scott Chapman to learn why performance today is all about memory. Scott will give you an overview of the hardware memory hierarchy, the potential performance implications of large memory sizes, and the performance metrics relevant to managing z/OS memory. The use of Flash Express will be discussed, and the latest z13 hardware will be contrasted with previous mainframe models.
Presenter bio: Scott Chapman has over two decades of experience in the IBM mainframe environment. Much of this experience has focused on performance, from both the application and systems perspective. He's written COBOL application code and Assembler system exit code. His mainframe responsibilities have spanned application development, performance tuning, capacity planning, software cost management, system tuning, sysplex configuration, WLM configuration, and most other facets of keeping a mainframe environment running effectively. Scott has spoken extensively at user group meetings and was honored to receive the Computer Measurement Group's 2009 Mullen award, and also co-authored CMG's 2012 best paper. Scott is a founding steering committee member of the Central Ohio Mainframe User's Group.
Scott Chapman

516 (ITSM): Establishing Better Governance for IT Service Management through ISO20K Accreditation and ITIL Capacity Management

Room: Jones
Establishing Better Governance for IT Service Management through ISO20K Accreditation and ITIL Capacity Management
Jamie Baker (Metron Technology Ltd, United Kingdom (Great Britain))
ITSM
The ISO20K accreditation was developed to provide a formal framework in which businesses and organizations could align and integrate their processes with the established and widely adopted ITIL and COBIT frameworks. This in turn allows IT Service Management (ITSM) teams to improve their Service Improvement Plans (SIPs) to deliver efficient, cost-effective and assured IT services, while managing and policing services effectively. This presentation provides a brief introduction to ISO20K, touching on the overall benefits and focusing on ITIL Capacity Management. It looks at proactive process alignment, establishing interfaces and information flows, reporting essential KPIs, and the creation of capacity plans, and at how these provide ITSM with the essential governance required to ensure successful, effective and assured IT services that the business and organization can rely on.
Presenter bio: Jamie has been an IT professional since 1998 after graduating from the University of Kent with a BSc in Management Science. After initially working on UNIX systems as an Operator and then a Systems Administrator, he joined Metron in 2002 and has been working on Capacity Management projects and supporting Metron's Athene tool ever since. Jamie is a Principal Consultant with extensive IT experience, specifically within Capacity Management of virtualized and distributed systems.
Jamie Baker

517 (CONF): Vendor Tools

Room: Morrison

Thursday, November 5, 10:15 - 10:30

CONF: Break

Thursday, November 5, 10:30 - 11:30

521 (Featured Speaker): Performance Monitoring vs. Capacity Management: Does it Matter?

Room: Anacacho
Performance Monitoring vs. Capacity Management: Does it Matter?
Norman Hollander (IBM, USA)
Featured Speaker
In today's complex data center environments, monitoring performance is still an important function. Capacity management is also an important function. But do these concepts really matter individually? Or do we have to look at them differently? Traditional metrics we've used to define capacity may actually be performance metrics. Multiple performance metrics may be needed to help determine capacity challenges. New concepts and methodologies may need to be adopted to provide effective management of both. In addition, we now have analytics solutions to help us evaluate all of these metrics. Understanding the relationships among all these metrics is important to help us quickly determine how well systems are performing, and when it may be time to upgrade. This session will look at some of these older and newer concepts.
Presenter bio: Norman D. Hollander is a Technical Sales Specialist for Mobile & Cloud Solutions on z Systems, z/Operating Systems, and z/Performance, and a zOSEM Technical Specialist. Norman has been in Systems Programming and Information Technology Management for more than 4 decades, specializing in Operating Systems, Hardware Planning, Workload Manager (WLM), and Performance & Tuning. He is also an internationally recognized expert on most things related to z Systems. Norman is currently with IBM Corporation working with Mobile & Cloud Solutions, including Systems Management, System Performance, Virtualization, and Analytics solutions. Previously, Norman was with CA and held the positions of Senior Principal Engineering Architect and Director of Product Management for Mainframe 2.0, for Mainframe Software Manager (CA MSM), and for CA SYSVIEW®. Prior to CA, he worked for Candle Corporation as a Senior Consultant in Performance and Monitoring Solutions; again, as a System/390, z/OS, and WLM specialist. Norman has spent many years in enterprise-wide performance and capacity planning in diverse industries, including a large utility company, large financial corporations, an airline, a large telecommunications corporation, and a university. In addition to the performance and capacity planning responsibilities, Norman has been a Systems Programmer Manager, and a Project Manager for high-profile Data Center Projects. Norman has been a member, a panelist, a referee, an editor and a mentor for the Computer Measurement Group (CMG); SHARE EWCP Project Manager and speaker for SHARE; an invited speaker for Guide/Share Europe - UK and Guide/Share Europe - Nordic; a presenter at IBM’s z/EXPO; a presenter at CAWorld; a contributor to Cheryl Watson’s Tuning Newsletter; and one of the Technical Editors for Steve Samson’s “MVS Performance Management.” Norman’s solid background in Operating Systems and Performance & Tuning has allowed him to lend his expertise to many customers all over the globe.
Norman Hollander

522 (CMG-T): CMG-T: Network Performance Engineering - Part 2

Room: Peraux
CMG-T: Network Performance Engineering - Part 2
Sundarraj Kaushik (Tata Consultancy Services, India); Manoj Nambiar (Tata Consultancy Services & Institute of Electrical and Electronics Engineers, India)
CMG-T
Although one may not be conscious of it, networks are an integral part of most enterprise systems and applications. It naturally follows that network performance is crucial to overall system performance. Knowing how networks affect applications helps in optimizing application performance and avoiding application blackouts or brownouts. Participants can expect to learn the following from this session: • Networks, TCP/IP, their characteristics and how they impact performance • How applications can be designed and tuned for best network performance • Tools for network performance analysis • Diagnosing application performance using network sniffers • Network devices available today and their effect on performance • Network monitoring • Network sizing. A basic understanding of networks and their layered architecture is expected of participants.
Presenter bio: Sundarraj started his career on the IBM 370. He was part of a team that worked on Stratus systems to set up automated trading for what is now the largest stock exchange in India. For the last 14+ years he has been developing web applications and has been closely associated with performance tuning of web and other applications.

523 (APM): Spinning Your Wheels: CPU Time vs Instructions

Room: Draper
Spinning Your Wheels: CPU Time vs Instructions
John Baker (MVS Solutions); Claire Cates (SAS, USA)
APM
Imagine sitting in traffic in a taxi. The engine - and the meter - continues to run, but you're not going anywhere. This is very much what it is like in your computer when your CPU has a cache miss. Today's high-frequency processors have the capacity to process instructions at an incredible rate. However, the ability of a CPU to cycle 5 billion times per second does not necessarily mean 5 billion instructions are completed. Prior to executing each instruction, the instruction itself as well as the necessary data must be fetched from memory. Virtually all CPUs utilize a multi-stage cache infrastructure to optimize this process. Small but fast Level 1 caches reside very close to the CPU core, with larger L2, L3 and so on further out. If the required data and instructions are resident in the local L1 cache, the latency to fetch will be very small, perhaps 1 or 2 clock cycles. If, on the other hand, the data is out in a large main memory, the wait time could be hundreds of clock cycles. During this wait time, the CPU is spinning away. Join Claire and John as they explore this issue. We will explain how memory management works as well as discuss practical solutions to improve performance. This session is appropriate for anyone responsible for performance or capacity on any computing platform. Come listen, learn, and share your experiences. It's time to get out of traffic.
Presenter bio: Over 25 years in the IT industry as both a customer and consultant. As a customer, John designed, implemented and maintained many critical projects such as WLM Goal Mode and GDPS/Data Mirroring. He has extensive experience with many performance analysis tools and techniques at the hardware, OS, and application levels. As a consultant, John has assisted many of the world's largest datacenters with their z/OS performance challenges and held Subject Area Chair positions with CMG for both Storage and Capacity Planning for several years. John has hosted many sessions at CMG, SHARE, etc. as well as several regional user groups. In 2017, John joined forces with internationally-recognized performance specialist Peter Enrico.
John Baker
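The taxi-meter effect is easy to reproduce. The sketch below (our illustration, not the presenters' material) sums the same number of array elements twice: once contiguously, so each 64-byte cache line supplies eight float64 values, and once with a stride of 16, so every element costs a fresh cache-line fetch. Most of the runtime gap is cache misses:

    import time
    import numpy as np

    x = np.ones(32 * 1024 * 1024)   # 256 MB of float64, contiguous in memory
    n = len(x) // 16

    t0 = time.perf_counter()
    x[:n].sum()                     # 2M contiguous elements: cache friendly
    t1 = time.perf_counter()
    x[::16].sum()                   # same 2M elements, one per cache line
    t2 = time.perf_counter()

    print(f"contiguous {t1 - t0:.4f}s   strided {t2 - t1:.4f}s")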

524 (CMG-T): CMG-T: Capacity and Performance for Newbs and Nerds - Part 2

Room: Cavalier
CMG-T: Capacity and Performance for Newbs and Nerds - Part 2
Neil J Gunther (Performance Dynamics Company, USA)
CMG-T
In this tutorial I will bust some entrenched myths and develop basic capacity and performance concepts from the ground up. In fact, any performance metric can be boiled down to one of just three metrics. Even if you already know metrics like throughput and utilization, that's not the most important thing: it's the relationship *between* those metrics that's vital! For example, there are at least three different definitions of utilization. Can you state them? This level of understanding can make a big difference when it comes to solving performance problems or presenting capacity planning results. Other myths that will get busted along the way include: o There is no response-time knee. o Throughput is not the same as execution rate. o Throughput and latency are not independent metrics. o There is no parallel computing. o All performance measurements are wrong by definition. No particular knowledge about capacity and performance management is assumed.
Presenter bio: Neil J. Gunther, M.Sc., Ph.D., is an internationally known IT researcher, teacher and author who founded Performance Dynamics Company (www.perfdynamics.com) in 1994. He is well-known to CMG audiences for his conference presentations since 1993. Dr. Gunther was awarded CMG Best Technical Paper in 1996 and received the A.A. Michelson Award in 2008. Prior to founding Performance Dynamics, Dr. Gunther held teaching, research and management positions at San Jose State University, JPL/NASA, Xerox PARC and Pyramid-Siemens Technology. His "Guerrilla" training classes have been presented world wide at corporations and academia such as Boeing, FedEx, Morgan Stanley, Nokia, Stanford, Vodafone and Walmart. He is a member of AMS, APS and a senior member of ACM and IEEE.
Neil J Gunther

525 (NETCAP): Percentile-Based Approach to Forecasting Workload Growth

Room: Naylor
Percentile-Based Approach to Forecasting Workload Growth
NETCAP
When forecasting resource workloads (traffic, CPU load, memory usage, etc.), we often extrapolate from the upper percentiles of data distributions. This works very well when the resource is far enough from its saturation point. However, when the resource utilization gets closer to the workload-carrying capacity of the resource, upper percentiles level off (the phenomenon is colloquially known as flat-topping or clipping), leading to underpredictions of future workload and potentially to undersized resources. This paper explains the phenomenon and proposes a new approach that can be used for making useful forecasts of workload when historical data for the forecast are collected from a resource approaching saturation.
Presenter bio: Alexander Gilgur is a Data Scientist and Systems Analyst with over 20 years of experience in a wide variety of domains - Control Systems, Chemical Industry, Aviation, Semiconductor manufacturing, Information Technologies, and Networking - and a solid track record of implementing his innovations in production. He has authored and co-authored a number of know-hows, publications, and patents. Alex enjoys applying the beauty of Math and Statistics to solving capacity and performance problems and is interested in non-stationary processes, which make the core of IT problems today. Presently, he is a Network Data Scientist at Facebook and an occasional faculty member at UC Berkeley's MIDS program. He is also a father, a husband, a skier, a soccer player, a sport psychologist, a licensed soccer coach, a licensed professional engineer (PE), and a music aficionado. Alex's technical blog is at http://alexonsimanddata.blogspot.com.
Alexander Gilgur
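A minimal sketch of the clipping phenomenon the paper addresses, with invented numbers: once observed demand saturates at the resource's capacity, a linear fit to an upper percentile underpredicts true growth.

    import numpy as np

    rng = np.random.default_rng(0)
    weeks = np.arange(52)
    # True demand grows 1.5 units/week; observations are clipped at capacity 100.
    demand = 40 + 1.5 * weeks + rng.normal(0, 5, size=(1000, 52))
    observed = np.minimum(demand, 100)

    p95 = np.percentile(observed, 95, axis=0)
    slope, intercept = np.polyfit(weeks, p95, 1)
    print(f"fitted weekly growth {slope:.2f} vs true 1.50")  # biased low once clipped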

526 (CONF): TBD

Room: Jones

527 (CONF): Vendor Tools: IntelliMagic Vision Overview with the Founder

Gilbert Houtekamer & Brent Phillips
Room: Morrison

Join Dr. Gilbert Houtekamer, founder and chief architect at IntelliMagic, as well as Brett Allison, Director of Technical Services at IntelliMagic, for an interactive training session with IntelliMagic Vision. IntelliMagic Vision enables you to outsmart unavailability by interpreting the performance and configuration data using expert domain knowledge. It supports the z/OS mainframe infrastructure (Processor to Disk, as well as Virtual Tape and Replication) and also SAN storage environments. Come and discuss how to use the product to see and understand infrastructure risk in order to address the underlying root cause(s) before service disruptions occur.

Thursday, November 5, 11:45 - 12:45

CONF: Lunch

Peacock Alley

Thursday, November 5, 13:00 - 14:00

531 (APM): Invited: Performance Engineering for the Internet of Things and Other Real-Time Embedded Systems

Room: Anacacho
Invited: Performance Engineering for the Internet of Things and Other Real-Time Embedded Systems
Connie Smith (Performance Engineering Services, USA)
APM
When real-time embedded systems fail: patients die, warships shoot down passenger jets, airplanes crash, cars stop on freeways or accelerate uncontrollably - all documented problems. Preventing these problems saves lives and money, enables faster delivery, improves architectures, and improves performance. Performance engineering enables developers to predict performance and to identify and correct problems before products are built that contain serious potential failures. This talk examines current technical and performance issues in real-time embedded systems, including software and systems developed for the Internet of Things (IoT). It reviews the elements of Performance Engineering (PE), identifies the relevant PE technology, and shows how it can be adapted to the IoT. In particular, we take a close look at both performance prediction models of embedded systems and performance antipatterns that identify common performance problems and how to correct them. A case study illustrates how it is possible to predict performance problems and correct them before the system is built, thus avoiding costly mistakes.

532 (Featured Speaker): PANEL: Advancing in Performance Careers

Room: Peraux

Panelists include: Rex Black, RBCS; Neil Gunther, Performance Dynamics; Elisabeth Stahl, IBM; Raj Jain, Washington Univ. St. Louis. Moderator: Alexander Podelko, Oracle

PANEL: Advancing in Performance Careers
Alexander Podelko (Oracle, USA)
Featured Speaker
A panel of diverse experts, authors, and educators will discuss how to become a performance professional and how to advance in performance-related careers: what we have and what we need in university education, vendor and independent training, books and publications, conferences, a body of knowledge, and certifications. What is the core performance engineering knowledge every professional should have?

533 (ARCAP): Identifying the Causes of High Latencies in Storage Traces Using Workload Decomposition and Feature Selection

Room: Draper
Identifying the Causes of High Latencies in Storage Traces Using Workload Decomposition and Feature Selection
Daniel S Myers (Rollins College, USA)
ARCAP
We describe a new methodology for identifying the causes of high latencies in complex commercial storage traces. Our approach decomposes the complete trace into a set of sub-workloads, then applies a statistical feature selection algorithm to identify key multi-dimensional workload characteristics associated with periods of high latency. To demonstrate this methodology, we identify performance insights in a pair of commercial Microsoft storage traces, then investigate improved system designs using simulation.
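A hedged sketch of the methodology's overall shape (column names, the 60-second window, and the 20 ms threshold are illustrative assumptions, and scikit-learn's mutual-information scorer stands in for whatever feature-selection algorithm the paper uses):

    import pandas as pd
    from sklearn.feature_selection import mutual_info_classif

    # Assumed per-I/O columns: timestamp (s), latency_ms, size_kb, is_write (0/1).
    trace = pd.read_csv("storage_trace.csv")
    trace["window"] = (trace["timestamp"] // 60).astype(int)  # 60 s sub-workloads

    feats = trace.groupby("window").agg(
        iops=("latency_ms", "size"),            # "size" = count of I/Os per window
        write_frac=("is_write", "mean"),
        mean_size_kb=("size_kb", "mean"),
        p99_latency=("latency_ms", lambda s: s.quantile(0.99)),
    )
    y = (feats.pop("p99_latency") > 20).astype(int)  # label high-latency windows
    scores = mutual_info_classif(feats, y, random_state=0)
    print(dict(zip(feats.columns, scores.round(3))))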

534 (CMG-T): CMG-T: Java - Part 1

Room: Cavalier
CMG-T: Java - Part 1
Peter Johnson (Unisys Corporation, USA)
CMG-T
Attendees at these CMG-T sessions will benefit from my many years of doing Java performance tuning, including in our lab where we ran industry standard benchmarks, in our application excellence centers where I have helped tune our customers' real-world applications, and in day-to-day operations of applications we have running in our data center. I will cover the following topics: 1) Analyzing the garbage collector (GC) a) Understanding how the GC works b) Gathering GC data (there are three different formats for this data) c) Graphing the GC data and understanding what the graphs mean d) Examining some real-world examples, what the GC graphs showed, what tuning was done, what the results were (hint: significantly improved performance) 2) Survey of various GC algorithms a) Description of the default collector b) Description of the parallel collector c) Description of the mostly-concurrent mark/sweep collector i) Using a parallel collector in conjunction with the mark/sweep collector d) Description of the new garbage-first collector available in JDK 7 e) Description of the pros and cons of each collector 3) Miscellaneous JVM tuning and tips for solving real-world Java issues.
Presenter bio: Peter Johnson has 35 years of IT industry experience, mostly in application development. For many years he was the chief architect of a team that analyzed performance of Java applications on large-scale Intel-based machines and evaluated various open source software for enterprise readiness. He is currently a lead architect for Unisys Choreographer, a cloud-based solution. Peter is a frequent speaker at the annual CMG conference, speaking mainly on Java performance. He is also a co-author of JBoss in Action.
Peter Johnson
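As a companion to topics 1b and 1c (gathering and graphing GC data), here is a small sketch, not from the course, that summarizes pause times from the classic HotSpot -verbose:gc format, where lines look like "[GC 325407K->83000K(776768K), 0.2300771 secs]". Log formats vary by JVM version and flags, so the regular expression is an assumption:

    import re

    pat = re.compile(r"\[(Full GC|GC).*?(\d+)K->(\d+)K\((\d+)K\), ([\d.]+) secs\]")

    pauses, reclaimed_kb = [], []
    with open("gc.log") as f:
        for line in f:
            m = pat.search(line)
            if m:
                pauses.append(float(m.group(5)))
                reclaimed_kb.append(int(m.group(2)) - int(m.group(3)))

    if pauses:
        print(f"{len(pauses)} collections, total pause {sum(pauses):.1f}s, "
              f"max pause {max(pauses) * 1000:.0f}ms, "
              f"avg reclaimed {sum(reclaimed_kb) / len(reclaimed_kb) / 1024:.0f} MB")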

535 (C&M): Invited: Network Visibility in the Cloud

Room: Naylor
Invited: Network Visibility in the Cloud
C&M
Public cloud computing, and in particular Infrastructure as a Service, promises to revolutionize the IT world and is being adopted by more and more organizations, big and small. The many benefits of cloud computing include elasticity, programmability and cost reduction. These benefits, however, come at the cost of reduced visibility. Network visibility, in particular, tends to be challenging in public cloud environments, where traditional observation points like span ports and network taps are not an option anymore. Moreover, the typical elasticity of public cloud infrastructures means that the network topology and dependency relationships are changing all the time, making observation even more challenging. This talk will describe a novel approach to network monitoring in the cloud: sysdig, an open source visibility tool. Sysdig uses high frequency operating system-level instrumentation to offer unique insight into cloud network activity. The talk will describe the core technology, and showcase the real time topology visualizations that are possible through it. It will also show how this technique can be applied to achieve optimal network monitoring for container-based infrastructures.

536 (zOS): Invited: Beyond RMF/SMF Reporting - Using Availability Intelligence to Protect Availability at the Production Site

Room: Jones
Invited: Beyond RMF/SMF Reporting - Using Availability Intelligence to Protect Availability at the Production Site
Brent Phillips (IntelliMagic, USA)
zOS
"Availability" is often associated with technologies and processes used to recover from service disruptions once availability to production applications has been lost. But protecting availability before it is lost is even better than recovering it after the production environment is affected. Availability Intelligence is able to produce visibility of upcoming threats that can disrupt service availability so that they can be avoided before impacting production users. It is created through the automatic application of built-in infrastructure knowledge to the measurement data. This session will discuss what Availability Intelligence is, why most every mainframe site will be using this by 2020, and the process to create it.
Brent Phillips

537 (CONF): Vendor Tools: Nimble Storage InfoSight: Defining a New Storage Experience

TBD
Room: Morrison

Learn how InfoSight, Nimble Storage's cloud-based analytics software, redefines IT infrastructure management. The analytics proactively monitor application availability, performance, and data protection while simplifying infrastructure planning and management.

Thursday, November 5, 14:00 - 14:15

CONF: Break

Thursday, November 5, 14:15 - 15:15

541 (APM): Developing Our Intuition About Queuing Network Models

Room: Anacacho
Developing Our Intuition About Queuing Network Models
Richard Gimarc (Independent, USA); Nghia Nguyen (CA Technologies, USA)
APM
Intuition plays an important role in the analysis of computer system performance. When studying a system, we collect and analyze data in order to gain an understanding of the responsiveness and dynamics of the system. If the results we produce do not make sense to us, intuitively, then we have learned little. In this paper we apply intuition and theory to examine the performance of three "equivalent" queuing network models. We begin with a qualitative investigation of each model: what can we learn by simply looking at each model's workflow and dynamics? We then apply theory to either confirm or refute our intuition. Our intent is to take a step towards developing our modeling intuition instead of blindly accepting model results.
Presenter bio: Richard Gimarc is an independent consultant who specializes in capacity planning, performance engineering and performance analysis. Over the years, Richard has developed techniques and applied his expertise in a wide range of complex, diverse and challenging environments. Richard has authored 30+ papers on topics such as application scalability, green capacity planning and cloud performance. Richard is a regular speaker at both CMG international and regional conferences.
Richard Gimarc
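In the spirit of the paper, though not its actual models, the sketch below compares three classic "equivalent" configurations with total service capacity 2*mu under Poisson arrivals: one double-speed M/M/1 server, one shared queue feeding two servers (M/M/2), and two separate M/M/1 queues splitting the traffic. Intuition may say they behave alike; the formulas disagree, especially under heavy load:

    # Mean response times for three configurations with total capacity 2*mu.
    def r_fast_single(lam, mu):    # one M/M/1 server running at rate 2*mu
        return 1.0 / (2 * mu - lam)

    def r_mm2(lam, mu):            # one shared queue, two servers of rate mu
        rho = lam / (2 * mu)
        return (1.0 / mu) / (1 - rho ** 2)

    def r_two_separate(lam, mu):   # two M/M/1 queues, traffic split evenly
        return 1.0 / (mu - lam / 2)

    mu = 1.0
    for lam in (0.5, 1.0, 1.5, 1.9):
        print(f"lam={lam}: fast={r_fast_single(lam, mu):.2f}  "
              f"M/M/2={r_mm2(lam, mu):.2f}  split={r_two_separate(lam, mu):.2f}")

At lam=1.9 the three mean response times come out roughly 10, 10.3 and 20 time units: the fast single server wins, the shared queue is close behind, and the split queues fall far back.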

542 (CMG-T): CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 1

Room: Peraux
CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 1
Odysseas Pentakalos (SYSNET International, Inc., USA)
CMG-T
As the amount of data and the computational resources needed to process that data exceed the capacity of a single machine, it becomes necessary to distribute the load across multiple machines. Hadoop is an application framework that allows for the processing of large amounts of data to be distributed across any number of servers without requiring the user to manually deal with the complexities of distributing the work and handling network and server failures. In this tutorial we will introduce the audience to Hadoop and the MapReduce framework and how it can be utilized to process large amounts of log data to extract useful information. During the workshop we will demonstrate the use of this framework in analyzing a collection of server logs, making sure the audience members will be able to apply the techniques learned to their work.
Presenter bio: Dr. Odysseas Pentakalos is Chief Technology Officer of SYSNET International, Inc., where he provides clients consulting services on the architecture of large-scale, high-performance enterprise applications, focusing on predictive analytics and health information exchange solutions. He holds a Ph.D. in Computer Science from the University of Maryland. He has published dozens of papers in conference proceedings and journals, is a frequent speaker at industry conferences, and is the co-author of the book Windows 2000 Performance Guide, published by O'Reilly. Odysseas can be reached at odysseas@sysnetint.
Odysseas Pentakalos
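To make the log-analysis use case concrete, here is a minimal Hadoop Streaming style mapper/reducer, our sketch rather than the tutorial's code, that counts HTTP status codes in common-log-format access logs. The field position and the invocation shown in the comment are assumptions for a typical cluster:

    # Run as, e.g.:
    #   hadoop jar hadoop-streaming.jar -input /logs -output /out \
    #       -mapper "python mr.py map" -reducer "python mr.py reduce"
    import sys

    def mapper():
        for line in sys.stdin:
            parts = line.split()
            if len(parts) > 8:            # status code position in common log format
                print(f"{parts[8]}\t1")

    def reducer():
        current, count = None, 0
        for line in sys.stdin:            # Hadoop delivers keys already sorted
            key, val = line.rstrip("\n").split("\t")
            if key != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = key, 0
            count += int(val)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        (mapper if sys.argv[1] == "map" else reducer)()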

543 (APM): Turning Performance Data into Actions

Chapman Lever, Rigor
Room: Draper

The web continues to grow with richer, more vivid content and functionality. Unfortunately, a handful of simple yet all too common frontend mistakes can lead to slow websites and a terrible user experience. These performance issues are amplified even further for websites that see more than 50% of traffic coming from mobile devices. While many website and web application owners understand the old adage "you can't improve what you aren't measuring," few monitoring technologies provide clear explanations of why a site is performing poorly or define plans for remediating common problems. In this presentation, we will go beyond data samples and measurements to explore the reasons sites are slow. Specifically, we will show 3 simple optimizations to increase speed and user happiness on today's web.

544 (CMG-T): CMG-T: Java - Part 2

Room: Cavalier
CMG-T: Java - Part 2
Peter Johnson (Unisys Corporation, USA)
CMG-T
Attendees at these CMG-T sessions will benefit from my many years of doing Java performance tuning, including in our lab where we ran industry standard benchmarks, in our application excellence centers where I have helped tune our customers' real-world applications, and in day-to-day operations of applications we have running in our data center. I will cover the following topics: 1) Analyzing the garbage collector (GC) a) Understanding how the GC works b) Gathering GC data (there are three different formats for this data) c) Graphing the GC data and understanding what the graphs mean d) Examining some real-world examples, what the GC graphs showed, what tuning was done, what the results were (hint: significantly improved performance) 2) Survey of various GC algorithms a) Description of the default collector b) Description of the parallel collector c) Description of the mostly-concurrent mark/sweep collector i) Using a parallel collector in conjunction with the mark/sweep collector d) Description of the new garbage-first collector available in JDK 7 e) Description of the pros and cons of each collector 3) Miscellaneous JVM tuning and tips for solving real-world Java issues.
Presenter bio: Peter Johnson has 35 years of IT industry experience, mostly in application development. For many years he was the chief architect of a team that analyzed performance of Java applications on large-scale Intel-based machines and evaluated various open source software for enterprise readiness. He is currently a lead architect for Unisys Choreographer, a cloud-based solution. Peter is a frequent speaker at the annual CMG conference, speaking mainly on Java performance. He is also a co-author of JBoss in Action.
Peter Johnson

545 (zOS): Invited: Lessons Learned from Implementing an IDAA

Room: Naylor
Invited: Lessons Learned from Implementing an IDAA
Scott Chapman (Enterprise Performance Strategies, USA)
zOS
The IBM DB2 Analytics Accelerator has the potential to revolutionize DB2 on z/OS. The appliance transparently makes certain queries run in a tiny fraction of the time they would have taken to run in DB2. It's almost like magic. But of course the implementation of any sufficiently advanced technology is a little more complicated than "just plug it in". Come to this presentation to hear the lessons Scott Chapman learned while implementing an IDAA. While implementation is not really difficult, there are some surprising details and potential issues that you should understand before starting. You will leave this session having learned some of the IDAA lessons the easy way: without having to live through them. Fair warning though: you may very well also leave wanting to implement an IDAA!
Presenter bio: Scott Chapman has over two decades of experience in the IBM mainframe environment. Much of this experience has focused on performance, from both the application and systems perspective. He's written COBOL application code and Assembler system exit code. His mainframe responsibilities have spanned application development, performance tuning, capacity planning, software cost management, system tuning, sysplex configuration, WLM configuration, and most other facets of keeping a mainframe environment running effectively. Scott has spoken extensively at user group meetings and was honored to receive the Computer Measurement Group's 2009 Mullen award, and also co-authored CMG's 2012 best paper. Scott is a founding steering committee member of the Central Ohio Mainframe User's Group.
Scott Chapman

546 (ARCAP): Data Correlation for Capacity Management

Room: Jones
Data Correlation for Capacity Management
Dale Feiste (Metron-Athene Inc., USA)
ARCAP
Correlation is used across many disciplines to identify predictive relationships that can be used in decision support. Correlating capacity and performance data is an important tool that analysts should be well versed in. Many software applications are available to assist the analyst in finding correlations and identifying the significance of those dependencies. A classic example is correlating workload volumes to resource consumption when calibrating models. Many types of data can be correlated to gain insight into what drives resource utilization and performance throughout the entire computing environment. This paper presents a high-level discussion of using correlation in practice and does not attempt a rigorous mathematical explanation of the underlying statistics. A rigorous mathematical review can be found online at many websites with an academic focus for those readers who are interested. We will review basic concepts of correlation, significance coefficients, limitations of correlation, data types, and examples. The purpose is to give readers a better working knowledge of how correlation can be used in practice to make informed decisions regarding capacity and performance management. • Basic concepts of correlation and dependence • Correlation coefficients • Limitations of correlation • Types of data to correlate • Examples of using correlation
Presenter bio: Dale is a consultant at Metron-Athene with over 15 years of experience in systems performance and capacity management. Dale has broad knowledge in many aspects of capacity management and performance engineering. He has worked at some of the largest financial firms in the United States. He holds many certifications across a diverse set of technologies, and a degree in computer information systems from Excelsior College. Dale attended his first CMG conference in 2000.
Dale Feiste
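A minimal sketch of the paper's core move, with invented numbers: correlate a business driver with resource consumption, check the strength of the relationship, and only then use the fitted line in a capacity model:

    import numpy as np

    tx_per_hour = np.array([1200, 1900, 2500, 3100, 4000, 4800, 5500])
    cpu_util    = np.array([22.0, 30.5, 38.0, 45.5, 57.0, 66.0, 74.5])

    r = np.corrcoef(tx_per_hour, cpu_util)[0, 1]
    slope, intercept = np.polyfit(tx_per_hour, cpu_util, 1)
    print(f"r={r:.3f}, r^2={r * r:.3f}")        # strength of the relationship
    print(f"CPU% ~ {intercept:.1f} + {slope * 1000:.2f} per 1000 tx/hr")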

547 (CONF): Vendor Tools: IBM Proof of Technology Labs - Part 1

Norman Hollander
Room: Morrison

This session provides a hands-on lab environment for you to explore new innovations for z Systems Capacity Management Analytics. IBM Capacity Management Analytics (CMA) is an end-to-end IT analytics platform solution for the data center that helps customers understand the current state, predict future growth, and perform gap analysis on various "what-if" scenarios - and then take what you have learned to better handle business model dynamics, reduce surprises, manage the economic performance of IT investments (SCA, anomaly detection, distributed, z Systems), and improve quality of service across your entire IT complex. As an IT analytics platform, CMA can also be used to develop new predictions and reports. The hands-on lab will show you the basics of developing a quick time-series model within SPSS Modeler and reporting on the data through Cognos BI. This unit takes about 60-90 minutes, self-paced, at your choice. Instructors and hand-outs are available. For those attendees who did not attend Tuesday labs 317 and 327, those labs will be available during these sessions.

Thursday, November 5, 15:15 - 15:45

CONF: Break

Thursday, November 5, 15:45 - 16:45

551 (ITSM): Maturing the Capacity Management Process

Room: Anacacho
Maturing the Capacity Management Process
Jamie Baker (Metron Technology Ltd, United Kingdom (Great Britain))
ITSM
The advent of next-generation infrastructure technologies such as infrastructure as a service (IaaS), software-defined data centers, and open source technologies makes understanding capacity and performance from a business, service, and component level critical but more complicated than ever before. Most organizations have a Capacity Management process whether they realize it or not. Many of these organizations, however, operate these processes at a much lower maturity level than desired. This presentation will cover the capacity management process according to good practice guidelines and attendees will learn about the skills and tools needed to properly mature their processes - from monitoring and measuring to analysis, prediction, and reporting. Topics covered will include: • Best Practice guidelines for Capacity Management • Process analysis and improvement • Business, Service, and Component Capacity Management • Achieving key goals and aligning IT services to business functions
Presenter bio: Jamie has been an IT professional since 1998 after graduating from the University of Kent with a BSc in Management Science. After initially working on UNIX systems as an Operator and then a Systems Administrator, he joined Metron in 2002 and has been working on Capacity Management projects and supporting Metron's Athene tool ever since. Jamie is a Principal Consultant with extensive IT experience, specifically within Capacity Management of virtualized and distributed systems.
Jamie Baker

552 (CMG-T): CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 2

Room: Peraux
CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 2
Odysseas Pentakalos (SYSNET International, Inc., USA)
CMG-T
As the amount of data and the computational resources needed to process that data exceed the capacity of a single machine, it becomes necessary to distribute the load across multiple machines. Hadoop is an application framework that allows for the processing of large amounts of data to be distributed across any number of servers without requiring the user to manually deal with the complexities of distributing the work and handling network and server failures. In this tutorial we will introduce the audience to Hadoop and the MapReduce framework and how it can be utilized to process large amounts of log data to extract useful information. During the workshop we will demonstrate the use of this framework in analyzing a collection of server logs, making sure the audience members will be able to apply the techniques learned to their work.
Presenter bio: Dr. Odysseas Pentakalos is Chief Technology Officer of SYSNET International, Inc., where he provides clients consulting services on the architecture of large-scale, high-performance enterprise applications, focusing on predictive analytics and health information exchange solutions. He holds a Ph.D. in Computer Science from the University of Maryland. He has published dozens of papers in conference proceedings and journals, is a frequent speaker at industry conferences, and is the co-author of the book Windows 2000 Performance Guide, published by O'Reilly. Odysseas can be reached at odysseas@sysnetint.
Odysseas Pentakalos

553 (Featured Speaker): Four Steps to Performance Risk Mitigation

Room: Draper
Four Steps to Performance Risk Mitigation
Rex Black (RBCS, USA)
Featured Speaker
Are you one of those people who has worked on a project where performance was the last item on the agenda--right up to the point where it blew up in everyone's face? Have you seen whole projects cancelled due to performance problems? It doesn't have to be this way. Nasty end-of-project performance surprises are avoidable. If you've suffered through one or two of these disasters and are looking to avoid them in the future, this presentation will illustrate, through a real-world case study, a four step process you can use to avoid performance disasters, with a minimum of fuss, drama, and expense.
Presenter bio: Rex Black is President of RBCS (www.rbcs-us.com), a worldwide leader in testing services, including consulting, outsourcing, assessment, and training. RBCS has over 100 clients spanning twenty countries on six continents, and Rex’s best-seller, Managing the Testing Process, has reached over 100,000 readers on six continents. Rex is past President of the ASTQB and the ISTQB.

554 (CMG-T): CMG-T: Java - Part 3

Room: Cavalier
CMG-T: Java - Part 3
Peter Johnson (Unisys Corporation, USA)
CMG-T
Attendees at these CMG-T sessions will benefit from my many years of doing Java performance tuning, including in our lab where we ran industry standard benchmarks, in our application excellence centers where I have helped tune our customers' real-world applications, and in day-to-day operations of applications we have running in our data center. I will cover the following topics: 1) Analyzing the garbage collector (GC) a) Understanding how the GC works b) Gathering GC data (there are three different formats for this data) c) Graphing the GC data and understanding what the graphs mean d) Examining some real-world examples, what the GC graphs showed, what tuning was done, what the results were (hint: significantly improved performance) 2) Survey of various GC algorithms a) Description of the default collector b) Description of the parallel collector c) Description of the mostly-concurrent mark/sweep collector i) Using a parallel collector in conjunction with the mark/sweep collector d) Description of the new garbage-first collector available in JDK 7 e) Description of the pros and cons of each collector 3) Miscellaneous JVM tuning and tips for solving real-world Java issues.
Presenter bio: Peter Johnson has 35 years of IT industry experience, mostly in application development. For many years he was the chief architect of a team that analyzed performance of Java applications on large-scale Intel-based machines and evaluated various open source software for enterprise readiness. He is currently a lead architect for Unisys Choreographer, a cloud-based solution. Peter is a frequent speaker at the annual CMG conference, speaking mainly on Java performance. He is also a co-author of JBoss in Action.
Peter Johnson

555 (APM): Invited: Determination of Web Performance Envelope

Room: Naylor
Invited: Determination of Web Performance Envelope
Paddy Ganti (Interana)
APM
In aerodynamics, the flight envelope of an aircraft refers to the capabilities of a design in terms of airspeed and load factor or altitude. We extend the metaphor here to a website and define a performance envelope for a site in terms of network optimization, speed and responsiveness, and finally memory management. We do a real-world demo of a website to follow up on the points we raise, using WebPageTest.
Presenter bio: Paddy Ganti is currently an architect at Interana, where he tries to peek into the future by knowing something that he doesn't know. Previously he was a Director of Solutions at Instart Logic, where he provided compelling technical analyses to prospects, convincing them to become customers. Prior to this, he was a Software Engineer at Facebook, where he worked on site and mobile performance. Paddy is most interested in protocols like HTTP, TCP and DNS and their role in making Web Applications performant. Paddy has learned about various Internet Service Delivery Models by working previously at companies like CDNetworks, Akamai, Netli and Keynote.
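Since the demo leans on WebPageTest, here is a hedged sketch of driving it programmatically through its public HTTP API. The endpoint names follow the public documentation, but the exact response fields, the API key, and the polling behavior should be verified against your instance:

    import time
    import requests

    API = "https://www.webpagetest.org"
    resp = requests.get(f"{API}/runtest.php",
                        params={"url": "https://example.com", "f": "json",
                                "k": "YOUR_API_KEY"}).json()
    test_id = resp["data"]["testId"]

    while True:
        result = requests.get(f"{API}/jsonResult.php",
                              params={"test": test_id}).json()
        if result["statusCode"] == 200:      # 1xx status codes mean still running
            break
        time.sleep(10)

    fv = result["data"]["runs"]["1"]["firstView"]
    print(fv["TTFB"], fv["loadTime"], fv["fullyLoaded"])   # milliseconds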

556 (ORG): There's Something Happening Here, but What It is Ain't Exactly Clear - Capturing the Real User Experience

Room: Jones
There's Something Happening Here, but What It is Ain't Exactly Clear - Capturing the Real User Experience
Kyle Parrish (Southern CMG, USA)
ORG
Business is demanding better visibility into the customer experience! IT and Business Operations need more proactive information delivery. Real User Experience Monitoring - the ability to monitor response times for "real users" when they utilize an application - provides insights into that customer experience that, to this point, were extremely difficult to capture and measure. In this presentation I will talk about how effective Real User Experience Monitoring enables IT operations and application stakeholders to assure that the real end users of an application or website are experiencing the best possible performance. I will discuss the tools in the market, the capabilities they provide, and the organizational challenges of building true end-to-end real user monitoring in a large enterprise; the technical challenges are only a part.

557 (CONF): Vendor Tools: IBM Proof of Technology Labs - Part 2

Norman Hollander
Room: Morrison

This session provides a hands-on lab environment for you to explore new innovations for z Systems Capacity Management Analytics. IBM Capacity Management Analytics (CMA) is an end-to-end IT analytics platform solution for the data center that helps customers understand the current state, predict future growth, and perform gap analysis on various "what-if" scenarios - and then take what you have learned to better handle business model dynamics, reduce surprises, manage the economic performance of IT investments (SCA, anomaly detection, distributed, z Systems), and improve quality of service across your entire IT complex. As an IT analytics platform, CMA can also be used to develop new predictions and reports. The hands-on lab will show you the basics of developing a quick time-series model within SPSS Modeler and reporting on the data through Cognos BI. This unit takes about 60-90 minutes, self-paced, at your choice. Instructors and hand-outs are available. For those attendees who did not attend Tuesday labs 317 and 327, those labs will be available during these sessions.

Thursday, November 5, 16:45 - 17:00

CONF: Break

Thursday, November 5, 17:00 - 18:00

561 (Featured Speaker): No Session Scheduled

Room: Anacacho

562 (CMG-T): CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 3

Room: Peraux
CMG-T: Using Hadoop and MapReduce to Process Big Data - Part 3
Odysseas Pentakalos (SYSNET International, Inc., USA)
CMG-T
As the amount of data and the computational resources needed to process that data exceed the capacity of a single machine, it becomes necessary to distribute the load across multiple machines. Hadoop is an application framework that allows for the processing of large amounts of data to be distributed across any number of servers without requiring the user to manually deal with the complexities of distributing the work and handling network and server failures. In this tutorial we will introduce the audience to Hadoop and the MapReduce framework and how it can be utilized to process large amounts of log data to extract useful information. During the workshop we will demonstrate the use of this framework in analyzing a collection of server logs, making sure the audience members will be able to apply the techniques learned to their work.
Presenter bio: Dr. Odysseas Pentakalos is Chief Technology Officer of SYSNET International, Inc., where he provides clients with consulting services on the architecture of large-scale, high-performance enterprise applications, focusing on predictive analytics and health information exchange solutions. He holds a Ph.D. in Computer Science from the University of Maryland. He has published dozens of papers in conference proceedings and journals, is a frequent speaker at industry conferences, and is co-author of the book Windows 2000 Performance Guide, published by O'Reilly. Odysseas can be reached at odysseas@sysnetint.
Odysseas Pentakalos

563 (ARCAP): TBD

Room: Draper

564 (ITSM): Invited: Summary of 4-Part Series Published in CMG Journal

Room: Cavalier
Invited: Summary of 4-Part Series Published in CMG Journal
Ann Dowling (MetLife, USA)
ITSM
This paper is a summary of the 4-part series published in the CMG Journal on "Exploring Analytics to Enable the Business and Service Value of Capacity Planning," covering:
Part 1 - Introduction to Analytics for Capacity Planning
Part 2 - Analytics and the Capacity Management Information System
Part 3 - Analytics Techniques in Capacity Management: Forecasting and Modeling
Part 4 - Visualization for Capacity Planning
The paper will augment the series with current updates.
Presenter bio: Ann Dowling is Director of Capacity & Forecast Engineering at MetLife. Ann is certified in ITIL Foundations for IT Service Management with decades of experience in various disciplines including capacity planning, process architecture, performance engineering, and IT accounting. Ann has a special interest in business-driven forecasting and analytics that underscore the linkage of business capacity management, service capacity management, and component capacity management.
Ann Dowling

565 (ARCAP): Essential Reporting for Capacity and Performance Management

Room: Naylor
Essential Reporting for Capacity and Performance Management
Dale Feiste (Metron-Athene Inc., USA)
ARCAP
Reporting is a cornerstone of the capacity and performance management process, especially when it identifies problems before they happen. Enterprise datacenters continue to evolve with new technology and capabilities, external cloud providers enable functionality to be moved outside the datacenter, and more components are tied together by applications across different infrastructure environments than ever before. Managing capacity and performance for all of these environments is a difficult task, and it is sometimes given low priority, neglected, or not done at all; automation is the key to managing it in complex environments. Implementing an effective reporting process requires a good understanding of who the customers will be and what they need. Creating reports that are not used can lead to the false perception that capacity and performance management is not worth the effort, so cost justification may be a key part of the reporting itself, and visibility of success and management support are often required to show value. Certain types of reporting should be implemented for different audiences, and presentation style can have a big impact on how reporting output is perceived. Topics covered:
• Proactive reporting
• Evolving datacenters
• Return on investment
• Visibility and management support
• Report types and presentation style
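As a hypothetical sketch of the proactive, automated reporting described above (not material from the paper), the Python below fits a least-squares trend to each server's utilization history and flags servers projected to cross a capacity threshold within the planning horizon; the input shape, threshold, and horizon are all assumptions.

    def proactive_capacity_report(samples, threshold=80.0, horizon=30):
        """Flag servers whose linear utilization trend crosses `threshold`
        within `horizon` days. `samples` maps server name to a list of
        daily utilization percentages (illustrative input shape)."""
        flagged = []
        for server, series in samples.items():
            n = len(series)
            if n < 2:
                continue
            # Least-squares slope of utilization versus day index
            xbar, ybar = (n - 1) / 2, sum(series) / n
            slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
                     / sum((i - xbar) ** 2 for i in range(n)))
            projected = series[-1] + slope * horizon
            if projected >= threshold:
                flagged.append((server, round(projected, 1)))
        return flagged

    usage = {"app01": [55, 58, 61, 63, 66], "db01": [40, 41, 40, 42, 41]}
    print(proactive_capacity_report(usage))  # app01 trends past 80%; db01 does not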
Presenter bio: Dale is a consultant at Metron-Athene with over 15 years of experience in systems performance and capacity management. Dale has broad knowledge in many aspects of capacity management and performance engineering. He has worked at some of the largest financial firms in the United States. He holds many certifications across a diverse set of technologies, and a degree in computer information systems from Excelsior college. Dale attended his first CMG conference in 2000.
Dale Feiste

566 (Featured Speaker): High Performance Computing Tutorial

Room: Jones
High Performance Computing Tutorial
Sundarraj Kaushik (Tata Consultancy Services, India); Manoj Nambiar (Tata Consultancy Services & Institute of Electrical and Electronics Engineers, India)
CMG-T
High Performance Computing (HPC) can be considered a set of hardware and software technologies developed to meet the performance requirements of compute- and data-intensive applications. In this tutorial we will review these technologies from the perspective of one such application. Technologies covered include server clusters, multi-core processors, graphics processing units (GPUs), and the message passing interface (MPI). The performance-sensitive parameters of each technology will be discussed, along with general guidelines for achieving optimal application performance using these technologies. These techniques will be reinforced with optimization case studies covering each technology. At the end of the tutorial, participants will have learned optimization tricks for new technologies that they can apply in the workplace. Participants: no background in HPC is required; basic programming skills are required; knowledge of computer organization is desirable.
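As a taste of the message passing model the tutorial covers, here is a minimal data-parallel sketch using mpi4py, the Python MPI bindings. The tutorial itself may well use C or another language; this is an illustrative analogue in which each rank sums its own slice of a range and a collective reduction combines the partial results.

    from mpi4py import MPI  # Python bindings for the Message Passing Interface

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank computes a partial sum over its own slice of the problem...
    total_n = 1_000_000
    chunk = total_n // size
    start = rank * chunk
    stop = total_n if rank == size - 1 else start + chunk
    partial = float(sum(range(start, stop)))

    # ...and a collective reduction combines the partial results on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum of 0..{total_n - 1} = {total:.0f}")

    # Run with, e.g.: mpiexec -n 4 python partial_sum.py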
Presenter bio: Sundarraj started his career on the IBM 370. He was part of a team that worked on Stratus systems to set up automated trading for what is now the largest stock exchange in India. For the last 14+ years he has been developing web applications and is closely involved in performance tuning of web and other applications.

567 (CONF): Vendor Tools

Room: Morrison

Thursday, November 5, 18:00 - 18:15

CONF: Break

Thursday, November 5, 18:15 - 19:15

CONF: BOFs / Exhibitor Presentation

Room: Peraux

CONF: BOFs / Exhibitor Presentation

Room: Draper

CONF: BOFs / Exhibitor Presentation

Room: Cavalier

CONF: BOFs / Exhibitor Presentation

Room: Naylor

Thursday, November 5, 19:30 - 22:30

CONF: Gala Reception

Location - TBD