Program for CMG imPACt 2017

Time / Rooms: Louisiana Ballroom II, Beauregard, St. Landry, Pointe Coupe, LaFourche, Feliciana West, Feliciana East

Monday, November 6

08:00 am-09:45 am   TNG1: TRAINING: Using Hadoop, MapReduce, and Spark to Process Big Data   TNG2: TRAINING: Capacity Management Maturity TNG3: TRAINING: Quality in the Data Center: Data Collection and Analysis - & - Scaling Software Performance TNG4: TRAINING: Modeling and Forecasting  
09:45 am-10:00 am REFRESHMENT BREAK
10:00 am-11:45 am   TNG1: TRAINING: Machines are for Answers, Humans are for Questions   TNG2: TRAINING: Capacity Management Maturity TNG3: TRAINING: Software Performance in the Cloud TNG4: TRAINING: Modeling and Forecasting  
12:00 pm-01:00 pm LUNCH
01:30 pm-02:30 pm CMG imPACt OPENING SESSION & CMG BOD ANNUAL BUSINESS MEETING and MICHELSON AWARD PRESENTATION
02:45 pm-03:45 pm 191: Processor Reporting from Capture Ratio to RNI 192: How to Drive Business Value with Capacity Management   194: Do This, Not That--A Guide For Job Seekers & Job Keepers 195: Hybrid IT: The Conflict of Availability, Accessibility, and System of Record 196: The Cost of Performance: Evolving the Capacity Planning Practice  
03:45 pm-04:00 pm REFRESHMENT BREAK
04:00 pm-05:00 pm 1K1: KEYNOTE: Building and Rebuilding a Data Center Every Day            
05:30 pm-07:00 pm WELCOME RECEPTION
07:30 pm-09:30 pm Dinner with Peers

Tuesday, November 7

06:00 am-07:00 am Business District Sneaker Tour
08:00 am-08:30 am Continental BREAKFAST
08:45 am-09:45 am 2K1: KEYNOTE: GIVE A DAMN: The New Business Philosophy!            
09:50 am-10:50 am   202: Enterprise Dilemmas: Innovation on Legacy   204: How to build "Cloudy" Continuous Performance Pipeline as a Service 205: ITIL Capacity Management for the Newbie 206: Best Paper CMG India: Performance Anomaly Detection & Forecasting Model (PADFM) for eRetailer Web application 207: PANEL: Mainframe Expert Panel
10:50 am-11:00 am BEVERAGE BREAK
11:00 am-12:00 pm   222a: Using Performance Measurements to Diagnose Concurrent Programming Issues   224a: Mainframe Capacity Management - Time to Come Out of the Silo 225: Improved IT Operations Management for IT Managers and Capacity Planners 226a: Blockchain Use Case Best Practices 227a: IOT Analytics at the Edge
11:35 am-12:05 pm   222b: Machines are for Answers, Humans are for Questions   224b: Rosetta Stone - How to Speak the Language of Revenue in Performance 226b: Unleash Your Presentation Superpowers! 227b: The Latest IBM Z Performance Brief
12:10 pm-12:15 pm     Roundtable Focus Group Luncheon (12:15 - 1:15 PM)        
12:15 pm-01:15 pm LUNCH
01:15 pm-02:15 pm   242: To MIPS or Not to MIPS, That is the CP Question!   244: Meeting Web Application Performance Service Level Requirements Head-on 245: Proactive Performance Management of FICON Storage Networks with CUP Diagnostics and the IBM z/OS Health Checker 246: Don't Put That "Thing" on our IoT System: SPE for IoT 247: Metrics and Methods that avoid the ITR Trap
02:30 pm-02:45 pm   2L2a: TBD     2L5a: Applying Artificial Intelligence for Performance Engineering 2L6a: Adventures with Charge Back and the Value of a Useful Consistent Lie 2L7a: Business Intelligence in Capacity Management
02:50 pm-03:05 pm   2L2b: Performance Testing Approach to AWS kinesis Stream and Loadrunner     2L5b: IT & Shadow IT (Embrace or Squash) 2L6b: Capacity Management Chronicles: What I Learned in My First 10 Years as a Global Consultant  
03:10 pm-03:25 pm   2L2c: Top Performance Problems found in Large Scale Hybrid-Cloud Applications     2L5c: The Road to Actionable Intelligence is Paved with Minimum, Average, 95th Percentile and Maximum 2L6c: Performance Management Service Level and Activities Calculator  
03:30 pm-03:45 pm REFRESHMENT BREAK
03:45 pm-04:45 pm   282: #SpeedNOLA Hackathon Presentations   284: Megawhosis and Gigawhatsis?! Microprocessors Demystified, Transistors Explained and the Increased Importance of Well-written Software Discussed 285: DevOps: Reliability, Monitoring and Management with Service Asset and Configuration Management 286: Capacity Management Essentials: a Framework for Capacity Analysis 287: PANEL: Application Performance Management in Complex Multi-Platform Environments
05:30 pm-07:30 pm Big Easy Bar Crawl

Wednesday, November 8

08:00 am-08:30 am Continental BREAKFAST
08:45 am-09:45 am 3K1: KEYNOTE: Large Data Interaction, Visualization, and Analysis            
09:50 am-10:50 am   302: Screaming into the Void   304: Best Paper CMG Brazil: z/OS 2.3 in Clouds 305: Continuous Performance Testing: Myths and Realities 306: Performance Engineering and Testing Using Cloud Based Tools 307: PANEL: Can Performance Engineering Leverage Machine Learning and AI?
10:50 am-11:00 am BEVERAGE BREAK
11:00 am-12:00 pm   322a: Benchmarking Deep Learning   324: Impact on Existing Security and Compliance when Migrating to Third-Party Hosted Cloud 325a: What I Learned about DevOps Around the World! 326a: Cost Savings While Increasing Capacity  
11:35 am-12:05 pm   322b: Behaviour-driven Cost Reduction for IT Hardware & Software   325b: Can a Robot Read Your Performance Reports? Deep Learning and Machine Learning for Performance and Capacity Engineers 326b: Demonstrating Return on Investment for Capacity Management 327b: Emerging Workloads: Selecting the Best Execution Venue
12:10 pm-12:15 pm     Women in Tech Luncheon (12:15 -1:15 PM)        
12:15 pm-01:15 pm LUNCH
01:15 pm-02:15 pm   342: 2017 BEST PAPER: Continuous Availability: From the Shift Paradigm to Unmanned Operation. Is it Still a Dream?   344: Removing Silos While Developing A Comprehensive Hybrid Cloud Resiliency Solution 345: Understanding MultiHop FICON Performance, Management, and Configurations 346: Streamlined Model-Driven Performance Engineering 347: z/OS SMT: Deciding Whether to Enable
02:20 pm-02:50 pm   362: The Model Factory - Correlating Server and Database Utilization with Customer Activity   364: SMF 99 - The Lost Gold of WLM Analytics 365: Automated Performance Testing in Preproduction with CI and OSS Tools 366: Benchmarking ML Algorithms and Libraries for Big Data Applications 367: Inside look of z/OS Workload Manager
02:50 pm-03:05 pm REFRESHMENT BREAK
03:05 pm-04:05 pm   372: The History and Future of Monitoring   374: Practical Lessons for Business-Aligned Capacity Management 375: Best Paper Brazil: Planning and Performance Study in the Consolidation of Mainframe CECs 376: The Curse of P90: An Elegant Way to Overcome it Without Magic 377: PANEL: Emerging Technologies: Performance Engineering Implications
04:15 pm-05:15 pm 3K21: KEYNOTE: New Orleans: The Cradle of Civilized Drinking            
05:15 pm-06:15 pm HAPPY HOUR Reception!
07:00 pm-09:00 pm Dinner with Peers

Thursday, November 9

08:00 am-08:30 am Continental BREAKFAST
08:45 am-09:45 am 4K1: KEYNOTE: Is Capacity Management Needed in the Cloud?            
09:50 am-10:50 am     403: Performance Evaluation of Heterogeneous Multi-Queues with Job Replication   405: 2017 BEST PAPER: Achieving CPU (& MLC) Savings through Optimizing Processor Cache    
10:50 am-11:00 am BEVERAGE BREAK
11:00 am-12:00 pm   412: Incorporating Weather Data into Capacity Planning Analysis 413: Rules of Thumb for Response Time Percentiles: How Risky are they? 414: Cloud Capacity Management 415: Performance Insights for the Newest areas of your z/OS Infrastructure    
12:00 pm-01:00 pm LUNCH
01:00 pm-02:00 pm   432: Multivariate IT Capacity Modeling 433: The RNI-based LSPR and the Latest z Systems Performance Brief 434: Dynamic Performance Management of Big Data Clusters 435: Performance Aware Capacity Provisioning and Management    

Monday, November 6

Monday, November 6, 08:00 - 09:45

TNG1 (EMT): TRAINING: Using Hadoop, MapReduce, and Spark to Process Big Data

Room: Beauregard
8:00 Using Hadoop, MapReduce, and Spark to Process Big Data
EMT
As the amount of data and the computational resources needed to process that data exceed the capacity of a single machine, it becomes necessary to distribute the load across multiple machines. Hadoop is an application framework that allows the processing of large amounts of data to be distributed across any number of servers without requiring the user to manually deal with the complexities of distributing the work and handling network and server failures. In this tutorial we will first introduce the audience to Hadoop and the MapReduce framework and how they can be utilized to process large amounts of data. We will then discuss the Spark distributed data processing engine, which has gained adoption at a considerable pace recently, and look at stream processing systems such as Kafka and Storm. We will then bring all the ideas together by building a framework for processing performance data from a data center to extract useful metrics.
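For flavor, here is a minimal PySpark sketch (not taken from the course material) of the map/reduce style the training covers, aggregating hypothetical per-host CPU samples; the input path and field layout are illustrative assumptions.

    # A minimal sketch: average CPU per host from hypothetical "host,timestamp,cpu_pct" records.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("perf-metrics-sketch").getOrCreate()
    lines = spark.sparkContext.textFile("hdfs:///perf/cpu_samples.csv")   # illustrative path

    def parse(line):
        host, ts, cpu = line.split(",")
        return (host, (float(cpu), 1))                       # map: key by host, carry (cpu, count)

    per_host = (lines.map(parse)
                     .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))   # reduce: sum cpu and counts
                     .mapValues(lambda s: s[0] / s[1]))                      # average per host

    for host, avg_cpu in per_host.collect():
        print(host, round(avg_cpu, 1))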
Presenter bio: Dr. Odysseas Pentakalos is Chief Technology Officer of SYSNET International, Inc., where he focuses on providing his clients consulting services on the architecture of large-scale, high-performance enterprise applications, with an emphasis on predictive analytics and health information exchange solutions. He holds a Ph.D. in Computer Science from the University of Maryland. He has published dozens of papers in conference proceedings and journals, is a frequent speaker at industry conferences, and is the co-author of the book Windows 2000 Performance Guide, published by O'Reilly. Odysseas can be reached at odysseas@sysnetint.
Odysseas Pentakalos

TNG2 (CAP): TRAINING: Capacity Management Maturity

Room: Pointe Coupe
8:00 Capacity Management Maturity
CAP
This multi-hour presentation/workshop sets the scene on the Capacity Management function or process and why we need it. How is the process constructed, and what governance does it provide, or is it supposed to provide? The sessions will cover multiple elements of what a mature CM process should look like and ultimately what benefits should be expected from it. We take a look at the ITIL-defined objectives for CM, what information is required and how we should report that information to different stakeholders within the business, as well as looking at key targeted CM-specific questions and the reasoning behind them. The session will explain why we need a Capacity Management Information System (CMIS), describe in detail how each maturity level from Initial to Optimised is defined and where the focus lies at each level, and demonstrate how proven generic process improvement techniques can benefit not just CM but the business/organisation as a whole.
Presenter bio: Jamie has been an IT professional since 1998 after graduating from the University of Kent with a BSc in Management Science. After initially working on UNIX systems as an Operator and then a Systems Administrator, he joined Metron in 2002 and has been working on Capacity Management projects and supporting Metron's Athene tool ever since. Jamie is Metron's Product Manager with extensive IT experience, specifically within Capacity Management of virtualized and distributed systems.
Jamie Baker

TNG3 (PERF): TRAINING: Quality in the Data Center: Data Collection and Analysis - & - Scaling Software Performance

Room: LaFourche
8:00 Quality in the Data Center: Data Collection and Analysis
PERF
The emergence of large-scale software deployments in the data center has led to several challenges in software performance analysis. This presentation describes how performance analysis has changed. It highlights the transition from single server performance tools to large-scale analytics that span across data centers. It introduces data collection and analysis methods that can help in assuring quality in the data center.
8:52 Scaling Software Performance
PERF
Effective software performance analysis needs to be conducted by crossing multiple disciplines such as algorithms, data structures, effective coding, performance data collection and its associated overheads, computer architecture, operating systems, containers and virtual machines, statistical analysis, machine learning and applied mathematics. However, few students are taught all of these subjects in school. This workshop starts with software performance in the small and ends with applying analytics to software performance scaling analysis. It also presents seven common mistakes made in the industry.
Presenter bio: Kingsum Chow is currently a Chief Scientist at Alibaba Infrastructure Services. Before joining Alibaba in May 2016, he was a Principal Engineer and Chief Data Scientist in the System Technology and Optimization (STO) division of the Intel Software and Services Group (SSG). He joined Intel in 1996 after receiving Ph.D. in Computer Science and Engineering from the University of Washington. Since then, he has been working on performance, modeling and analysis of software applications. At the Oracle OpenWorld in October 2015, Intel and Oracle CEO's announced the joint Cloud lab called project Apollo led by Kingsum in the opening keynote in front of tens of thousands of software developers. He has been issued more than 20 patents. He has presented more than 80 technical papers. In his spare time, he volunteers to coach multiple robotics teams to bring the joy of learning Science, Technology, Engineering and Mathematics to the K-12 students in his community.
Kingsum Chow

TNG4 (PERF): TRAINING: Modeling and Forecasting

Room: Feliciana West
8:00 Modeling and Forecasting
PERF
Although most computing environments are heterogeneous, computer system modeling is, in most ways, platform neutral. The same techniques and tools can be used to model zSeries, Unix / Linux, and Windows. At the heart of these models is the essential queueing network. This two-hour presentation provides the details of the essential queueing network, including the necessary statistics that need to be collected from the system, as well as various modeling techniques that yield insights that cannot be gleaned from observing the actual computer system. Once the model is validated, it can be used to explore "what-if" scenarios where either the workload or the underlying configuration can be changed in the model so that the resulting service levels can be observed. If time permits, an additional section on the subject of time series estimation and forecasting will be presented. This course will not teach you everything you need, but it will give you a full survey of the various approaches with a full bibliography for future reference.
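As a worked illustration of the "essential queueing network" idea, the sketch below uses a standard open single-queue (M/M/1) approximation, not material taken from the course, to show how utilization drives response time and why doubling the load more than doubles the wait.

    def single_queue_metrics(arrival_rate, service_time):
        """arrival_rate in transactions/sec, service_time in sec/transaction."""
        utilization = arrival_rate * service_time            # utilization law: U = X * S
        if utilization >= 1.0:
            raise ValueError("queue is unstable (U >= 1)")
        response_time = service_time / (1.0 - utilization)   # open M/M/1 queue: R = S / (1 - U)
        queue_length = arrival_rate * response_time          # Little's law: N = X * R
        return utilization, response_time, queue_length

    # "What-if": double the arrival rate and watch response time grow non-linearly.
    for rate in (20.0, 40.0):
        u, r, n = single_queue_metrics(arrival_rate=rate, service_time=0.020)
        print(f"X={rate:5.1f}/s  U={u:.0%}  R={r * 1000:.1f} ms  N={n:.2f}")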
Presenter bio: Dr. Salsburg is an independent consultant. Previously, Dr. Salsburg was a Distinguished Engineer and Chief Architect for Unisys Technology Products. He was founder and president of Performance & Modeling, Inc. Dr. Salsburg has been awarded three international patents in the area of infrastructure performance modeling algorithms and software. In addition, he has published over 70 papers and has lectured world-wide on the topics of Real-Time Infrastructure, Cloud Computing and Infrastructure Optimization. In 2010, the Computer Measurement Group awarded Dr. Salsburg the A. A. Michelson Award.
Michael Salsburg

Monday, November 6, 09:45 - 10:00

CONF: REFRESHMENT BREAK

Monday, November 6, 10:00 - 11:45

TNG1 (CONF): TRAINING: Machines are for Answers, Humans are for Questions

Room: Beauregard
10:00 Machines are for Answers, Humans are for Questions
PERF
Big data is the big thing, and AI and predictive analytics will put us all out of a job, or so some think. We all know that computers are far better than humans at processing large amounts of data. They are also, if artificially intelligent enough, especially good at doing the things you tell them to do that would otherwise require human intelligence and unrealistic amounts of time. So instead of human vs. machine, it can become human and machine vs. problem. This is especially needed for predicting and diagnosing IT infrastructure performance and cost-efficiency problems. These problems are simply too time consuming for human analysts, who cannot proactively search the vast and complex data sources for answers that machines can provide in seconds. In this presentation, we will show how enabling the machine to utilize domain knowledge and workload information, also known as human intelligence, can be used in modern IT Operations Analytics (ITOA) to automatically highlight the areas that are the most important for a human eye and brain to spend time on. This discussion applies to any IT environment; our examples will be from z/OS and VMware SAN.
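A minimal sketch of the general idea (metric names, thresholds and rule structure are illustrative assumptions, not the presenter's product): encode workload knowledge as expected ranges and surface only the exceptions for human attention.

    # Illustrative expected ranges, standing in for encoded domain/workload knowledge.
    EXPECTED_RANGES = {
        "io_response_ms": (0.0, 5.0),
        "cpu_busy_pct":   (0.0, 85.0),
        "cache_hit_pct":  (90.0, 100.0),
    }

    def highlight(measurements):
        """Return only the findings a human analyst should spend time on."""
        findings = []
        for metric, value in measurements.items():
            low, high = EXPECTED_RANGES.get(metric, (float("-inf"), float("inf")))
            if not low <= value <= high:
                findings.append((metric, value, (low, high)))
        return findings

    print(highlight({"io_response_ms": 12.4, "cpu_busy_pct": 70.0, "cache_hit_pct": 97.0}))
    # -> [('io_response_ms', 12.4, (0.0, 5.0))]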
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer

TNG2 (CAP): TRAINING: Capacity Management Maturity

Room: Pointe Coupe
10:00 Capacity Management Maturity
CAP
This multi-hour presentation/workshop sets the scene on the Capacity Management function or process and why we need it. How is the process constructed, and what governance does it provide, or is it supposed to provide? The sessions will cover multiple elements of what a mature CM process should look like and ultimately what benefits should be expected from it. We take a look at the ITIL-defined objectives for CM, what information is required and how we should report that information to different stakeholders within the business, as well as looking at key targeted CM-specific questions and the reasoning behind them. The session will explain why we need a Capacity Management Information System (CMIS), describe in detail how each maturity level from Initial to Optimised is defined and where the focus lies at each level, and demonstrate how proven generic process improvement techniques can benefit not just CM but the business/organisation as a whole.
Presenter bio: Jamie has been an IT professional since 1998 after graduating from the University of Kent with a BSc in Management Science. After initially working on UNIX systems as an Operator and then a Systems Administrator, he joined Metron in 2002 and has been working on Capacity Management projects and supporting Metron's Athene tool ever since. Jamie is Metron's Product Manager with extensive IT experience, specifically within Capacity Management of virtualized and distributed systems.
Jamie Baker

TNG3 (EMT): TRAINING: Software Performance in the Cloud

Room: LaFourche
10:00 Software Performance in the Cloud
EMT
The emergence of large-scale software deployments in the cloud has led to several challenges: (1) measuring software performance in the data center, and (2) optimizing software for resource management. This workshop addresses the two challenges by bringing the knowledge of software performance monitoring in the data center to the world of applying performance analytics. It introduces data transformations for software performance metrics. The transformations enable effective applications of analytics to software performance in the cloud.
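One common transformation of the kind such a workshop alludes to is normalizing raw resource consumption by work completed; the specific transform below is an assumption for illustration, not taken from the workshop.

    # (interval, cpu_seconds_consumed, transactions_completed) -- invented numbers
    samples = [
        ("09:00", 540.0, 120000),
        ("10:00", 930.0, 210000),
        ("11:00", 610.0, 130000),
    ]

    for interval, cpu_s, txns in samples:
        cpu_ms_per_txn = cpu_s * 1000.0 / txns      # normalize: CPU cost per unit of work
        print(f"{interval}: {cpu_ms_per_txn:.2f} ms CPU per transaction")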
Presenter bio: Kingsum Chow is currently a Chief Scientist at Alibaba Infrastructure Services. Before joining Alibaba in May 2016, he was a Principal Engineer and Chief Data Scientist in the System Technology and Optimization (STO) division of the Intel Software and Services Group (SSG). He joined Intel in 1996 after receiving Ph.D. in Computer Science and Engineering from the University of Washington. Since then, he has been working on performance, modeling and analysis of software applications. At the Oracle OpenWorld in October 2015, Intel and Oracle CEO's announced the joint Cloud lab called project Apollo led by Kingsum in the opening keynote in front of tens of thousands of software developers. He has been issued more than 20 patents. He has presented more than 80 technical papers. In his spare time, he volunteers to coach multiple robotics teams to bring the joy of learning Science, Technology, Engineering and Mathematics to the K-12 students in his community.
Kingsum Chow

TNG4 (PERF): TRAINING: Modeling and Forecasting

Room: Feliciana West
10:00 Modeling and Forecasting
PERF
Although most computing environments are heterogeneous, computer system modeling is, in most ways, platform neutral. The same techniques and tools can be used to model zSeries, Unix / Linux, and Windows. At the heart of these models is the essential queueing network. This two-hour presentation provides the details of the essential queueing network, including the necessary statistics that need to be collected from the system, as well as various modeling techniques that yield insights that cannot be gleaned from observing the actual computer system. Once the model is validated, it can be used to explore "what-if" scenarios where either the workload or the underlying configuration can be changed in the model so that the resulting service levels can be observed. If time permits, an additional section on the subject of time series estimation and forecasting will be presented. This course will not teach you everything you need, but it will give you a full survey of the various approaches with a full bibliography for future reference.
Michael Salsburg

Monday, November 6, 12:00 - 13:00

CONF: LUNCH

Louisiana Ballroom 1

Monday, November 6, 13:30 - 14:30

CONF: CMG imPACt OPENING SESSION & CMG BOD ANNUAL BUSINESS MEETING and MICHELSON AWARD PRESENTATION

Louisiana Ballroom II

Monday, November 6, 14:45 - 15:45

191 (MFR): Processor Reporting from Capture Ratio to RNI

Room: Louisiana Ballroom II
2:45 Processor Reporting from Capture Ratio to RNI
MFR
The processor remains the most expensive resource in the data center, both directly with the cost of the hardware itself, and indirectly with the MSU software charges based on processor usage. All installations use some form of processor reporting, but in many cases this reporting was developed many, many years ago. Capture Ratios are still interesting, but there is a lot more now. Today's processor performance relies more and more on efficient use of processor cache. It's also important to look at new metrics like the SMF 113 Hardware Counters and the SMF 99.14 topology data. In this presentation, we will cover processor reporting from Capture Ratio to RNI (Relative Nest Intensity), and show you how understanding these metrics can help you tune your system.
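As a small illustration of the first metric in the title, the capture ratio is simply the share of busy CPU time that reporting attributes to specific workloads; the numbers below are placeholders, not actual SMF fields or values.

    def capture_ratio(captured_cpu_seconds, total_cpu_busy_seconds):
        """Share of busy CPU time that reporting attributes to specific workloads."""
        return captured_cpu_seconds / total_cpu_busy_seconds

    # Example: workloads account for 5,100 of 6,000 busy CPU seconds in an interval.
    print(f"capture ratio = {capture_ratio(5100.0, 6000.0):.0%}")   # -> 85%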
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer

192 (CAP): How to Drive Business Value with Capacity Management

Room: Beauregard
2:45 How to Drive Business Value with Capacity Management
CAP
Capacity management goes a long way towards right-sizing your IT environment. But you might be wondering, "What else can I get out of it?" Find out how capacity management can have a greater impact on your bottom line (and beyond). You'll learn how to:
• Assess your IT maturity today—and figure out where you want to be tomorrow
• Analyze your gaps and develop a roadmap to greater maturity
• Determine the difference between strategic and operational capacity management
• Find greater value—and savings—with smarter capacity management
Presenter bio: Jeff is a product specialist who has answered questions for customers and ‘tire kickers' for more than 20 years. He boasts a background in helping IT organizations get up to speed on their IT environment for their capacity planning and performance management objectives.
Jeff Schultz

194 (EMT): Do This, Not That--A Guide For Job Seekers & Job Keepers

Room: Pointe Coupe
2:45 Do This, Not That--A Guide For Job Seekers & Job Keepers
EMT
The job market can be an unforgiving ground if you don't get every interaction right. If you're new to job hunting or haven't been looking in a while, learn some of the tips that can give you the edge in the market. It can be easier, saner and faster—a little performance planning can make the whole experience better for you. Learn what not to do and, if there's time, learn a bit about how to make yourself a bit more bullet-proof in your current job.
Presenter bio: Denise P. Kalm is the Chief Innovator at Kalm Kreative, Inc., a marketing services organization. Her experience as a performance analyst/capacity planner, software consultant, and then marketing maven at various software companies grounds her work providing contract writing, editing, marketing and speaking services. She is a frequently published author in both the IT world and outside and has 3 books: Lifestorm, Career Savvy-Keeping & Transforming Your Job, Tech Grief - Survive & Thrive Thru Career Losses (with L. Donovan). Kalm is a requested speaker at such venues as SHARE, CMG and ITFMA and has enhanced her skills through Toastmasters where she has earned her ACG/ALB . She is also a personal coach at DPK Coaching.
Denise P Kalm

195 (MFR): Hybrid IT: The Conflict of Availability, Accessibility, and System of Record

Room: LaFourche
2:45 Hybrid IT: The Conflict of Availability, Accessibility, and System of Record
MFR
Today's enterprise organizations, and some smaller companies, are facing a fork in the road: the path to remaining on-prem, where the System of Record is prevalent, or the yellow-brick road to Cloudland for availability and accessibility. The reasons to go down one path or the other are compelling; however, at the end of either, how will you be positioned for the future? Choosing one path over the other may prevent an organization from seeing benefits that will help their bottom line, but there is a way to bridge the two. Knowing the obstacles and hurdles to build that bridge is critical. The conflict can be mitigated.
Presenter bio: As Chief Technology Officer, Jeff brings over 20 years of experience in IT in the manufacturing, telecommunications and financial industries. Prior to joining GT Software, he honed his leadership skills at SITA and 4 Access Communications. Jeff also served as the Chief Software Architect at Broadsource, Inc. in Atlanta. He has a consistent track record of increasing productivity, exceeding marketing and sales expectations and improving customer satisfaction.
Jeff Andrews

196 (PERF): The Cost of Performance: Evolving the Capacity Planning Practice

Room: Feliciana West
2:45 The Cost of Performance: Evolving the Capacity Planning Practice
PERF
What's most important to your business: the performance of IT systems, the cost, or the value? How do you report performance, cost & value to your executives? How does the cost of performance engineering impact the delivery of business services? Executives listen when we describe the results of our performance analysis in terms of dollars and the impact to the bottom line. In this paper, we apply a structured methodology and a set of metrics to evaluate cost, value & performance in a case study for a large financial institution. How does cost affect best execution environment choices? Who understands the value of the business service? How is it measured? We propose some simple metrics to compare: infrastructure & development dollars per transaction, revenue per transaction, and transaction response time, and we describe the process and style for communicating these metrics up to the executive suite.
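A back-of-the-envelope sketch of the kind of metric the paper proposes (all figures invented for illustration):

    monthly_infrastructure_cost = 180000.00    # dollars (invented)
    monthly_development_cost    = 220000.00    # dollars (invented)
    monthly_revenue             = 2600000.00   # dollars (invented)
    monthly_transactions        = 4000000

    cost_per_txn    = (monthly_infrastructure_cost + monthly_development_cost) / monthly_transactions
    revenue_per_txn = monthly_revenue / monthly_transactions

    print(f"cost per transaction:    ${cost_per_txn:.4f}")
    print(f"revenue per transaction: ${revenue_per_txn:.4f}")
    print(f"cost as a share of revenue: {cost_per_txn / revenue_per_txn:.1%}")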
Presenter bio: Amy Spellmann is a Global Practice Principal with 451 Research Advisory, where she specializes in cloud and digital infrastructure capacity planning and application performance. Amy's expertise includes modeling IT energy footprint projections and strategies for managing IT capacity to reduce space, power and cooling consumption in the datacenter. Amy's extensive experience in capacity and performance planning guides Fortune 500 companies in optimizing and managing complex IT infrastructures including private/public/hybrid cloud. One of her specialties is coordinating with IT and business partners to ensure cost-effective service delivery through the entire digital infrastructure stack.
Presenter bio: Richard Gimarc is an independent consultant that specializes in capacity planning, performance engineering and performance analysis. Over the years Richard has developed techniques and applied his expertise in a wide range of complex, diverse and challenging environments. Richard has authored 30+ papers that include topics such as application scalability, green capacity planning and cloud performance. Richard is a regular speaker at both CMG international and regional conferences.
Amy Spellmann, Richard Gimarc

Monday, November 6, 15:45 - 16:00

CONF: REFRESHMENT BREAK

Monday, November 6, 16:00 - 17:00

1K1 (CAP): KEYNOTE: Building and Rebuilding a Data Center Every Day

Louisiana Ballroom II
Room: Louisiana Ballroom II
4:00 Building and Rebuilding a Data Center Every Day
CAP
The Netflix streaming service serves over 100 million customers around the world. The demand for streaming is not constant, rising and falling throughout the day. Rather than maintain constant capacity for peak demand, we deploy and retire enough virtual machines to build a small data center every 24-48 hours. This talk will reveal some of the logistics around managing large capacity in a dynamically changing environment, including some of the tools we use for deployment, monitoring and performance tuning of our service.
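A back-of-the-envelope sketch (illustrative only, not Netflix tooling or data) of why following the demand curve beats holding peak capacity all day:

    import math

    # Invented hourly demand curve in thousands of requests per second.
    hourly_demand_krps = [35, 28, 22, 20, 24, 40, 62, 80, 95, 100, 98, 90,
                          85, 82, 84, 88, 96, 110, 118, 115, 100, 80, 60, 45]
    per_instance_krps = 0.5     # assumed throughput of one virtual machine
    headroom = 1.3              # assumed 30% safety margin

    fleet = [math.ceil(d * headroom / per_instance_krps) for d in hourly_demand_krps]
    peak_sized = max(fleet) * len(fleet)    # instance-hours if sized for peak all day
    demand_sized = sum(fleet)               # instance-hours when tracking demand

    print(f"peak fleet: {max(fleet)} instances")
    print(f"instance-hours saved per day by tracking demand: {peak_sized - demand_sized}")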
Presenter bio: Ed Hunter is the manager of the Performance and Operating Systems group for the streaming service at Netflix. His team helps the owners of the applications which comprise Netflix tune and configure their service for maximum efficiency at reasonable cost. Prior to Netflix he spent time as a director of software at Juniper Networks and Sun Microsystems.
Edward Hunter

Monday, November 6, 17:30 - 19:00

CONF: WELCOME RECEPTION

Parish Hall

Monday, November 6, 19:30 - 21:30

CONF: Dinner with Peers

Monday and Wednesday evenings after the receptions - Go Cajun! Join attendees with similar interests or create a BOF at a bar and go out for dinner at some great restaurants in the area.

Tuesday, November 7

Tuesday, November 7, 06:00 - 07:00

CONF: Business District Sneaker Tour

Join us for the unofficial CMG 5k Run/Walk. Participants will receive a group photo and a high-five. Tuesday morning, 6:00 AM - we'll be waiting for you! Meet in the ground floor hotel lobby.

Tuesday, November 7, 08:00 - 08:30

CONF: Continental BREAKFAST

Louisiana Ballroom I

Tuesday, November 7, 08:45 - 09:45

2K1 (EMT): KEYNOTE: GIVE A DAMN: The New Business Philosophy!

Louisiana Ballroom II
Room: Louisiana Ballroom II
8:45 GIVE A DAMN: The New Business Philosophy!
EMT
Our own actions, individual and collective, determine the world in which we live. The mindset of today's society and business environment has deteriorated because people just don't care about others the way they did in the past. Society has become more ego and excuse driven, selfish, entitled, violent and more complacent (lazy) than ever before, and this has much to do with a lack of positive role models in the media and in our government. The "what's in it for me?" attitude drives many people's behavior, and responsibility, accountability and selflessness have become more the exception than the rule. The author and presenter of this session, Mark S. Lewis, asserts that if we would all just GIVE A DAMN about others, we could lead by example and change the business climate and our society for the better. The author offers suggestions for simple ways to affect change, and he challenges each of us to join the GIVE A DAMN revolution to help make our world a better place.
Presenter bio: Mark received his Finance degree from Boston College and an MBA in Marketing from Tulane. After working at IBM for 13 years, he formed an Internet company in 1994, which was sold 3 years later. He started another company, merged it with two other companies, and in 2000 became the Rising Tide Small Business of the Year in Technology. In 2002, Mark became President of the Louisiana Technology Council; in 2005 he was selected as Technology Leader of the Year as Louisiana went from 49th to 32nd in technology employment. Mark regularly appears on a monthly TV program called Digital Gumbo, which helps promote local technology companies. Mark is now Managing Director of Simmons & White, a coaching/consulting firm, and he runs the Louisiana IT Symposium for CIOs, CTOs, VPs of IT and their direct reports. Mark recently published his book, GIVE A DAMN!, which discusses the challenges facing society and business and how organizations can create a new attitude in their organization.
Mark S. Lewis

Tuesday, November 7, 09:50 - 10:50

202 (EMT): Enterprise Dilemmas: Innovation on Legacy

Room: Beauregard
9:50 Enterprise Dilemmas: Innovation on Legacy
EMT
Innovate or die, digital transformation, move quickly or be beaten. These are all key phrases we've been hearing since the digital revolution of the last decade, and there are clearly good examples of it, whether Uber, Amazon, or Airbnb. We've heard it all before. These new natively digital models clearly have advantages, as they are not just born digital but also born in the cloud, created without the thinking (or baggage) of prior businesses. Yet they still rely upon traditional enterprises to deliver their digital services; examples include credit card processors, logistics companies, freight transport, supply chain, and computer hardware manufacturers. These traditional enterprise businesses, in turn, rely upon specialized systems, built over decades, often created on platforms which come with the burden of being crafted in a different time. Today these systems are called "legacy," but there is often a logical reason for their perseverance and reliability in today's enterprises. These businesses themselves are under threat and feel the pressure to evolve and innovate at a pace which is uncomfortable to them. How does a traditional company or organization balance the needs of their existing business and customers, while trying to reinvent themselves? We will take a look at these problems and how they manifest themselves in culture, people, process, and of course technology. We will explore some of the ways enterprises are tackling these challenges, whether through agile transformations, DevOps, or balancing by using a bi-modal IT approach as Gartner recommends. Key takeaways:
● Learn how today's digital companies remain linked to traditional enterprise businesses.
● Discover how traditional organizations are accelerating innovation, and what the common pitfalls are.
● What methods are traditional enterprises using to balance two modes of business?
● Legacy systems often underpin the new innovative systems; what are the challenges associated with this approach?
Presenter bio: Jonah Kowall is the Vice President of Market Development and Insights at AppDynamics. Jonah has a diverse background including 15 years as an IT practitioner at several startups and larger enterprises with a focus on infrastructure and operations, security, and performance engineering. His experience includes running tactical and strategic operational initiatives and monitoring of infrastructure and application components. Jonah previously worked at Gartner as a research Vice President, specializing in availability and performance monitoring and IT operations management. His research focused on IT leaders and CIOs and he has spoken at many conferences on these topics. Jonah led Gartner's influential application performance monitoring and network performance monitoring and diagnostics magic quadrants.
Jonah Kowall

204 (PERF): How to build "Cloudy" Continuous Performance Pipeline as a Service

Room: Pointe Coupe
9:50 How to build "Cloudy" Continuous Performance Pipeline as a Service
PERF
In this day and age, IT is about automation and self-service. The same is true for Performance Engineering. In this session I will talk about how to leverage cloud technologies to build a continuous performance environment that can be used as a service by developers, architects and DevOps teams. This not only allows for Performance as a Service but also enables you to "Shift-Left" performance into earlier stages of the delivery pipeline.
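A minimal sketch of the kind of automated gate such a pipeline can run on every build (the load-test hook and threshold below are hypothetical, not the presenter's tooling):

    import math, sys

    def run_load_test():
        # Placeholder hook: a real pipeline would trigger its load tool here and
        # return the measured response times in milliseconds.
        return [120.0, 135.0, 128.0, 410.0, 131.0, 125.0, 140.0, 122.0, 119.0, 133.0]

    P95_BUDGET_MS = 300.0                             # the performance budget for this service

    samples = sorted(run_load_test())
    rank = max(1, math.ceil(0.95 * len(samples)))     # nearest-rank 95th percentile
    p95 = samples[rank - 1]

    if p95 > P95_BUDGET_MS:
        print(f"FAIL: p95 {p95:.0f} ms exceeds budget {P95_BUDGET_MS:.0f} ms")
        sys.exit(1)                                   # break the build so the regression is caught early
    print(f"PASS: p95 {p95:.0f} ms is within budget")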
Presenter bio: Andreas has been working in software quality for the past 15 years helping companies from small startup to large enterprise figuring out why their current application falls short on quality and how to prevent quality issues for future development. He is a regular speaker at international conferences, meetups & user groups. He has done DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West or STPCon in the recent years. Besides being excited about software quality he is also an enthusiastic salsa dancer
Andreas Grabner

205 (CAP): ITIL Capacity Management for the Newbie

Room: LaFourche
9:50 ITIL Capacity Management for the Newbie
CAP
Capacity Management is a daunting task for a newbie or for newly formed groups involved in the process. This session covers the basic concepts of Capacity Management and provides practical examples to help the newbie get started. The items to be covered in the presentation are:
• The ITIL definition of Capacity Management
• How Capacity Management fits into other ITIL activities
• What is the difference between Capacity Management and Performance Management?
• What information do "I" need to be successful in producing my Capacity Management Plan?
• How do I talk to business stakeholders about Capacity Management in order to get the information I need and their buy-in to the process?
• What is a baseline and how does it affect forecasting?
• What are the different types of forecasts I can use, and when is each type appropriate?
• Examples of Capacity Management scenarios for mainframe, virtualized and distributed systems
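As a small illustration of the baseline-and-forecast idea in the outline above (data and method are illustrative only):

    import statistics

    monthly_peak_cpu_pct = [52, 54, 57, 58, 61, 63, 66, 68]    # baseline: last 8 months (invented)
    months = list(range(len(monthly_peak_cpu_pct)))

    # Ordinary least-squares trend line, computed by hand.
    mean_x = statistics.mean(months)
    mean_y = statistics.mean(monthly_peak_cpu_pct)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, monthly_peak_cpu_pct))
             / sum((x - mean_x) ** 2 for x in months))
    intercept = mean_y - slope * mean_x

    for ahead in (3, 6, 12):
        forecast = intercept + slope * (months[-1] + ahead)
        print(f"{ahead:2d} months out: ~{forecast:.0f}% peak CPU")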
Presenter bio: Jamie has been an IT professional since 1998 after graduating from the University of Kent with a BSc in Management Science. After initially working on UNIX systems as an Operator and then a Systems Administrator, he joined Metron in 2002 and has been working on Capacity Management projects and supporting Metron's Athene tool ever since. Jamie is Metron's Product Manager with extensive IT experience, specifically within Capacity Management of virtualized and distributed systems.
Jamie Baker

206 (EMT): Best Paper CMG India: Performance Anomaly Detection & Forecasting Model (PADFM) for eRetailer Web application

Room: Feliciana West
9:50 Performance Anomaly Detection & Forecasting Model (PADFM) for eRetailer Web application
EMT
With high performance becoming a mandate, its impact and the need for sophisticated performance management are realized by every e-business. Though Application Performance Management (APM) tools have brought down performance problem diagnosis time to a great extent, these tools don't actually help in detecting anomalies in the production environment (online or offline mode) or in making forecasts of server performance metrics for capacity sizing. Hence, a robust performance anomaly detection and forecasting solution is in demand, to detect anomalies in the production environment and to provide forecasts of server resource demand to support server sizing. This paper deals with the implementation of a Performance Anomaly Detection and Forecasting Model for an online retailer business application using statistical modeling and machine learning techniques, which has yielded multi-fold benefits to the business.
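For context, the sketch below shows one generic detection technique in the same spirit, a rolling z-score test; it is illustrative only and is not the paper's PADFM model.

    import statistics

    def anomalies(series, window=12, threshold=3.0):
        """Flag points more than `threshold` standard deviations from the trailing window."""
        flagged = []
        for i in range(window, len(series)):
            history = series[i - window:i]
            mu = statistics.mean(history)
            sigma = statistics.pstdev(history) or 1e-9     # guard against a zero-variance window
            if abs(series[i] - mu) / sigma > threshold:
                flagged.append((i, series[i]))
        return flagged

    cpu_pct = [40, 42, 41, 43, 44, 42, 41, 43, 42, 44, 43, 42, 41, 90, 43, 42]   # invented series
    print(anomalies(cpu_pct))    # -> [(13, 90)]: the spike is flagged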
Presenter bio: Ramya R Moorthy has 14+ years of experience in application performance analysis and capacity planning. She is the Co-Founder of QA EliteSouls LLC (US) and Founder of EliteSouls LLP (India). She has done consulting for several clients across business domains, solving their complex performance and capacity problems. Performance prediction analysis, performance modelling and predictive performance analytics are her key areas of interest. She enjoys solving technical problems for assuring system performance, scalability and capacity. She runs an online academy for performance engineers. She likes teaching and mentoring to build elite professionals for the future. She is an active blogger, conference speaker and writer. She is part of the CMG India board of directors 2017.
Ramya Ramalinga Moorthy

207 (MFR): PANEL: Mainframe Expert Panel

Room: Feliciana East
9:50 PANEL: Mainframe Expert Panel
MFR
Bring your questions! This panel will feature the best mainframe performance experts anywhere. Submit questions early to john.baker@epstrategies.com
Presenter bio: Over 25 years in the IT industry as both a customer and consultant. As a customer, John designed, implemented and maintained many critical projects such as WLM Goal Mode and GDPS/Data Mirroring. He has extensive experience with many performance analysis tools and techniques at the hardware, OS, and application levels. As a consultant, John has assisted many of the world's largest datacenters with their z/OS performance challenges and held Subject Area Chair positions with CMG for both Storage and Capacity Planning for several years. John has hosted many sessions at CMG, SHARE, etc. as well as several regional user groups. In 2017, John joined forces with internationally-recognized performance specialist Peter Enrico.
Presenter bio: Peter Enrico has strong and diverse experience with the IBM zArchitecture platforms, and a solid background in z/OS, Workload Manager, Parallel Sysplex, UNIX System Services, and WebSphere e-business performance. Peter also has extensive experience measuring, analyzing, and tuning the performance of z/OS systems, Sysplexes, and subsystems. Peter's abilities extend beyond just z/OS performance and capacity planning. He is considered a highly qualified and effective communicator and seminar instructor. For details of Peter's performance workshop and seminar schedule, his services, and access to past papers and presentations, please visit www.pivotor.com or www.epstrategies.com.
John Baker, Peter Enrico

Tuesday, November 7, 10:50 - 11:00

CONF: BEVERAGE BREAK

Tuesday, November 7, 11:00 - 11:30

222a (PERF): Using Performance Measurements to Diagnose Concurrent Programming Issues

Room: Beauregard
11:00 Using Performance Measurements to Diagnose Concurrent Programming Issues
PERF
Performance tests of transaction-oriented systems should be structured to reveal whether average CPU, bandwidth, and other resource utilizations are linear with respect to offered transaction rates and constant as long as the loads are held constant. This behavior is predicted by queueing network models of performance when the assumptions that enable the system to reach steady state hold. Deviations from this behavior are indicative of concurrent programming issues that violate the steady state assumptions and undermine performance even when resources might not otherwise be scarce. Examples of such issues include proneness to deadlock, the repeated onset of deadlock followed by recovery, repeated polling without respite, and the incorrect implementation of synchronization or mutual exclusion. Heavy processor utilization in the absence of applied load, e.g., before the start or after the end of a performance test, indicates the occurrence of repeated polling without respite. The degradation of response time or throughput (sometimes accompanied by an increase in transaction failure rates) when the number of available cores is increased indicates the incorrect use of synchronization methods resulting in a lack of thread safety. Repeated resource idle times even in the presence of a constant load can indicate the occurrence of deadlock or the failure of a synchronization message to be delivered, followed by a timeout. We illustrate many of these behavior patterns with measurement data.
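A minimal sketch of the linearity check described above (measurements invented for illustration): under steady state the utilization law U = X * S predicts CPU utilization to grow linearly with the offered rate, so large deviations are worth investigating.

    # (offered transactions/sec, measured CPU utilization) -- invented measurements
    measurements = [
        (100, 0.10),
        (200, 0.21),
        (400, 0.41),
        (800, 0.93),    # out of line with the linear trend
    ]

    # Estimate the per-transaction service demand S from the lightest load point.
    service_demand = measurements[0][1] / measurements[0][0]

    for rate, measured_u in measurements:
        predicted_u = rate * service_demand                  # utilization law: U = X * S
        deviation = (measured_u - predicted_u) / predicted_u
        flag = "  <-- investigate" if abs(deviation) > 0.10 else ""
        print(f"X={rate:4d}/s  predicted U={predicted_u:.2f}  measured U={measured_u:.2f}{flag}")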
Presenter bio: Andre B. Bondi is the founder of Software Performance and Scalability Consulting LLC. During the fall of 2016, he was a visiting professor at the University of L'Aquila. He is a recipient of CMG's A. A. Michelson Award. Until 2015, he was a Senior Staff Engineer at Siemens Corp., Corporate Technologies in Princeton. His book, Foundations of Software and Systems Performance Engineering, was published by Addison-Wesley in August 2014. He has worked on performance issues in many domains, including telecommunications and train control. Prior to joining Siemens, he held senior performance positions at two startup companies. He spent more than ten years working performance and operational issues at AT&T Labs and Bell Labs. He taught courses in performance, simulation, operating systems, and architecture at UCSB for three years. He holds a Ph.D. in computer science from Purdue University, and an M.Sc. in statistics from University College London. He holds nine US patents.
Andre Bondi

224a (MFR): Mainframe Capacity Management - Time to Come Out of the Silo

Room: Pointe Coupe
11:00 Mainframe Capacity Management - Time to Come Out of the Silo
MFR
The mainframe has traditionally adhered to its own set of procedures, processes, and reporting, as it was a mature discipline within an organization. That has often led to a silo mentality; in many cases, new technologies were not considered in its purview. These new technologies include Cloud, Hyper-Convergence computing, and Big Data. Capacity management for the mainframe, to be successful, needs to stretch out its arms and embrace these new platforms: they are integral to the success of the enterprise. The disciplines developed for the mainframe can be used to assist in the maturing of these other areas if everyone comes at it with an open mind. The skillsets needed by Capacity Managers these days must broaden, as there are few individuals to perform that function. The landscape is continually evolving within the enterprise, and we need to embrace and contribute to all areas, not focus on just our discipline. Our session will discuss:
• The good aspects of mainframe Capacity Management
• The bad aspects needing resolution
• New architectures affecting the mainframe
• The changing organization
• How does Capacity Management evolve over the next few years?
• How does mainframe Capacity Management fit into the new world?
Presenter bio: Charles Johnson has been in the Information Technology industry for over 30 years. This has included working at a number of Fortune 500 organizations in different business sectors, including insurance, financial and automotive. Charles has been involved in Performance and Capacity for zOS for the majority of his career, both as a technician and manager. Charles is currently a Principal Consultant with Metron-Athene, Inc., a worldwide software organization specializing in Performance and Capacity Management.
Charles W. Johnson, Jr.

Tuesday, November 7, 11:00 - 12:00

225 (CAP): Improved IT Operations Management for IT Managers and Capacity Planners

Room: LaFourche
11:00 Improved IT Operations Management for IT Managers and Capacity Planners
CAP
IT and business managers are increasingly concerned with the rising costs associated with their highly complex and ever-growing mainframe and distributed systems datacenters. They are also concerned about controlling outages, and mitigating the lack of transparency the various business units have into capacity and cost drivers - including their outsourced environments. New tools with modernized reporting techniques allow you to merge basic business data with the abundance of IT data already being collected, to improve the transparency into the relationship between IT costs and business activities. This new-found clarity of IT resource usage and cost, down to the business unit level, provides a common language between IT managers and business managers. It also allows IT spending decisions to become fact-based - removing the guess-work and uncertainty common in today's over-burdened IT organizations. Join us as we discuss:
• Understanding cost drivers for both mainframe and distributed components of your data center
• Using cloud-based analytics to fix IT problems before they affect the business
• Improving oversight of outsourcers and billing
Presenter bio: Andrew Armstrong is the Chief Customer Officer at DataKinetics, the leader in mainframe performance and optimization solutions. As CCO, one of Andrew's most important responsibilities is to strive to understand the nuances of the Fortune 1000, to be aware of the IT challenges that they face, and the strengths and weaknesses of the solutions that are available to them. An experienced C-level executive, he has put in place the right products and partnerships, and the right sales and marketing programs, and has transformed businesses to attain continuous profitability and growth while maintaining customer satisfaction as a top-level concern.
Presenter bio: Over 25 years in the IT industry as both a customer and consultant. As a customer, John designed, implemented and maintained many critical projects such as WLM Goal Mode and GDPS/Data Mirroring. He has extensive experience with many performance analysis tools and techniques at the hardware, OS, and application levels. As a consultant, John has assisted many of the world's largest datacenters with their z/OS performance challenges and held Subject Area Chair positions with CMG for both Storage and Capacity Planning for several years. John has hosted many sessions at CMG, SHARE, etc. as well as several regional user groups. In 2017, John joined forces with internationally-recognized performance specialist Peter Enrico.
Andrew Armstrong, John Baker

Tuesday, November 7, 11:00 - 11:30

226a (EMT): Blockchain Use Case Best Practices

Room: Feliciana West
11:00 Blockchain Use Case Best Practices
EMT
What are some of the best use cases for Blockchain? How do I determine if a workload is a good fit? What are the performance implications? How can Blockchain reduce time, cost, and risk for my organization? Attend this session to find out!
Presenter bio: Elisabeth Stahl is an IBM Distinguished Engineer and has been working in IT Infrastructure Optimization for over 25 years. She is a member of the IBM Academy of Technology, IEEE Senior Member, Computer Measurement Group Program Chair and is on the Board of Directors at The Music Settlement. Elisabeth received a BA in Mathematics from the University of Pennsylvania and an MBA from NYU. Follow her on Twitter @ibmperformance .
Elisabeth Stahl

227a (EMT): IOT Analytics at the Edge

Room: Feliciana East
11:00 IOT Analytics at the Edge
EMT
IoT deployments typically incorporate a simple analytics framework in which sampled data is sent from "Things", either devices or sensors, to the Cloud for storage and analysis. This approach does not scale well either for large numbers of devices or for data that is sampled at high frequency. Embedded Device Analytics or Edge Analytics architectures incorporate part of the analytics functionality into the device or sensor; this dramatically reduces the volume of data sent to, and stored in, the Cloud, and enables devices to be more intelligent. Implementing analytics functionality in real time in resource-constrained devices can be a challenge; however, this approach has been successfully used in a range of applications, for example in multimedia devices. This presentation draws on extensive field experience in applying edge analytics to specific application areas and outlines approaches to implementation in the broader IoT context.
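A minimal sketch of the data-volume argument (device id, sampling rate and payload format are illustrative assumptions): summarize high-frequency samples on the device and send only the summary to the cloud.

    import json
    import random
    import statistics

    random.seed(7)
    raw_samples = [20.0 + random.gauss(0, 0.3) for _ in range(6000)]   # e.g. 100 Hz for one minute

    edge_summary = {
        "sensor": "temp-01",        # hypothetical device id
        "window_s": 60,
        "n": len(raw_samples),
        "min": round(min(raw_samples), 2),
        "max": round(max(raw_samples), 2),
        "mean": round(statistics.mean(raw_samples), 2),
        "stdev": round(statistics.stdev(raw_samples), 2),
    }

    payload = json.dumps(edge_summary)
    print(payload)
    print(f"sent {len(payload)} bytes instead of roughly {len(raw_samples) * 8} bytes of raw readings")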
Presenter bio: Founder and CEO of Telchemy Incorporated. Previously: CTO of Hayes; Director, Research and Strategy, Dowty Communications; System Architect, British Telecom.
Alan Clark

Tuesday, November 7, 11:35 - 12:05

222b (PERF): Machines are for Answers, Humans are for Questions

Room: Beauregard
11:35 Machines are for Answers, Humans are for Questions
PERF
Big data is the big thing, and AI and predictive analytics will put us all out of a job, or so some think. We all know that computers are far better than humans at processing large amounts of data. They are also, if artificially intelligent enough, especially good at doing the things you tell them to do that would otherwise require human intelligence and unrealistic amounts of time. So instead of human vs. machine, it can become human and machine vs. problem. This is especially needed for predicting and diagnosing IT infrastructure performance and cost-efficiency problems. These problems are simply too time consuming for human analysts, who cannot proactively search the vast and complex data sources for answers that machines can provide in seconds. In this presentation, we will show how enabling the machine to utilize domain knowledge and workload information, also known as human intelligence, can be used in modern IT Operations Analytics (ITOA) to automatically highlight the areas that are the most important for a human eye and brain to spend time on. This discussion applies to any IT environment; our examples will be from z/OS and VMware SAN.
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer

224b (PERF): Rosetta Stone - How to Speak the Language of Revenue in Performance

Room: Pointe Coupe
11:35 Rosetta Stone - How to Speak the Language of Revenue in Performance
PERF
For years, web performance has been the domain expertise of the IT and technology organizations. Yet this information, when coupled with the appropriate business metrics, can be a powerful asset to any business. This session will discuss and present how to use the language of web performance to speak with the business, especially around how marketing runs campaigns & promotions. Attending this session will arm you with the information needed to make yourself valuable to the business and marketing organizations and to speak their language and understand their metrics.
Presenter bio: With nearly 20 years working in the eCommerce performance space, Dan is responsible for taking the first revenue-focused solution in the digital performance management space to market as a trusted advisor for Blue Triangle's strategic customers, and changing the way eCommerce organizations approach today's eCommerce challenges. Dan can be reached on Twitter @DanBoutinGNV
Dan Boutin

226b (EMT): Unleash Your Presentation Superpowers!

Room: Feliciana West
11:35 Unleash Your Presentation Superpowers!
EMT
How does an awkward geek (and a non-native English speaker to boot!) turn into someone who gets Thank You notes from senior management for building effective technical presentations? I did it by learning and practicing these presentation superpowers! Best of all, you will be able to do the same!
Presenter bio: Anoush Najarian is a Software Engineering Manager at MathWorks in Natick, MA where she leads the MATLAB Performance Team. She holds Master's degrees in Computer Science and Mathematics from the University of Illinois at Urbana-Champaign, and an undergraduate degree in CS and applied math from the Yerevan State University in her native Armenia. Anoush has been serving on the CMG Board of Directors since 2014, and has served as the Social Media chair for the CMGimPACt conferences.
Anoush Najarian

227b (MFR): The Latest IBM Z Performance Brief

Room: Feliciana East
11:35 The Latest IBM Z Performance Brief
MFR
The Large System Performance Reference (LSPR) ratings of the latest IBM Z mainframe will be discussed along with its hardware performance drivers; topics include processors, cache and memory topology, SMT, subcapacity models, and workload variability.
Presenter bio: David has been a part of the IBM Z Hardware Performance and Design teams for 17 years. In his current role he is a client-facing lab representative for system performance inquiries and situations worldwide and co-develops the Large Systems Performance Reference (LSPR). He co-developed five generations of System z core performance models that have helped to shape the hardware designs and supply projection data to brand/marketing. He is a co-author of ~30 patents.
David Hutton
pdf file

Tuesday, November 7, 12:10 - 12:15

CONF: Roundtable Focus Group Luncheon (12:15 - 1:15 PM)

Room: St. Landry

Attend our Tuesday lunch and tell us what you want and need from CMG in 2018. Get your lunch through the main buffet and meet us in St. Landry on the 9th floor.

Tuesday, November 7, 12:15 - 13:15

CONF: LUNCH

Louisiana Ballroom I

Tuesday, November 7, 13:15 - 14:15

242 (MFR): To MIPS or Not to MIPS, That is the CP Question!

Room: Beauregard
1:15 To MIPS or Not to MIPS, That is the CP Question!
MFR
All workloads are not created equal, nor are all processors. Learn how workload characteristics (instruction mix, memory footprint, IO rates) battle it out with the processor design (micro, gigahertz, cache structures, Nway) to produce a variety of processor capacity relationships for IBM's z Systems processors. And everyone wants to boil all this mayhem down to "one number fits all" - yikes. Utilization, LPAR configuration and sysplex affect capacity ratios too? They sure do, and they will be discussed as well. After seeing this presentation you may never want to use MIPS for capacity planning again ... just kidding ... but you will find out when it's okay to use MIPS and when you need to dig a little deeper.
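To make the "one number fits all" caveat concrete, here is a small hypothetical calculation (the throughput figures are invented, not IBM LSPR data) showing how the capacity ratio between two processors shifts with workload:

```python
# Hypothetical throughputs (transactions/sec) for an old and a new processor
# under three workload profiles; all numbers are invented for illustration.
itr_old = {"low_rni": 1000.0, "average_rni": 900.0, "high_rni": 800.0}
itr_new = {"low_rni": 1500.0, "average_rni": 1250.0, "high_rni": 1040.0}

for workload in itr_old:
    ratio = itr_new[workload] / itr_old[workload]
    print(f"{workload:12s} capacity ratio = {ratio:.2f}")
# A single MIPS number would pick one of these ratios and apply it to every
# workload, over- or under-sizing the others.
```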
Presenter bio: Gary is an IBM Distinguished Engineer in z Systems Design and Performance. Since joining IBM in 1974, he has been involved in the design and evaluation of the major system resource managers of z/OS, Parallel Sysplex, coupling facilities and high-end servers. Gary has developed a number of techniques and methodologies for performance analysis and capacity planning which have been used both to direct product development and to assist clients with performance management of their systems. He holds eleven patents and has spoken at numerous regional, national and international conferences. CMG recognized Gary as the 2012 winner of the A.A. Michelson Award.
Gary M King
pdf file

244 (CAP): Meeting Web Application Performance Service Level Requirements Head-on

Room: Pointe Coupe
1:15 Meeting Web Application Performance Service Level Requirements Head-on
CAP
Performance service level requirements are often included in web application development contracts and SLAs without much thought given to the measurements and analysis needed to demonstrate compliance with those requirements. Specifications are often so vague that they lead to the customer and vendor disagreeing over what data should be used to determine compliance, or agreeing on a data collection and analysis scheme that is inconsistent with fundamental statistical principles. This paper looks at what constitutes a good web application performance service level specification and how compliance with that requirement can be demonstrated with a measurement mechanism that both mirrors it and conforms to standard statistical inference methods.
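As a minimal sketch of the kind of measurement-versus-specification check the paper argues for, assuming a hypothetical SLA of "95% of page loads complete within 3 seconds", one might compute the agreed percentile directly from the sample rather than relying on the mean:

```python
import math

def nearest_rank_percentile(sorted_vals, p):
    """Nearest-rank percentile of an already sorted sample."""
    k = max(0, math.ceil(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[k]

samples = sorted([1.2, 2.8, 0.9, 3.4, 1.7, 2.1, 2.6, 1.4, 2.9, 3.1])  # seconds, invented
sla_seconds, sla_percentile = 3.0, 95

p95 = nearest_rank_percentile(samples, sla_percentile)
share_within = sum(v <= sla_seconds for v in samples) / len(samples)
print(f"95th percentile = {p95:.1f}s; {share_within:.0%} of samples within {sla_seconds}s")
# Compliance should be stated against the agreed percentile, window and sample size,
# not against an average that can mask the tail.
```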
Presenter bio: Jim has worked 40 years in the telecommunications and computer industries for GTE, Tandem Computers, and Siemens, and currently is the Capacity Planner for the State of Nevada. At GTE he worked in both Data Center Capacity Planning and Digital Switching Traffic Capacity determination. While at Siemens he obtained EU and US patents for a traffic overload control mechanism used in multiple products including a VoIP switch. He holds BS and MS degrees in Operations Research from The Ohio State University.
James Brady
pdf file

245 (MFR): Proactive Performance Management of FICON Storage Networks with CUP Diagnostics and the IBM z/OS Health Checker

Room: LaFourche
1:15 Proactive Performance Management of FICON Storage Networks with CUP Diagnostics and the IBM z/OS Health Checker
MFR
The FICON Control Unit Port (CUP) provides an in-band management interface, defined by IBM, that specifies the channel command words (CCWs) a FICON host can use to manage the switch. Several new advanced diagnostic features have been added to FICON CUP over the past two years, and these new features have been extensively integrated with the IBM Health Checker for z/OS. This paper will review some basic FICON CUP functionality and then introduce those new diagnostic features.
Presenter bio: Dr. Steve Guendert is z Systems Technology CTO for Brocade Communications, where he leads the mainframe-related business efforts. He was inducted into the IBM Mainframe Hall of Fame in 2017 for his contributions in the area of I/O technology. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), and a member of the Computer Measurement Group (CMG). He is a former member of both the SHARE and CMG boards of directors. Steve has authored over 50 papers on the subject of mainframe I/O and storage networking, as well as two books.
Stephen Guendert
pdf file

246 (EMT): Don't Put That "Thing" on our IoT System: SPE for IoT

Room: Feliciana West
1:15 Don't Put That "Thing" on our IoT System: SPE for IoT
EMT
Developers and architects are seeing how different Internet of Things (IoT) infrastructures are from other cloud based or virtual applications. As Software Performance Engineering (SPE) professionals, we know the effort and resistance to building design-stage models. Now is the time for using faster and easier performance predictions and embedding SPE in the development process. With IoT come architectural and design decisions regarding data analysis and storage, security and encryption, network constraints, and best processing placement: at the edge vs. the core vs. the cloud. This presentation explains SPE methods for translating Model-Based Engineering (MBE) designs into performance models that can predict performance and evaluate design options early in the development process. For demonstration, we present a case study based on a sensor network application that requires encryption to prevent security breaches. The key to the successful implementation of this system is the automated translation of the design into performance models that evaluate software decisions and hardware infrastructure options.
Presenter bio: Dr. Smith, a principal consultant of the Performance Engineering Services Division of L&S Computer Technology, Inc., is known for her work in defining the field of SPE and integrating SPE into the development of new software systems. Dr. Smith received the Computer Measurement Group's prestigious AA Michelson Award for technical excellence and professional contributions for her SPE work. She also authored the original SPE book: Performance Engineering of Software Systems, published in 1990 by Addison-Wesley, and approximately 100 scientific papers. She is the creator of the SPE·ED™ performance engineering tool. She has over 30 years of experience in the practice, teaching, research and development of the SPE performance prediction techniques.
Presenter bio: Amy has over 25 years of hands on experience in performance engineering, infrastructure capacity planning, and modeling. She has worked with hundreds of Fortune 1000 companies, assisting them to efficiently manage their IT infrastructure while improving end-user response times. One of her specialties is coordinating with IT and business partners to ensure that applications and IT services meet service level agreements and performance requirements cost effectively. Amy's technology focus over the last 8 years has been in Cloud and digital infrastructures (a holistic view of the entire service delivery stack from the business, application, IT, to facilities). She has held multiple positions at The Uptime Institute, 451 Research and HyPerformix as a leader in consulting practices for Digital Infrastructure capacity planning and performance engineering.
Connie Smith, Amy Spellmann

247 (PERF): Metrics and Methods that avoid the ITR Trap

Room: Feliciana East
1:15 Metrics and Methods that avoid the ITR Trap
PERF
There are several posts on LinkedIn and an article in IBM Systems Magazine about the ITR Trap. (Here is a link: http://www.ibmsystemsmag.com/mainframe/Business-Strategy/Competitive-Advantage/ITR-performance/ ). This paper develops metrics, estimators, and methods by taking a deeper dive into some real utilization measurements to gain insight into how to avoid the trap.
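For readers unfamiliar with the term, a small worked example of the commonly used ITR relationship (internal throughput rate, i.e. throughput normalized to processor-busy time) shows where the "trap" comes from; the numbers are invented.

```python
etr = 500.0          # external throughput observed, transactions/sec (invented)
utilization = 0.40   # fraction of CPU busy while delivering that ETR (invented)

itr = etr / utilization
print(f"ITR = {itr:.0f} tx/sec per fully busy processor (extrapolated)")
# The trap: quoting ITR assumes throughput scales linearly to 100% busy, which real
# utilization measurements (cache contention, queuing) often contradict.
```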
Presenter bio: Joe Temple worked for IBM for nearly 4 decades and retired at the end of 3Q2013 as an IBM Distinguished Engineer. After a 15-year career in hardware design and a 15-year career in pre- and post-sales client technical support, he spent the last decade working on and leading IBM efforts to determine the relative capacity of servers, compare server architectures, and develop and deploy sizing and "Fit for Purpose" platform selection methods. Joe continues to work in this area and started Low Country North Shore Consulting shortly after his retirement from IBM. It is so named because he lives both in the "Low Country" of South Carolina and on the North Shore of Long Island. He spends his spare time walking beaches with his wife Rae, making frequent attempts to play golf, and accumulating hours towards a USCG "Operator of Uninspected Passenger Vessels" license. He makes a batch of hard cider every year.
Joe Temple
pdf file

Tuesday, November 7, 14:30 - 14:45

2L2a (MFR): TBD

Room: Beauregard

2L5a (EMT): Applying Artificial Intelligence for Performance Engineering

Room: LaFourche
2:30 Applying Artificial Intelligence for Performance Engineering
EMT
Environments are becoming very complex thanks to new technologies and architectures. Change happens more rapidly. Performance engineering is either part of the pipeline or happens in production. This work can no longer be done by looking at static dashboards and drawing conclusions based on years of experience. Performance engineering has to leverage new approaches such as anomaly detection, machine learning and artificial intelligence. In this session I talk about how Dynatrace leverages AI to scale and automate many performance engineering tasks.
Presenter bio: Andreas has been working in software quality for the past 15 years, helping companies from small startups to large enterprises figure out why their current applications fall short on quality and how to prevent quality issues in future development. He is a regular speaker at international conferences, meetups & user groups, and has presented at DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West and STPCon in recent years. Besides being excited about software quality, he is also an enthusiastic salsa dancer.
Andreas Grabner
pdf file

2L6a (CAP): Adventures with Charge Back and the Value of a Useful Consistent Lie

Room: Feliciana West
2:30 Adventures with Charge Back and the Value of a Useful Consistent Lie
CAP
This is a summary of our adventures with chargeback, and the value of "useful consistent lies". Let's start by stating that chargeback is easy. Trivial, even. It is simple math: your recovery target divided by the number of items expected to be 'sold'. There you have it. That is it. You are done. So what is a useful consistent lie, and what is its value? Well, it is everything around you and how you understand it. When your understanding is sufficient for the conversation, you use that understanding, that useful consistent lie, within the conversation, and the value of the useful consistent lie is that the conversation can now be conducted with a reasonable degree of mutual understanding. Like this: what time is it, right now? Whatever your answer, it is not the most correct answer to the exclusion of all others. It IS a useful consistent lie. The time may be 12:18 in the afternoon by my digital watch, but you may have said it is quarter after noon, lunchtime, mid day, or any other of a host of answers. If you want the most correct answer you should ask a physicist and set aside a weekend for the nuance of the 'right answer'. So how do these ideas fit together? In a 15-minute session we attempt to explain that. At the end we expect you will realize that chargeback is easy except for the people. But you should try it anyway.
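The "simple math" referred to above fits in a few lines; the figures below are invented purely to illustrate the division, not taken from the session.

```python
recovery_target = 1_200_000.0   # dollars to recover for the year (invented)
expected_units = 40_000.0       # units expected to be "sold", e.g. CPU-hours (invented)

rate = recovery_target / expected_units
print(f"Chargeback rate: ${rate:.2f} per unit")
# The hard part, as the session argues, is the "useful consistent lie":
# agreeing on what a unit is and measuring it consistently.
```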
Presenter bio: Mr. Ben is a senior consultant at Movìri, the original developer of what has become BMC TrueSight Capacity Optimization. Before consulting, he was a power user of the capacity tool, presenting several sessions at BMC's Global User Conferences. He has presented at security conferences and audit conferences, was published in a military journal, and was even mentioned in a tech article of the New York Times back in April 1996. He has an eclectic background in US Navy Submarines, IT security & audit, IT management and now capacity management. His presentations are entertaining and actionable with the express intent to help you suck less. @MrBenHoney
Benjamin Davies
pdf file

2L7a (CAP): Business Intelligence in Capacity Management

Room: Feliciana East
2:30 Business Intelligence in Capacity Management
CAP
This paper explores similarities in the tools, techniques, and philosophies used in Business Intelligence practices and how they can be leveraged to improve Capacity Management processes. Specifically, it discusses leveraging data beyond component utilization and performance metrics to provide a complete picture of demand, by adding business demand and response time metrics to the capacity management information system.
Presenter bio: Craig Leikis, MScIT is the Asst. VP IT System Analytics at GM Financial (GMF) where he founded the Capacity Management team and established ITIL-based capacity management in 2014. He has over 15 years of experience in UNIX system administration, performance management, and capacity planning for large-scale high availability computing environments at Fortune 100 companies.
Craig Leikis
pdf file

Tuesday, November 7, 14:50 - 15:05

2L2b (PERF): Performance Testing Approach to AWS kinesis Stream and Loadrunner

Room: Beauregard
2:50 Performance Testing Approach to AWS kinesis Stream and Loadrunner
PERF
In the wake of an increasingly digital economy, businesses are racing to build operational knowledge around the vast amounts of data they produce each day. Now, with data at the center of almost every business function, developing practices that work with data is critical, regardless of your organization's size or industry. AWS Kinesis Stream provides a platform to consume these data from various sources and process them for analytical purposes. In the current age of digital marketing, the speed of data processing is critical to success. The AWS Cloud provides a broad set of infrastructure services, such as computing power, storage options, and databases, delivered as a utility: on-demand, available in seconds, with pay-as-you-go pricing. Amazon Kinesis is a platform for streaming data on AWS, offering powerful services to effortlessly load and analyze streaming data, and also enabling customized builds of streaming data applications for specialized needs. To ensure timely execution and to determine the proper infrastructure to host the Kinesis Stream, thorough performance testing is critical. However, the data message flow to the Kinesis Stream is authenticated using AES256 encryption, which most performance testing tools (e.g. HP LoadRunner) do not support.
• This paper presents a succinct approach to implementing the performance testing steps, overcoming the authentication issue and executing a load test on the Kinesis Stream.
• The approach explains how to overcome the authentication issue with the help of Node.js and integrate it with LoadRunner to execute the performance test. Node.js is a platform to develop and execute JavaScript code in an AWS environment.
• The paper also provides details on the critical metrics to be monitored from CloudWatch for analyzing Kinesis Stream performance.
In summary, this paper provides an approach for testing a Kinesis Stream with the help of Node.js, along with the benefits of the approach and how it can be utilized with other tools and testing approaches.
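The general approach of delegating the authenticated put to an SDK helper that the load tool then drives can be sketched in a few lines; this example uses the AWS SDK for Python (boto3) purely for illustration and is not the paper's Node.js implementation. It assumes valid AWS credentials in the environment, and the stream name and region are placeholders.

```python
import json
import boto3  # the AWS SDK handles request signing/authentication for us

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is a placeholder

def send_event(stream_name, payload, partition_key):
    """Put one JSON record onto a Kinesis stream and return its sequence number."""
    response = kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(payload).encode("utf-8"),
        PartitionKey=partition_key,
    )
    return response["SequenceNumber"]

if __name__ == "__main__":
    print(send_event("demo-stream", {"event": "page_view", "user": "u123"}, "u123"))
```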
Presenter bio: Devdutta Dasgupta is a performance test engineer with more than 10 years in the industry. He has been involved in performance testing of various applications across different domains and technologies. He is a big enthusiast of Big Data, Cloud technologies and AI, and follows Jeff Barr and Elon Musk and their blogs. He has multiple certifications under his belt, including HP AIS certification in LoadRunner 11, AWS Certified Solution Architect Associate, and Big Data 101.
Devdutta Dasgupta
pdf file

2L5b (EMT): IT & Shadow IT (Embrace or Squash)

Room: LaFourche
2:50 IT & Shadow IT (Embrace or Squash)
EMT
Shadow IT describes IT systems and applications built and used inside departments without explicit organizational approval. It is also used to describe solutions specified and deployed by departments other than the IT department. Shadow IT has been garnering press these days as departments or services within organizations look for ways to develop solutions without adhering to the normal development paradigm. This can be good for bringing innovation, but also bad when safeguards put in place for new applications are bypassed, even for prototypes. Come and join this discussion on how Shadow IT may affect your organization.
Presenter bio: Charles Johnson has been in the Information Technology industry for over 30 years. This has included working at a number of Fortune 500 organizations in different business sectors, including insurance, financial and automotive. Charles has been involved in Performance and Capacity for zOS for the majority of his career, both as a technician and manager. Charles is currently a Principal Consultant with Metron-Athene, Inc., a worldwide software organization specializing in Performance and Capacity Management.
Charles W. Johnson, Jr.
pdf file

2L6b (CAP): Capacity Management Chronicles: What I Learned in My First 10 Years as a Global Consultant

Room: Feliciana West
2:50 Capacity Management Chronicles: What I Learned in My First 10 Years as a Global Consultant
CAP
I recently celebrated my first 10 years in the Capacity Management business and took the chance to assess how much I have learned during an amazing ride across 3 continents, 30+ world-class organizations, 50 different major cities, and 10 market segments. The exodus to distributed systems and open source software, the virtualization wave, the flash storage revolution, the mobile disruption, the rise of the cloud, data becoming big and services shrinking to micro, the promised land of AI Ops: a lot has changed, and not surprisingly I saw that curiosity and drive are distinctive mindset features of all the successful individuals I have met in the industry around the globe. And however different the organizations may appear, some problems, questions, and topics are always there to be addressed, no matter where you are. What does it take to become effective, then? Well, let me share what I have learned over the last 10 years as a global consultant.
Presenter bio: IT professional with 10+ years of experience in the industry; innovator, leader, influencer, analytics and data engineering enthusiast. My experience is characterized by a strong link between academia and the IT industry: I started working as a Politecnico di Milano intern for the Italian Space Agency and then moved into the consulting business, working with the most IT-intensive firms in the world. As Architect and Team Leader I delivered dozens of successful initiatives for several Fortune 100 companies worldwide and mentored several young stars. As Head of Operations, I lead an international team of engineers, support the hiring process, and look for innovation opportunities, acting as the main point of contact for the most important universities in the country. I am responsible for the fulfillment of the company's expected revenue and growth results, reporting directly to the Executive Board.
Andrea Vasco
pdf file

Tuesday, November 7, 15:10 - 15:25

2L2c (PERF): Top Performance Problems found in Large Scale Hybrid-Cloud Applications

Room: Beauregard
3:10 Top Performance Problems found in Large Scale Hybrid-Cloud Applications
PERF
This could be a nice "what we have seen in the field" session where I talk about the most common problems we have detected in modern cloud/micro-service environments.
Presenter bio: Andreas has been working in software quality for the past 15 years, helping companies from small startups to large enterprises figure out why their current applications fall short on quality and how to prevent quality issues in future development. He is a regular speaker at international conferences, meetups & user groups, and has presented at DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West and STPCon in recent years. Besides being excited about software quality, he is also an enthusiastic salsa dancer.
Andreas Grabner
pdf file

2L5c (CAP): The Road to Actionable Intelligence is Paved with Minimum, Average, 95th Percentile and Maximum

Room: LaFourche
3:10 The Road to Actionable Intelligence is Paved with Minimum, Average, 95th Percentile and Maximum
CAP
Using a real-world example, we will explore how reliance on the average of 15-second data slices failed to reveal actionable intelligence. We will encourage the use of other metrics that seem neglected, such as minimum, maximum, and 95th percentile. The example data is minimum idle workers, an overlooked JVM metric that caused problems in our production environment for months but, once found and monitored effectively, eliminated dozens of support incidents a week.
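A small worked example (with invented idle-worker counts) shows why the four summary statistics tell very different stories about the same series:

```python
# Invented 15-second samples of idle workers in a JVM worker pool.
idle_workers = [12, 11, 13, 12, 0, 12, 11, 0, 13, 12,
                12, 11, 0, 12, 13, 12, 11, 12, 12, 0]

idle_sorted = sorted(idle_workers)
average = sum(idle_workers) / len(idle_workers)
p95 = idle_sorted[int(0.95 * (len(idle_sorted) - 1))]  # simple index-based percentile
print(f"min={min(idle_workers)} avg={average:.1f} p95={p95} max={max(idle_workers)}")
# The average (about 9.6) looks healthy; the minimum of 0 is what actually signals
# that the worker pool is periodically exhausted.
```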
Presenter bio: Mr. Ben is a senior consultant at Movìri, the original developer of what has become BMC TrueSight Capacity Optimization. Before consulting, he was a power user of the capacity tool, presenting several sessions at BMC's Global User Conferences. He has presented at security conferences and audit conferences, was published in a military journal, and was even mentioned in a tech article of the New York Times back in April 1996. He has an eclectic background in US Navy Submarines, IT security & audit, IT management and now capacity management. His presentations are entertaining and actionable with the express intent to help you suck less. @MrBenHoney
Benjamin Davies
pdf file

2L6c (CAP): Performance Management Service Level and Activities Calculator

Room: Feliciana West
3:10 Performance Management Service Level and Activities Calculator
CAP
This paper describes the development of a "calculator" that is being used to provide an initial view of the optimum set of activities to manage the performance of an application. To develop Performance Management processes and the resulting calculator, a Performance Management Integrated Project Team (IPT) was formed with members from Operations, Engineering, Network Services, and Applications Development. The outcomes were:
* Agreement on the definition of Performance Management and the ITIL- and SDLC-based Performance Management life cycle
* Determination of organizational roles for the three Performance Management areas: Performance Engineering (PE), Capacity Planning (CP), and Performance Operations (PO)
* Development of service levels/types, activities, organizational roles and responsibilities, and integration points for the three Performance Management areas
The calculator shows Performance Management service types, activities, and roles for the full development lifecycle (SDLC). In the pilot of the calculator, recommended activities were being done in the later life cycle stages, but there was a lack of emphasis on planning and budgeting and on requirements gathering for performance and capacity. This confirmed earlier observations of a lack of Performance Management activities early in the SDLC.
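The core lookup idea behind such a calculator can be pictured as a table keyed by service type and lifecycle stage; the sketch below is hypothetical (invented service levels, stages and activities), not the IPT's actual tool.

```python
# Hypothetical mapping: (service level, SDLC stage) -> recommended activities and
# the responsible Performance Management role (PE, CP or PO).
RECOMMENDED_ACTIVITIES = {
    ("gold", "requirements"): [("PE", "Gather performance and capacity requirements")],
    ("gold", "design"):       [("PE", "Build a design-stage performance model")],
    ("gold", "operations"):   [("PO", "Monitor service levels"),
                               ("CP", "Produce a quarterly capacity plan")],
    ("bronze", "operations"): [("PO", "Basic utilization reporting")],
}

def calculator(service_level, stage):
    return RECOMMENDED_ACTIVITIES.get((service_level, stage), [])

for role, activity in calculator("gold", "operations"):
    print(f"{role}: {activity}")
```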
Presenter bio: Ellen Wingrove has over forty years of experience with infrastructure hardware and software products and services, with a personal technical emphasis on capacity planning, performance engineering, communications middleware, and systems management. Projects have ranged from mainframe channel programming to enterprise IT strategy and from carrier service development to technology contract negotiations. Ellen's experience includes many staff management positions, including Practice Lead of 50+ Implementation Engineering practice staff. Her career has encompassed managing staff, leading projects, advising clients, marketing and sales, programming, and engineering. Ellen is a magna cum laude 1974 graduate of Duke University, with majors in Computer Science and Psychology. She also has a Masters in Systems Engineering from Johns Hopkins University.
Ellen Wingrove
pdf file

Tuesday, November 7, 15:30 - 15:45

CONF: REFRESHMENT BREAK

Tuesday, November 7, 15:45 - 16:45

282 (EMT): #SpeedNOLA Hackathon Presentations

Room: Beauregard

On November 4th and 5th we held a Hackathon where we built innovative and fast applications that could impact the New Orleans community. Over 24 hours, attendees had access to free workshops on how to build highly performing applications using their favorite development and performance tools, and coded and designed to compete for $3,500 in cash prizes. This event was open to developers, entrepreneurs, marketers, and the 'code-curious' as an opportunity to learn and compete. In this session we will hear from those who competed and won as they present their solutions to the challenges.

284 (MFR): Megawhosis and Gigawhatsis?! Microprocessors Demystified, Transistors Explained and the Increased Importance of Well-written Software Discussed

Room: Pointe Coupe
3:45 Megawhosis and Gigawhatsis?! Microprocessors Demystified, Transistors Explained and the Increased Importance of Well-written Software Discussed
MFR
While many computer professionals are well versed in software and general hardware concepts, the microprocessor tends to be the least well-understood hardware component. This session is a crash course on microprocessors starting with fundamentals (e.g., What is a transistor? What is logic design? What is an instruction pipeline?) and progressing through Moore's Law and industry trends, high-frequency design considerations, and the importance of synergistic software-hardware co-optimization.
Presenter bio: David has been a part of the IBM Z Hardware Performance and Design teams for 17 years. In his current role he is a client-facing lab representative for system performance inquiries and situations worldwide and co-develops the Large Systems Performance Reference (LSPR). He co-developed five generations of System z core performance models that have helped to shape the hardware designs and supply projection data to brand/marketing. He is a co-author of ~30 patents.
David Hutton
pdf file

285 (EMT): DevOps: Reliability, Monitoring and Management with Service Asset and Configuration Management

Room: LaFourche
3:45 DevOps: Reliability, Monitoring and Management with Service Asset and Configuration Management
EMT
Many organizations focus on how to ensure the reliability, manageability and security of both their critical infrastructure and their core application base. To stay competitive, these organizations need to minimize the risk of service interruptions when they upgrade systems, introduce new features or retire old ones. These interruptions can affect the business and the reputation of the firm, especially when the service runs 24x7x365. Once the customer base and infrastructure reach a certain size, old approaches stop working. In this article we present our journey into the world of DevOps Service Management and the SACM (Service Asset and Configuration Management) system and processes we are building to monitor and control our distributed environment, so that we deliver quality services that are aligned with the business goals and objectives.
Presenter bio: Yuri Ardulov is a Principal System Architect who works with all of the technical leaders to move forward with new technologies, maintain the current state of the art and evolve the company to the next level. Having spent the last 20 years working in different SaaS companies, Yuri knows how to organize a service with five-nines availability, how to make it efficient, manageable and reliable, and how to organize effective collaboration between engineering and operations. Yuri has delivered SaaS offerings at many well-known companies in Silicon Valley, including WebEx, Cisco, eBay and RingCentral. Yuri holds an MS in computer science from Azerbaijan Technical University.
Yuri Ardulov
pdf file

286 (CAP): Capacity Management Essentials: a Framework for Capacity Analysis

Room: Feliciana West
3:45 Capacity Management Essentials: a Framework for Capacity Analysis
CAP
What are the essential steps of a Capacity Analysis? This is an introduction to the topic, focusing on the required elements. We begin with defining the purpose of the capacity study, analyze historical measurements, proceed to the 'what-if' phase, and report our results. Actual capacity study content is used to illustrate the principles described.
Presenter bio: Debbie Sheetz joined the Capacity Practice of MBI Solutions, LLC as a Principal Consultant in August 2015. She provides in and out-of-the-box solutions for capacity and performance questions as a Professional Service, specializing in Distributed Systems platforms and BMC Software's Capacity Management software. Previous Presentations • CMG 2006, 2007, 2008, 2009, 2010, 2012, 2013 • US regions: • Boston 2008, 2010, 2014, 2015 • Connecticut 2008, 2010, 2011, 2013, 2015 (twice) • Midwest (Chicago) 2011 • DC (National Capital) 2012 • New York 2009 • St. Louis 2008, 2010, 2016 (twice) • Southern 2009 • Southern California 2013 • International regions: • UK CMG 2007, 2009, 2010, 2011 • CMG Canada (Toronto) 2011
Debbie Sheetz

287 (PERF): PANEL: Application Performance Management in Complex Multi-Platform Environments

Room: Feliciana East
3:45 PANEL: Application Performance Management in Complex Multi-Platform Environments
PERF
What tools are available to track performance in complex multi-platform environments? Using Application Performance Management (APM) across multiple layers / platforms. Factoring in traditional / legacy systems. "Best of breed" vs. Integrated tools. Overheads vs. depth of insights. Costs vs. benefits. Panelists: Andreas Grabner (Dynatrace), Jonah Kowall (AppDynamics), others TBD
Presenter bio: Alex Podelko has specialized in performance since 1997, working as a performance engineer and architect for several companies. Currently he is a Consulting Member of the Technical Staff at Oracle, responsible for performance testing and optimization of Enterprise Performance Management and Business Intelligence (a.k.a. Hyperion) products. Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. His collection of performance-related links and documents (including his recent papers and presentations) can be found at www.alexanderpodelko.com. He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko. Alex currently serves as a director for the Computer Measurement Group (CMG, http://cmg.org), an organization of performance and capacity planning professionals.
Presenter bio: Jonah Kowall is the Vice President of Market Development and Insights at AppDynamics. Jonah has a diverse background including 15 years as an IT practitioner at several startups and larger enterprises with a focus on infrastructure and operations, security, and performance engineering. His experience includes running tactical and strategic operational initiatives and monitoring of infrastructure and application components. Jonah previously worked at Gartner as a research Vice President, specializing in availability and performance monitoring and IT operations management. His research focused on IT leaders and CIOs and he has spoken at many conferences on these topics. Jonah led Gartner's influential application performance monitoring and network performance monitoring and diagnostics magic quadrants.
Presenter bio: Andreas has been working in software quality for the past 15 years, helping companies from small startups to large enterprises figure out why their current applications fall short on quality and how to prevent quality issues in future development. He is a regular speaker at international conferences, meetups & user groups, and has presented at DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West and STPCon in recent years. Besides being excited about software quality, he is also an enthusiastic salsa dancer.
Presenter bio: I first started working in the capacity management field in 2000. Initially I was involved with a product for Unisys Mainframes, and through various roles as both a user and vendor of capacity planning and management software, I have spread my experience over everything from AS400 to VMware. For the past 10 years I've been working for Metron as a Consultant. This role continues to bring me into contact with capacity management teams from all sectors, and using a wide variety of technology.
Alexander Podelko, Jonah Kowall, Andreas Grabner, Phillip Bell

Tuesday, November 7, 17:30 - 19:30

CMGT: Big Easy Bar Crawl

Join a group or create your own for a local (or not so local) Bar Crawl on Tuesday evening.

Wednesday, November 8

Wednesday, November 8, 08:00 - 08:30

CONF: Continental BREAKFAST

Louisiana Ballroom I

Wednesday, November 8, 08:45 - 09:45

3K1 (EMT): KEYNOTE: Large Data Interaction, Visualization, and Analysis

Room: Louisiana Ballroom II
8:45 Large Data Interaction, Visualization, and Analysis
EMT
Scientists, artists, and engineers are producing data at increasingly massive scales. However, visualization/graphics techniques for processing and interaction rarely scale as needed, leaving these users with a challenging scientific or creative process. In this talk, I will discuss how my ongoing work in designing and deploying scalable algorithms and techniques enables users to interactively explore, analyze, and process large data. In this way, my work solves a crucial need for users to understand and manipulate these big data sources. In particular, I will discuss how my work has provided scalable construction of large images as a mosaic of smaller images (panoramas in digital photography). These images can range from megapixels to hundreds of gigapixels in size. My work has brought this offline, slow, and tedious pipeline to a fully interactive and streaming user experience. I will detail a fast, light, and highly scalable approach for the computation of boundaries/seams in a mosaic. This technique has almost perfect linear scaling and provides the first, direct interaction for the editing of seams. In addition, I will describe a streaming, progressive solver for a Poisson system with application in mosaic color correction. This progressive approach allows interactive color correction on-the-fly without the need of a full solution. This technique has also been shown to scale for in-core, out-of-core, and distributed environments. I will conclude with a discussion of emerging and future needs in large data processing for visualization and graphics.
Presenter bio: Dr. Brian Summa is an assistant professor in computer science at Tulane University where he studies large data processing, analysis, and interaction. He has particular interests in visualization and computer graphics applications and has published in the top venues in both areas. Dr. Summa's work, in collaboration with his interdisciplinary partners, is helping advance several research disciplines such as geology, physics, and neurobiology. His work has supported the creation of large data processing software that scales from mobile devices to supercomputers. Applications from his research have been deployed at several university departments, Department of Energy national laboratories, a national health science laboratory, and even private corporations. He received his Ph.D. in computer science from the University of Utah where his dissertation focused on large image processing. He received his bachelor's and master's degrees from the University of Pennsylvania.

Wednesday, November 8, 09:50 - 10:50

302 (CAP): Screaming into the Void

Room: Beauregard
9:50 Screaming into the Void
CAP
Did you ever tell someone that something was going to go wrong? Repeatedly? But they didn't act on it? Did you feel like you were screaming into a void where nobody could hear you? You know it doesn't have to be this way, right? We could just quit, or perhaps you're fantasizing about something more drastic. I think we can come up with some better (safer) solutions to get proactive warnings acted upon. Hey, we might even find a way to make/save your organisation a few $$$ along the way. In the past few years, I've been lucky enough to work with a couple of really exciting new clients. They have managed to put in place Capacity Managers with a clear vision; Capacity Managers who really get what Capacity Management is about, and who have set out to not just do it, but to make sure the results of "it" are acted upon. I hope to pass on some of that experience to you, to help you show off exactly what you do for the organisation you work for.
Presenter bio: I first started working in the capacity management field in 2000. Initially I was involved with a product for Unisys Mainframes, and through various roles as both a user and vendor of capacity planning and management software, I have spread my experience over everything from AS400 to VMware. For the past 10 years I've been working for Metron as a Consultant. This role continues to bring me into contact with capacity management teams from all sectors, and using a wide variety of technology.
Phillip Bell

304 (MFR): Best Paper CMG Brazil: z/OS 2.3 in Clouds

Room: Pointe Coupe
9:50 z/OS 2.3 in Clouds
MFR
In this talk we will hear the news of z/OS 2.3, focusing on the "hype" of the moment, namely: Clouds, Big Data, and Analytics (Cognitive). Come watch, and make sure the mainframe is ready for the future, whatever it may be ...
Presenter bio: Alvaro Guimaraes Salla is a Senior Consultant. He has worked as a Systems Engineer for all mainframe customers in Brazil and as a mainframe instructor at IBM Poughkeepsie (USA) and IBM La Hulpe (Belgium). He currently teaches and develops educational material covering the mainframe platform and all mainframe topics for young professionals throughout the banks of the Brazilian government. Alvaro is also the author of many technical publications and has collaborated in the development of more than forty Redbooks (IRD, Parallel Sysplex, VSAM, ABCs, WLM, RMF). He is a regular conference speaker, loves long distance running and is a savior of stray animals.
Alvaro Salla
pdf file

305 (PERF): Continuous Performance Testing: Myths and Realities

Room: LaFourche
9:50 Continuous Performance Testing: Myths and Realities
PERF
While the development process is moving towards all things continuous, performance testing remains rather a gray area. Some continue to do it in the traditional pre-release fashion; some claim 100% automation and full integration into their continuous process. We have a full spectrum of opinions on what should be done, when, and how in regard to performance. The issue here is that context is usually not clearly specified, while context is the main factor: depending on context, the approach may (and probably should) be completely different. Full success in a simple (from the performance testing point of view) environment doesn't mean that you may easily replicate it in a difficult environment. The speaker will discuss the issues of making performance testing continuous in detail, illustrating them with personal experience where possible.
Presenter bio: Alex Podelko has specialized in performance since 1997, working as a performance engineer and architect for several companies. Currently he is a Consulting Member of the Technical Staff at Oracle, responsible for performance testing and optimization of Enterprise Performance Management and Business Intelligence (a.k.a. Hyperion) products. Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. His collection of performance-related links and documents (including his recent papers and presentations) can be found at www.alexanderpodelko.com. He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko. Alex currently serves as a director for the Computer Measurement Group (CMG, http://cmg.org), an organization of performance and capacity planning professionals.
Alexander Podelko
pdf file

306 (PERF): Performance Engineering and Testing Using Cloud Based Tools

Room: Feliciana West
9:50 Performance Engineering and Testing Using Cloud Based Tools
PERF
Cloud performance tools are taking over the performance engineering field. We will walk through a case study of a city-wide transit system card implementation application built on modern JavaScript libraries. The project had a tight timeline and needed to use open-source tools and a cloud vendor to run the tests. We will walk through the performance characterization of the application, use cases, number of tests, the APM tool used, DevOps integration, and test results, along with cloud vendor caveats that will be very helpful to attendees who are looking to move to engineering and testing from the cloud.
Presenter bio: Performance Engineering Architect
Mohit Verma
pdf file

307 (EMT): PANEL: Can Performance Engineering Leverage Machine Learning and AI?

Room: Feliciana East
9:50 PANEL: Can Performance Engineering Leverage Machine Learning and AI?
EMT
Machine Learning, Deep Learning, AI are everywhere. How are we leveraging them in Performance work? Performance testing, modeling, data analytics -- everything goes! Come hear industry experts share challenges and achievements in adopting AI.
Presenter bio: Anoush Najarian is a Software Engineering Manager at MathWorks in Natick, MA where she leads the MATLAB Performance Team. She holds Master's degrees in Computer Science and Mathematics from the University of Illinois at Urbana-Champaign, and an undergraduate degree in CS and applied math from the Yerevan State University in her native Armenia. Anoush has been serving on the CMG Board of Directors since 2014, and has served as the Social Media chair for the CMGimPACt conferences.
Presenter bio: Ramya R Moorthy has 14+ years of experience in application performance analysis & capacity planning. She is the Co-Founder of QA EliteSouls LLC (US) & Founder of EliteSouls LLP (India). She has done consulting for several clients across business domains, solving their complex performance & capacity problems. Performance prediction analysis, performance modelling & predictive performance analytics are her key areas of interest. She enjoys solving technical problems for assuring system performance, scalability & capacity. She runs an online academy for performance engineers. She likes teaching & mentoring to build elite professionals for the future. She is an active blogger, conference speaker & writer. She is part of the CMG India board of directors 2017.
Presenter bio: Boris Zibitsker is a specialist in predictive analytics. As CEO of BEZNext, he manages the development of new technologies and consults with companies on applying predictive and prescriptive analytics to the optimization of business and IT. As Founder, CTO and Chairman of BEZ Systems, he managed the development of capacity management tools for Teradata, Oracle, DB2 and SQL Server until the company was sold to Compuware. As CTO of Modeling and Optimization at Compuware, he developed algorithms for detecting and predicting performance and availability problems. As an Adjunct Associate Professor, Boris taught graduate courses at DePaul University in Chicago and seminars at Northwestern University, the University of Chicago and the Relational Institute. He also taught seminars in the USA, South America, Europe, Asia and Africa. He is the author of many papers and an organizer of Big Data Predictive Analytics training and certification.
Presenter bio: Andreas has been working in software quality for the past 15 years helping companies from small startup to large enterprise figuring out why their current application falls short on quality and how to prevent quality issues for future development. He is a regular speaker at international conferences, meetups & user groups. He has done DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West or STPCon in the recent years. Besides being excited about software quality he is also an enthusiastic salsa dancer
Anoush Najarian, Ramya Ramalinga Moorthy, Boris Zibitsker, Andreas Grabner

Wednesday, November 8, 10:50 - 11:00

CONF: BEVERAGE BREAK

Wednesday, November 8, 11:00 - 11:30

322a (EMT): Benchmarking Deep Learning

Room: Beauregard
11:00 Benchmarking Deep Learning
EMT
How to measure the performance and scalability of key use cases, prediction and training from scratch, across deep learning frameworks: MATLAB, TensorFlow, MXNet, Caffe, and Theano. We will share the performance metrics we obtained and what we learned from this effort.
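A minimal sketch of the measurement harness for the two named use cases, prediction and training, could look like the following; the workloads here are stand-ins, not any of the listed frameworks.

```python
import time

def benchmark(label, fn, repeats=5):
    """Time fn several times and report the best wall-clock run (a common convention)."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    print(f"{label}: best of {repeats} = {min(times) * 1000:.1f} ms")

def fake_training_step():   # stand-in for one framework-specific training iteration
    sum(i * i for i in range(200_000))

def fake_prediction():      # stand-in for one batched inference call
    sum(i * i for i in range(20_000))

benchmark("training step", fake_training_step)
benchmark("prediction", fake_prediction)
```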
Presenter bio: Rohith Bakkannagari is a Performance Engineer at The MathWorks, Inc., focusing on performance analysis, scalability, and benchmarking, and passionate about the automation of performance tools, making them accessible to a wider community of engineers. He holds a master's degree in Electrical Engineering from West Virginia University and is an avid cricket player.
Rohith Bakkannagari

Wednesday, November 8, 11:00 - 12:00

324 (CAP): Impact on Existing Security and Compliance when Migrating to Third-Party Hosted Cloud

Room: Pointe Coupe
11:00 Impact on Existing Security and Compliance when Migrating to Third-Party Hosted Cloud
CAP
The explosive growth of migrating to third-party hosting and cloud providers has enabled companies to expand their operational footprint and capabilities. During this conversion from in-house to hosted services, many companies are relying on and assuming that the host will provide secure and compliant operational environments, but buyer beware. By default, moving to third-party environments may not be as easy, cheap or secure as advertised. For example, you must ensure that all security operations and compliance requirements are addressed and that no critical operations or services are assumed or fall through the cracks. Contracts must be clear on SLAs, policies and procedures, security administration, logging and reporting, adherence to compliance requirements and liability limitations. This presentation will address the high-level topics:
- Considerations when Migrating to Third-Party Hosted Cloud
- Going Forward when Migrating to Third-Party Hosted Cloud
- Overall Security Risks Across On-Premise and Hosted Cloud Environments
Presenter bio: Prior to joining Tier4 Advisors on January 9th, 2017, Tino Mantella had amassed over 25 years of experience leading three of the nation's most prestigious organizations: the National Arthritis Foundation, the YMCA of Metropolitan Chicago, and, most recently, the Technology Association of Georgia. Through each opportunity, Mantella focused on creating a best-in-class experience for every client/customer to achieve quantum growth. Mantella's 12-year career at TAG provided him with a wealth of knowledge regarding the needs and opportunities of the nation's technology stakeholders. During Tino's tenure at TAG, Georgia was propelled to one of the top states in the USA for technology.
Presenter bio: Tom has 30 years in the information technology arena as a trusted advisor, consulting with global companies and critical infrastructure entities; he has also worked with many government agencies and held several clearances and certifications. He is a highly experienced cyber & physical security, data center, critical infrastructure, risk management, data integration and information technology leader dedicated to the successful management, design, development, delivery and recovery of enterprise-wide services for global companies and government agencies.
Tino Mantella, Tom Strickland

Wednesday, November 8, 11:00 - 11:30

325a (EMT): What I Learned about DevOps Around the World!

Room: LaFourche
11:00 What I Learned about DevOps Around the World!
EMT
DevOps is one of the most abused and overrated marketing terms of recent years! That's not an alternative fact! It's just Andi's opinion! YET - it is a very real thing that has allowed many software companies to transform the way they think about software engineering. DevOps can mean something totally different, though, depending on who you are and what type of business your company is doing. To clarify things, Andi gives us insights into how he explains the benefits to "DevOps newbies" and how software companies around the world implement it in their own ways. Andi will answer: What does it really mean for developers, testers and operators? What will change? How does Facebook deploy twice a day without big issues? How does DevOps work in financial, government or healthcare settings where you have tight regulations? Does it mean Devs are responsible for Ops? Does it only work in the cloud? Or can we apply it to "old fashioned" on-premise software as well? Learn for yourself and make up your own mind on whether DevOps is just a marketing term or something that can benefit you!
Presenter bio: Andreas has been working in software quality for the past 15 years, helping companies from small startups to large enterprises figure out why their current applications fall short on quality and how to prevent quality issues in future development. He is a regular speaker at international conferences, meetups & user groups, and has presented at DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West and STPCon in recent years. Besides being excited about software quality, he is also an enthusiastic salsa dancer.
Andreas Grabner
pdf file

326a (MFR): Cost Savings While Increasing Capacity

Room: Feliciana West
11:00 Cost Savings While Increasing Capacity
MFR
IT organizations struggle with two seemingly divergent challenges within their mainframe shops: the need to control spiraling operational costs while also keeping ahead of the capacity curve to ensure their systems have the capacity to handle both planned growth and seasonal upswings. The bad news is that most organizations are faced with the undesirable requirement to "pick one": either control costs or make sure capacity needs are met. And that difficult choice applies to all IT organizations—from those concerned with just keeping the lights on, to those ready to make complete IT transformations. The good news is that, whatever your situation, you don't have to choose; there are solutions out there for any IT organization that allow for simultaneously controlling costs while managing capacity.
Presenter bio: Andrew Armstrong is the Chief Customer Officer at DataKinetics, the leader in mainframe performance and optimization solutions. As CCO, one of Andrew's most important responsibilities is to strive to understand the nuances of the Fortune 1000, to be aware of the IT challenges that they face, and the strengths and weaknesses of the solutions that are available to them. An experienced C-level executive, he has put in place the right products and partnerships, and the right sales and marketing programs, and has transformed businesses to attain continuous profitability and growth while maintaining customer satisfaction as a top-level concern.
Presenter bio: Over 25 years in the IT industry as both a customer and a consultant. As a customer, John designed, implemented and maintained many critical projects such as WLM Goal Mode and GDPS/Data Mirroring. He has extensive experience with many performance analysis tools and techniques at the hardware, OS, and application levels. As a consultant, John has assisted many of the world's largest datacenters with their z/OS performance challenges and held Subject Area Chair positions with CMG for both Storage and Capacity Planning for several years. John has hosted many sessions at CMG, SHARE, etc., as well as several regional user groups. In 2017, John joined forces with internationally-recognized performance specialist Peter Enrico.
Andrew Armstrong, John Baker

Wednesday, November 8, 11:35 - 12:05

322b (CAP): Behaviour-driven Cost Reduction for IT Hardware & Software

Room: Beauregard
11:35 Behaviour-driven Cost Reduction for IT Hardware & Software
CAP
Using his work experience, the presenter will share a series of essentials about controlling IT hardware & software costs. He will use examples drawn primarily from mainframe systems to illustrate effective use of workload characterization; ensuring internal customers understand the cost of system resource utilization; early adoption of technology features; and use of capacity limiting mechanisms to maximize value in Development environments. I can't share a lot of detail, of course, but in this short discussion I'll review how some of those initiatives have helped us keep our mainframe cost per unit of capacity dropping over the years.
Presenter bio: A senior I/T professional, thought leader, educator, planner and team leader with over twenty years of experience in capacity management, project initiation, process development, change management and implementation, problem management and disaster recovery and business continuity management for large corporate I/T infrastructures on a variety of computing platforms.
Jonathan Gladstone
pdf file

325b (EMT): Can a Robot Read Your Performance Reports? Deep Learning and Machine Learning for Performance and Capacity Engineers

Room: LaFourche
11:35 Can a Robot Read Your Performance Reports? Deep Learning and Machine Learning for Performance and Capacity Engineers
EMT
We show how to apply deep learning and machine learning to performance testing, data analysis, and modeling.
Presenter bio: Anoush Najarian is a Software Engineering Manager at MathWorks in Natick, MA where she leads the MATLAB Performance Team. She holds Master's degrees in Computer Science and Mathematics from the University of Illinois at Urbana-Champaign, and an undergraduate degree in CS and applied math from the Yerevan State University in her native Armenia. Anoush has been serving on the CMG Board of Directors since 2014, and has served as the Social Media chair for the CMGimPACt conferences.
Anoush Najarian

326b (CAP): Demonstrating Return on Investment for Capacity Management

Room: Feliciana West
11:35 Demonstrating Return on Investment for Capacity Management
CAP
As environments grow larger and more complex, problems related to a lack of capacity management continue to expand. When a company finally realizes that it has created a large distributed and virtualized environment into which it has little visibility, it can be difficult to implement processes and software to manage its capacity. Determining when a company will reach this point is also difficult. Many times the determining factor will be a management focus on return on investment (ROI): is the cost of doing capacity management in these environments less than the cost of continuing with existing practices? Evaluating how effective an investment in capacity management will be starts with analyzing the environment in which you wish to establish better visibility and control. If no processes are in place, then it may include everything. If some capacity management is already in place, then a scoping exercise may be required initially. Existing processes may also be included for improvement. A cost-benefit analysis can then be started by taking inventory of unmanaged hardware and analyzing other extraneous and environmental costs. Recurring costs without capacity management can then be compared to the savings produced by implementing capacity management.
• Systems infrastructure sprawl
• Indicators of unmanaged capacity
• Scope and direction
• Variables and information gathering
• ROI models and cost benefit analysis
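The cost-benefit comparison described above reduces to a short calculation; all figures below are invented for illustration.

```python
cost_without_cm = 900_000.0  # recurring annual cost of unmanaged growth (invented)
cost_with_cm    = 650_000.0  # annual infrastructure cost after right-sizing (invented)
cm_investment   = 150_000.0  # annual capacity management tooling and staff (invented)

net_benefit = (cost_without_cm - cost_with_cm) - cm_investment
roi = net_benefit / cm_investment
print(f"Net annual benefit: ${net_benefit:,.0f} (ROI = {roi:.0%})")
```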
Presenter bio: Dale is a consultant at Metron-Athene with over 15 years of experience in systems performance and capacity management. Dale has broad knowledge in many aspects of capacity management and performance engineering. He has worked at some of the largest financial firms in the United States. He holds many certifications across a diverse set of technologies, and a degree in computer information systems from Excelsior college. Dale attended his first CMG conference in 2000.
Dale Feiste

327b (EMT): Emerging Workloads: Selecting the Best Execution Venue

Room: Feliciana East
11:35 Emerging Workloads: Selecting the Best Execution Venue
EMT
The best execution venue for platform selection is based on the fundamental principle that one size does not fit all and that local factors matter. This session discusses these factors and examines how to choose the best fit for your emerging workloads with a focus on hybrid cloud, blockchain, and cognitive applications as examples.
Presenter bio: Elisabeth Stahl is an IBM Distinguished Engineer and has been working in IT Infrastructure Optimization for over 25 years. She is a member of the IBM Academy of Technology, IEEE Senior Member, Computer Measurement Group Program Chair and is on the Board of Directors at The Music Settlement. Elisabeth received a BA in Mathematics from the University of Pennsylvania and an MBA from NYU. Follow her on Twitter @ibmperformance .
Elisabeth Stahl

Wednesday, November 8, 12:10 - 12:15

CONF: Women in Tech Luncheon (12:15 -1:15 PM)

Room: St. Landry

Women in Tech returns to imPACt. We have some great ideas about events and planning for 2018 and want to hear from you. Join us Wednesday at lunch. Get your lunch through the main buffet and meet us in St. Landry on the 9th floor.

Wednesday, November 8, 12:15 - 13:15

CONF: LUNCH

Louisiana Ballroom I

Wednesday, November 8, 13:15 - 14:15

342 (EMT): 2017 BEST PAPER: Continuous Availability: From the Shift Paradigm to Unmanned Operation. Is it Still a Dream?

Room: Beauregard
1:15 Continuous Availability: From the Shift Paradigm to Unmanned Operation. Is it Still a Dream?
EMT
Since the beginning, the operational management of traditional data centers has been based on human intervention. The pervasiveness of ICT services has pushed data centers to run operations continuously (24/7), and as a consequence there has been an exponential increase in ICT complexity. As a matter of fact, nowadays IT infrastructure requires a lot of hands-on maintenance, and reliance on human operators for managing data centers is the main obstacle to their evolution. This paper answers the following question: is it really possible to have an Autonomic Data Center?
Presenter bio: Silvio Orsini has been working at Banca d'Italia for around 30 years in the IT Directorate General after 7 years of experience in the ENEL ICT Department. Currently he is Deputy Head of IT Operations Directorate. S. Orsini has represented Banca d'Italia in several bodies and working groups at Eurosystem level and has taken part in international programs. In particular he was the infrastructure coordinator in the 3CB (BBk, BdF, BdI) - 4CB (BBk, BdF, BdI, BoE) cooperation for designing and implementing the TARGET2 and the TARGET2 Securities projects. At National level he was, inter alia, board member of the Computer Measurement Group ITALIA in the period 1992-1994 and board member of the Guide Share Europe in 1999-2003. Orsini graduated cum laude in Electronic Engineering and in Aerospace Engineering. He also achieved a Master in Controlling and Computing Engineering Systems. He was Teaching Assistant and Assistant Professor at the University of Rome from 1994 to 2001.
Presenter bio: Marco Capotosto has been working at Banca d'Italia since 2013 in the IT Directorate General, where he works as a system engineer with particular focus on service continuity and business continuity topics. Capotosto has represented Banca d'Italia in several working groups at the Eurosystem level and has taken part in international programs. Capotosto graduated cum laude in Electronic Engineering.
Presenter bio: Pietro Tiberi has been working at Banca d'Italia since 2013 in the IT Directorate General after 13 years of experience in the Wind Telecomunicazioni Network and ICT Department. P. Tiberi has represented Banca d'Italia in several working groups at Eurosystem level and has taken part in international programs. In particular he was one of the coordinators of the technical team supporting TARGET2 and the TARGET2 Securities operations. Tiberi graduated in Electronic Engineering and Space Physics.
Silvio OrsiniMarco CapotostoPietro Tiberi
pdf file

344 (CAP): Removing Silos While Developing A Comprehensive Hybrid Cloud Resiliency Solution

Room: Pointe Coupe
1:15 Removing Silos While Developing A Comprehensive Hybrid Cloud Resiliency Solution
CAP
Kim Eckert works in Global Technology Services, which is an end user of IBM and other vendors' technologies. As such, this presentation will explain some of the components needed for a successful Enterprise Architecture business transformation. Kim will demonstrate the value of a technology roadmap and how analytics can produce quicker outcomes that help businesses become more agile. Hybrid cloud and ITaaS empower different Lines of Business (LOB) to be in control of their destiny rather than having IT dictate how to run their operations. Understanding the infrastructure and its applications is key to determining whether a workload is suitable for cloud and, if it is, which platform matches the business requirements. Attendees will get a high-level explanation of the steps needed to do an assessment, along with examples of recommendations to ensure reliability and stability in a hybrid environment.
Presenter bio: Kim A. Eckert is one of IBM's senior technical female leaders with the designation as Senior Technical Staff Member. With 17 years of prior professional experience, Kim joined IBM in 2001, bringing a portfolio of expertise in software development, mainframe systems programming, data center operations, and management. Kim's practical experiences span several technical domains including Enterprise Architecture, midrange virtualization, and software licensing optimization. Kim, as a Chief Architect, is an acknowledged leader who is responsible for setting the technical vision and strategy while generating important innovation and standards for delivery. Additionally, she is Open Group Master Certified and IBM Architect certified.
Kim Eckert
pdf file

345 (MFR): Understanding MultiHop FICON Performance, Management, and Configurations

Room: LaFourche
1:15 Understanding MultiHop FICON Performance, Management, and Configurations
MFR
This paper/presentation will discuss the newly announced support by IBM for MultiHop FICON and the implications it has for performance, management, and business continuity architectures. We will go into detail on the Fabric Shortest Path First (FSPF) protocol mechanism, limitations, best practices, and specific configurations supported.
Presenter bio: Dr. Steve Guendert is z Systems Technology CTO for Brocade Communications, where he leads the mainframe-related business efforts. He was inducted into the IBM Mainframe Hall of Fame in 2017 for his contributions in the area of I/O technology. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), and a member of the Computer Measurement Group (CMG). He is a former member of both the SHARE and CMG boards of directors. Steve has authored over 50 papers on the subject of mainframe I/O and storage networking, as well as two books.
Stephen Guendert
pdf file

346 (PERF): Streamlined Model-Driven Performance Engineering

Room: Feliciana West
1:15 Streamlined Model-Driven Performance Engineering
PERF
Performance anti-patterns often occur in system architecture and design and are not discovered until testing, when they are difficult and costly to fix; or worse, until the system fails in the market. This is especially consequential for Embedded and Cyber-Physical Systems (CPS). Software Performance Engineering (SPE) methods and models have been demonstrated to be effective at mitigating risks that threaten successful, timely completion of projects. Recent breakthroughs show promising results in automating performance modeling, thus greatly reducing the expertise and effort required for performance prediction. Unlike traditional labor-intensive, custom performance simulations, an SPE model interoperability framework streamlines the evaluation of architecture and design options. This paper introduces the SPE modeling approach and the model interoperability framework. We explain the automated model-driven performance engineering process. A signal processing case study demonstrates the viability of mitigating risks of performance failures that threaten timely, successful completion of projects.
Presenter bio: Dr. Smith, a principal consultant of the Performance Engineering Services Division of L&S Computer Technology, Inc., is known for her work in defining the field of SPE and integrating SPE into the development of new software systems. Dr. Smith received the Computer Measurement Group's prestigious AA Michelson Award for technical excellence and professional contributions for her SPE work. She also authored the original SPE book: Performance Engineering of Software Systems, published in 1990 by Addison-Wesley, and approximately 100 scientific papers. She is the creator of the SPE·ED™ performance engineering tool. She has over 30 years of experience in the practice, teaching, research and development of the SPE performance prediction techniques.
Connie Smith

347 (MFR): z/OS SMT: Deciding Whether to Enable

Room: Feliciana East
1:15 z/OS SMT: Deciding Whether to Enable
MFR
Should you enable Simultaneous Multithreading (SMT) on your z13 or z13s for your zIIP workloads? That is an interesting question that doesn't have an easy answer. If you do enable SMT you should be prepared to review a number of SMT-specific measurements as well as your standard application measurements. Since there is some potential work in terms of evaluating the impact of the change, you may want to first consider whether it seems like it may be beneficial. While it is impossible to predict the actual impact SMT will have on a given system, there are situations which don't lend themselves to SMT. Join Scott Chapman for this session where he'll explain some of the measurements that may be leading indicators for whether or not SMT should be explored. Existing pertinent SMF/RMF measurements will be reviewed, the SMT measurements will be briefly introduced, and a decision tree for deciding whether to stay in single-threaded mode or try SMT-2 will be introduced.
Presenter bio: Scott Chapman has over two decades of experience in the IBM mainframe environment. Much of this experience has focused on performance, from both the application and systems perspective. He's written COBOL application code and Assembler system exit code. His mainframe responsibilities have spanned application development, performance tuning, capacity planning, software cost management, system tuning, sysplex configuration, WLM configuration, and most other facets of keeping a mainframe environment running effectively. Scott has spoken extensively at user group meetings and was honored to receive the Computer Measurement Group's 2009 Mullen award, and also co-authored CMG's 2012 best paper. Scott is a founding steering committee member of the Central Ohio Mainframe User's Group.
Scott Chapman
pdf file

Wednesday, November 8, 14:20 - 14:50

362 (CAP): The Model Factory - Correlating Server and Database Utilization with Customer Activity

Room: Beauregard
2:20 The Model Factory - Correlating Server and Database Utilization with Customer Activity
CAP
This disclosure relates generally to system modeling, and more particularly to systems and methods for modeling computer resource metrics. In one embodiment, a processor-implemented computer resource metric modeling method is disclosed. The method may include detecting one or more statistical trends in aggregated interaction data for one or more interaction types, and mapping each interaction type to one or more devices facilitating the transactions. The method may further include generating one or more linear regression models of a relationship between device utilization and interaction volume, and calculating one or more diagnostic statistics for the one or more linear regression models. A subset of the linear regression models may be filtered out based on the one or more diagnostic statistics. One or more forecasts may be generated using the remaining linear regression models, using which a report may be generated and provided.
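A hedged sketch of the modeling flow the abstract outlines: regress device utilization on interaction volume, check a diagnostic statistic, filter weak models, and forecast. The data, the R-squared threshold, and the forecast volume are all hypothetical, and this is not the authors' implementation:

    # Illustrative sketch: fit a linear regression of device utilization against
    # interaction (transaction) volume, keep only models with acceptable
    # diagnostics, and forecast utilization at a projected volume.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    volume = rng.uniform(1_000, 10_000, size=200)                # interactions/hour
    cpu_util = 5 + 0.006 * volume + rng.normal(0, 3, size=200)   # % CPU busy

    model = LinearRegression().fit(volume.reshape(-1, 1), cpu_util)
    r_squared = model.score(volume.reshape(-1, 1), cpu_util)     # diagnostic statistic

    R2_THRESHOLD = 0.7                                           # assumed filter criterion
    if r_squared >= R2_THRESHOLD:
        forecast_volume = np.array([[15_000]])                   # projected demand
        print(f"R^2={r_squared:.2f}; forecast CPU util "
              f"{model.predict(forecast_volume)[0]:.1f}%")
    else:
        print(f"R^2={r_squared:.2f}; model filtered out")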
Presenter bio: I started in 1979 as an IBM/370 system engineer. In 1986 I got my PhD in Robotics at St. Petersburg Technical University (Russia) and then worked as a professor teaching CAD/CAM and Robotics for about 12 years. I published 30 papers and gave several conference presentations in the Robotics and Artificial Intelligence fields. In 1999 I moved to the US and worked at Capital One bank as a Capacity Planner. My first CMG paper was written and presented in 2001. The next one, "Exception Detection System Based on MASF Technique," won a Best Paper award at CMG 2002 and was presented at UKCMG 2003 in Oxford, England. I have given other technical presentations at IBM z/Series Expo and at Southern and Central Europe CMG. After working more than 2 years as the Capacity team lead for IBM, I worked for SunTrust Bank for 3 years and then at IBM for 2+ years as a Sr. IT Architect. Now I work for Capital One bank as an IT Manager. Since 2015 I have been a CMG director. I run my tech blog at www.Trub.in
Igor Trubin
pdf file

364 (MFR): SMF 99 - The Lost Gold of WLM Analytics

Room: Pointe Coupe
2:20 SMF 99 - The Lost Gold of WLM Analytics
MFR
The SMF 99 records contain a wealth of information related to WLM algorithm decisions. They were originally developed to trace WLM decisions, but over the years they have been expanded to provide insights into HiperDispatch, Capping, Group Capacity Limits, machine topology, and more. Most customers have the SMF 99 WLM decision records turned off due to their high volume. However, there are many reasons to turn these records on during performance debugging and analysis. During this presentation, Peter Enrico will provide an introduction to the SMF 99 records, as well as show some very practical uses for these records and a number of performance insights these records will provide.
Presenter bio: Peter Enrico has strong and diverse experience with the IBM zArchitecture platforms, and a solid background in z/OS, Workload Manager, Parallel Sysplex, UNIX System Services, and WebSphere e-business performance. Peter also has extensive experience measuring, analyzing, and tuning the performance of z/OS systems, Sysplexes, and subsystems. Peter's abilities extend beyond just z/OS performance and capacity planning. He is considered a highly qualified and effective communicator and seminar instructor. For details of Peter's performance workshop and seminar schedule, his services, and access to past papers and presentations, please visit www.pivotor.com or www.epstrategies.com.
Peter Enrico

365 (PERF): Automated Performance Testing in Preproduction with CI and OSS Tools

Room: LaFourche
2:20 Automated Performance Testing in Preproduction with CI and OSS Tools
PERF
The worst time to learn that a business-critical performance metric got worse is once a release is in production. The earlier you can detect a problem, the easier it is to resolve. Billy Hoffman explains how to integrate open source performance testing tools like Lighthouse, WebPagetest, and others into your build/CI systems, stopping performance regressions and providing transparency.
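As one concrete illustration of wiring a performance budget into CI, the sketch below assumes a prior build step has already produced a Lighthouse JSON report at a hypothetical path; the budget values are placeholders, not recommendations from the talk:

    # Minimal CI gate on performance budgets. Assumes an earlier step ran
    # Lighthouse with JSON output and wrote the report to REPORT_PATH.
    import json
    import sys

    REPORT_PATH = "lighthouse-report.json"   # hypothetical path produced by the build
    BUDGETS_MS = {
        "first-contentful-paint": 2000,
        "interactive": 5000,
    }

    with open(REPORT_PATH) as f:
        report = json.load(f)

    failed = []
    for audit_id, budget in BUDGETS_MS.items():
        value = report["audits"][audit_id]["numericValue"]   # milliseconds
        if value > budget:
            failed.append(f"{audit_id}: {value:.0f} ms > {budget} ms")

    if failed:
        print("Performance budget exceeded:\n" + "\n".join(failed))
        sys.exit(1)                                          # fail the CI build
    print("All performance budgets met.")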
Presenter bio: As an entrepreneur with a decade of experience in Application Performance Management, Craig is responsible for product management, marketing, sales, and operations. Craig was previously a Solutions Consultant for Compuware/Gomez where he designed and implemented application performance solutions for the Fortune 100. Before that, he provided the Fortune 1000 and government agencies with systems to test and manage their networks while at Network Orange. Craig is active in the ATDC at Georgia Tech and can be found on the pitch with Atlanta Old White Rugby Football Club.
Craig Hyde

366 (PERF): Benchmarking ML Algorithms and Libraries for Big Data Applications

Room: Feliciana West
2:20 Benchmarking ML Algorithms and Libraries for Big Data Applications
PERF
The selection of ML algorithms and ML libraries during the design of Big Data applications affects the performance, scalability, accuracy, and cost of running the application. The objective of our project is to develop a methodology and create a collaborative environment among several universities, research centers, and vendors for benchmarking different algorithms. A modeling technique is used to convert data collected in one environment to a baseline environment. The goal is to take the business requirements of a new application and apply analytics to find an effective solution.
Presenter bio: Boris Zibitsker is a specialist in Predictive Analytics. As CEO of BEZNext, he manages development of new technologies and consults companies on applying predictive and prescriptive analytics for optimization of business and IT. As Founder, CTO and Chairman of BEZ Systems, he managed development of the capacity management tools for Teradata, Oracle, DB2 and SQL Server until the company was sold to Compuware. As CTO of Modeling and Optimization at Compuware he developed algorithms for detecting and predicting performance and availability issues. As an Adjunct Associate Professor, Boris taught graduate courses at DePaul University in Chicago and seminars at Northwestern University, the University of Chicago and the Relational Institute. He also taught seminars in the USA, South America, Europe, Asia and Africa. He is the author of many papers and organizer of Big Data Predictive Analytics training and certification.
Boris Zibitsker

367 (MFR): Inside look of z/OS Workload Manager

Room: Feliciana East
2:20 Inside look of z/OS Workload Manager
MFR
Identifying and classifying workloads in business terms is a very important step for any business. On IBM z Systems running z/OS, several workloads with competing priorities and resource requirements can run simultaneously. IBM Workload Manager allows you to set an importance and a goal (velocity, response time) in business terms; WLM then manages resource allocation across workloads to achieve these goals. In this presentation we will address several questions, such as: Why and how should you classify workloads? How does z/OS WLM work with your workloads? The presentation will also clear up several concepts such as service definitions, service policies, and service periods.
Presenter bio: Hemanth Rama is a senior software engineer at BMC Software. He has 11+ years of working experience in IT. He holds 1 patent and has 2 pending patent applications. He works on the BMC MainView for z/OS, CMF Monitor, and Sysprog Services product lines and has led several projects. More recently he has been working on the BMC Intelligent Capping for zEnterprise (iCap) product. He holds a master's degree in computer science from Northern Illinois University (NIU), where he learned his mainframe skills. He writes regularly on LinkedIn Pulse, Destination z, BMC Communities, and his personal blog at https://path2siliconvalley.wordpress.com/
Hemanth Rama

Wednesday, November 8, 14:50 - 15:05

CONF: REFRESHMENT BREAK

Wednesday, November 8, 15:05 - 16:05

372 (PERF): The History and Future of Monitoring

Room: Beauregard
3:05 The History and Future of Monitoring
PERF
Putting monitoring into perspective must begin with historical context. History is a great fact base for judging the evolution of technology, and it helps guide us when we see specific patterns, with problems and solutions constantly repeating themselves; this is even more true within enterprises. What mistakes have we made that we should learn from and avoid? Which strategies were once relevant but no longer meet the needs of today's technologists as we move to increasingly smaller cross-functional teams? We look at past and future trends in infrastructure and application monitoring, event correlation, log analytics, and homegrown tooling; explain what APM technologies do and don't do; and take a lens to the rapid evolution of open source. As monitoring continually evolves, how will machine learning and artificial intelligence help solve problems? What does the future hold for understanding and measuring increasingly complex, diverse, and high-scale systems? This will only be possible by unifying monitoring stacks with the right context. We will outline the current market and segmentation according to industry analysts such as Gartner and IDC. Our discussion will focus on enterprise use cases rather than completely greenfield opportunities. Key takeaways:
o Historical view of monitoring
o The reasons for and creation of solutions targeting infrastructure, APM, event correlation, log analytics, and homegrown tools
o Open source technologies that help address monitoring, big data, and machine learning
o Market understanding based on analyst research and segmentation
Presenter bio: Jonah Kowall is the Vice President of Market Development and Insights at AppDynamics. Jonah has a diverse background including 15 years as an IT practitioner at several startups and larger enterprises with a focus on infrastructure and operations, security, and performance engineering. His experience includes running tactical and strategic operational initiatives and monitoring of infrastructure and application components. Jonah previously worked at Gartner as a research Vice President, specializing in availability and performance monitoring and IT operations management. His research focused on IT leaders and CIOs and he has spoken at many conferences on these topics. Jonah led Gartner's influential application performance monitoring and network performance monitoring and diagnostics magic quadrants.
Jonah Kowall
pdf file

374 (CAP): Practical Lessons for Business-Aligned Capacity Management

Room: Pointe Coupe
3:05 Practical Lessons for Business-Aligned Capacity Management
CAP
"In theory, theory and practice are the same. In practice, they are (not)". Join us in hearing from Sabre, leading solution provider to the travel industry, and Moviri, Professional Services in the Capacity Management and Performance Analytics. You will learn about how to apply QN Theory and Analytics tools to achieve Business-Aligned Capacity Management for IT Services in a complex production environment. During the presentation, we will walk you through the models used for several real IT Services: from dedicated to shared services, from n-n redundancy to full cross datacenter redundancy. By attending the session, as a Capacity Manager you will learn how to fence with your day-by-day challenges.
Presenter bio: Andrea is an IT Consultant for the Moviri Capacity Management Team. He has been involved in a number of high profile projects working directly with some of the largest companies in the US in a wide range of fields including financial services, healthcare, pharmaceuticals, insurance, retail sales, and logistics. Andrea received his Master's Degree in Computer Engineering from Politecnico di Milano. In his spare time he enjoys basketball, volleyball, playing guitar, and teaching himself the drums.
Presenter bio: Pat is currently a capacity planner at Sabre, a travel technology company. She received her BS Ed in Math and Computer Science from Missouri State University. Her IT career has progressed through many different roles (keypunch, computer operator, software developer, performance engineer, incident manager, team manager) in many different businesses (gray iron foundry, private college, reinsurance, shoe retailer, travel agency, airline). In her spare time she enjoys travel, attending live theatre and classical concerts, needlework, playing violin, and leading a weekly small group Bible study.
Andrea GalloPat Furrow
pdf file

375 (MFR): Best Paper Brazil: Planning and Performance Study in the Consolidation of Mainframe CECs

Room: LaFourche
3:05 Planning and Performance Study in the Consolidation of Mainframe CECs
MFR
The coexistence of LPARs in a single CEC can offer advantages such as more efficient use of resources (CPU and channels) and a reduced number of CECs (and consequently less energy, cooling, and physical space), while maintaining a single point of management. The objective of this study is to show the planning and the tools used in the process of consolidating two CECs into a single CEC, and the impact on post-consolidation performance, focusing on the use of processors and their cache structures.
Presenter bio: Gustavo F. Araujo was born on 06 June 1990 in Pocos de Caldas, Minas Gerais, Brazil. He attended the University of Sao Paulo, where he studied Materials Engineering. In 2015 he started working in IT, specifically on the Mainframe Capacity Planning and Performance team of ITAU UNIBANCO, one of the biggest Brazilian banks. Since then he has worked on data center migrations, WLM analysis, and the technology upgrade from zEC12 to z13, among other topics related to performance and capacity. In 2017 he was a speaker at CMG IMPACT Brazil (where his talk was considered one of the best presentations) and at IBM Systems Technical University.
Gustavo F. Araujo

376 (CAP): The Curse of P90: An Elegant Way to Overcome it Without Magic

Room: Feliciana West
3:05 The Curse of P90: An Elegant Way to Overcome it Without Magic
CAP
Over the decades of development of methodologies and metrics for IT capacity planning and performance analysis, percentile terminology has become the lingua franca of the field. It makes sense: percentiles are easy to interpret, not sensitive to outliers, and directly usable for approximating the distribution of the variable being measured for stochastic simulations. However, depending on which percentile is used, we can miss important information, like multimodality of the metric's distribution. Another, less obvious, downside of relying on percentiles comes into play when we size infrastructure for a high percentile of demand (e.g., p90). Given that it takes time to order, manufacture, receive, and install infrastructure, this means that we need to answer the statistically nontrivial question, "what will this percentile of demand be in one to three years?" This paper discusses the issues that arise in answering it and proposes an elegant way of resolving them.
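The paper's own method is not reproduced here; the sketch below is only a naive baseline that makes the underlying question concrete: given a demand history, extrapolate the trend in a weekly p90 series one year out. The synthetic data and the linear trend are assumptions:

    # A naive baseline, not the paper's method: compute weekly p90s of demand
    # and extrapolate a linear trend one year ahead.
    import numpy as np

    rng = np.random.default_rng(1)
    weeks = 104                                           # two years of daily history
    demand = 100 + 0.5 * np.arange(weeks * 7) + rng.gamma(2.0, 15.0, weeks * 7)

    weekly_p90 = np.percentile(demand.reshape(weeks, 7), 90, axis=1)
    t = np.arange(weeks)
    slope, intercept = np.polyfit(t, weekly_p90, 1)       # linear trend in the p90 series

    horizon = weeks + 52                                  # one year ahead
    print(f"Projected weekly p90 in one year: {slope * horizon + intercept:.0f}")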
Presenter bio: Alexander Gilgur is a Data Scientist and Systems Analyst with over 20 years of experience in a wide variety of domains - Control Systems, Chemical Industry, Aviation, Semiconductor manufacturing, Information Technologies, and Networking - and a solid track record of implementing his innovations in production. He has authored and co-authored a number of know-hows, publications, and patents. Alex enjoys applying the beauty of Math and Statistics to solving capacity and performance problems and is interested in non-stationary processes, which make the core of IT problems today. Presently, he is a Network Data Scientist at Facebook and an occasional faculty member at UC Berkeley's MIDS program. He is also a father, a husband, a skier, a soccer player, a sport psychologist, a licensed soccer coach, a licensed professional engineer (PE), and a music aficionado. Alex's technical blog is at http://alexonsimanddata.blogspot.com.
Alexander Gilgur
pdf file

377 (PERF): PANEL: Emerging Technologies: Performance Engineering Implications

Room: Feliciana East
3:05 PANEL: Emerging Technologies: Performance Engineering Implications
PERF
Technology changes incessantly. Performance engineers today have to keep up to do their jobs efficiently, and there is a question as to the future of performance engineering: will the role stay the same, or change with the advent of DevOps and Site Reliability job titles? In this panel, we will walk through some of the key technologies below to understand their implications for performance engineering and the future roadmap for PE: Cloud, DevOps, Big Data, IoT, Machine Learning/AI. Panelists include: Todd Decapua, Andreas Grabner, Anoush Najarian
Presenter bio: Performance Engineering Architect
Presenter bio: Andreas has been working in software quality for the past 15 years helping companies from small startup to large enterprise figuring out why their current application falls short on quality and how to prevent quality issues for future development. He is a regular speaker at international conferences, meetups & user groups. He has done DevOps Boston, Velocity Santa Clara, Agile Testing Days, Star West or STPCon in the recent years. Besides being excited about software quality he is also an enthusiastic salsa dancer
Presenter bio: Anoush Najarian is a Software Engineering Manager at MathWorks in Natick, MA where she leads the MATLAB Performance Team. She holds Master's degrees in Computer Science and Mathematics from the University of Illinois at Urbana-Champaign, and an undergraduate degree in CS and applied math from the Yerevan State University in her native Armenia. Anoush has been serving on the CMG Board of Directors since 2014, and has served as the Social Media chair for the CMGimPACt conferences.
Presenter bio: Performance Engineering Architect. I have been active in the web application performance industry for 17+ years. Over the years, I've used a variety of performance/load and monitoring tools, and created performance test harnesses while honing and defining performance engineering methodologies. I also enjoy taking very technical information and evangelizing the content into easy-to-understand analogies. I've published a variety of syndicated blogs, while making a name for myself in this niche industry. All my content is derived from hands-on experience. I have held several performance engineer positions in industries spanning many verticals such as retail, financial services, insurance, gaming, and supply management, and worked for enterprise load tool companies as a sales engineer, in professional services, and as a blogger and technical evangelist.
Presenter bio: Elisabeth Stahl is an IBM Distinguished Engineer and has been working in IT Infrastructure Optimization for over 25 years. She is a member of the IBM Academy of Technology, IEEE Senior Member, Computer Measurement Group Program Chair and is on the Board of Directors at The Music Settlement. Elisabeth received a BA in Mathematics from the University of Pennsylvania and an MBA from NYU. Follow her on Twitter @ibmperformance .
Mohit VermaAndreas GrabnerAnoush NajarianRebecca ClinardElisabeth Stahl

Wednesday, November 8, 16:15 - 17:15

3K21 (INV): KEYNOTE: New Orleans: The Cradle of Civilized Drinking

Room: Louisiana Ballroom II
4:15 New Orleans: The Cradle of Civilized Drinking
INV
Coming Soon
Presenter bio: Chris McMillian is a New Orleans bartender and a co-founder of The Museum of the American Cocktail. Imbibe Magazine mentioned McMillian as one of the top 25 most influential cocktail personalities of the last century. McMillian, a fourth-generation bartender, has been the chief bartender at several New Orleans bars, including the Library Lounge at the Ritz-Carlton and Bar UnCommon. As a cocktail historian, McMillian is known for telling stories or reciting drink-themed poetry while making drinks. McMillian has been mentioned in many publications such as the New York Times and the Wall Street Journal, and has been a public speaker at institutions such as the Smithsonian.
Chris McMillian

Wednesday, November 8, 17:15 - 18:15

CONF: HAPPY HOUR Reception!

Parish Hall

Day 3...Done! Spend time with our Sponsors and network with fellow attendees. Make plans for dinner with old or new friends. Be sure to come back for a nightcap at 8:30 PM.

Wednesday, November 8, 19:00 - 21:00

CONF: Dinner with Peers

Monday and Wednesday evenings after the receptions - Go Cajun! Join attendees with similar interests or create a BOF at a bar and go out for dinner at some great restaurants in the area.

Thursday, November 9

Thursday, November 9, 08:00 - 08:30

CONF: Continental BREAKFAST

Louisiana Ballroom I

Thursday, November 9, 08:45 - 09:45

4K1 (EMT): KEYNOTE: Is Capacity Management Needed in the Cloud?

Room: Louisiana Ballroom II
8:45 Is Capacity Management Needed in the Cloud?
EMT
The cloud holds the promise of bottomless capacity, available instantly. Recently, Capital One has been shifting a significant portion of its workload to the public cloud. Kevin McLaughlin explores what capacity management looks like in the cloud, which old concepts still apply, which should be retired, and what new metrics become important in the process. Kevin also covers the importance of performance management and outlines what needs to be monitored as workloads transition to the cloud and what to monitor once a workload is fully in the cloud, as well as considerations for ensuring the legacy environment maintains sufficient capacity during the transition.
Presenter bio: Kevin McLaughlin has 15 years of experience in financial services technology, primarily in roles related to capacity and performance management. Kevin spent eight years on the business side of banking, where he held various analytic roles including management of Capital One's statistical model governance process. He currently serves as the director of Capital One's technology capacity management team and still serves as a model risk officer for the bank's analytic models. Kevin has a BS in ag economics from Virginia Tech and an MBA with a concentration in MIS from George Mason University.
Kevin McLaughlin

Thursday, November 9, 09:50 - 10:50

403 (PERF): Performance Evaluation of Heterogeneous Multi-Queues with Job Replication

Room: St. Landry
9:50 Performance Evaluation of Heterogeneous Multi-Queues with Job Replication
PERF
A system composed of n heterogeneous servers receives jobs to be processed. A front-end dispatcher submits d (d < n) copies of each job to the queue of each of the d chosen servers. The copy that completes first causes the cancellation of each of the other d-1 copies, which may be executing or waiting in a server's queue. Jobs are considered to belong to different classes. This paper evaluates job execution time statistics as a function of (1) the average job arrival rate from a Poisson process, (2) the distribution of job service times for each class, (3) the processing capacity of each server, and (4) the job replication policy.
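A toy Monte Carlo illustration of the cancel-on-first-completion idea, with hypothetical server rates and a replication factor of d = 2; it deliberately ignores queueing delays and cancellation overhead, which the paper's evaluation does account for:

    # Compare the service time of a single copy with the minimum of d copies
    # placed on randomly chosen heterogeneous servers.
    import random

    random.seed(42)
    server_rates = [1.0, 0.5, 2.0, 0.8, 1.5]    # heterogeneous service rates (jobs/sec)
    d = 2                                        # replication factor
    trials = 100_000

    single, replicated = 0.0, 0.0
    for _ in range(trials):
        chosen = random.sample(range(len(server_rates)), d)
        times = [random.expovariate(server_rates[s]) for s in chosen]
        single += times[0]          # what one copy on the first chosen server would take
        replicated += min(times)    # first of the d copies to complete wins

    print(f"mean service time, one copy:   {single / trials:.3f} s")
    print(f"mean service time, {d} copies: {replicated / trials:.3f} s")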
Presenter bio: Daniel Menasce is a University Professor of Computer Science at George Mason University and was the Senior Associate Dean of its School of Engineering from 2005-2012. Menasce holds a PhD in Computer Science from the University of California at Los Angeles. He is the recipient of the 2001 A.A. Michelson Award from CMG, a Fellow of the ACM and of the IEEE, a recipient of the 2017 Outstanding Faculty Award from the State Council of Higher Education of Virginia, and the author of over 250 technical papers that received over 10,500 citations. He is also the author of five books published by Prentice Hall and translated into several languages.
Daniel A Menasce
pdf file

405 (MFR): 2017 BEST PAPER: Achieving CPU (& MLC) Savings through Optimizing Processor Cache

Room: LaFourche
9:50 Achieving CPU (& MLC) Savings through Optimizing Processor Cache
MFR
Customer experiences with z13 processors have confirmed that delivered capacity is more dependent than ever before on effective utilization of processor cache. Attendees will learn how to interpret the enlightening metrics available from the SMF 113 records and how to leverage those metrics to optimize their environments and reduce CPU consumption and MLC software expense. The presentation incorporates findings gleaned from reviewing detailed processor cache data from 45 sites across 5 countries. Insights into the potential impact of various tuning actions will be brought to life with data from numerous real-life case studies. This session was awarded a "CMG 2017 Best Paper" for this conference, and has also been updated to reflect considerations for the recently released z14 processor models.
Presenter bio: Todd is a Senior z/OS Performance Consultant for IntelliMagic. His primary area of interest over the course of his 39-year IT career has been z/OS systems performance. Before joining IntelliMagic, Todd spent 26 years at USAA in a variety of roles including mainframe architect and leading their highly successful mainframe software expense reduction initiative. Having been thoroughly impressed as a customer with the visibility IntelliMagic Vision provides into z/OS systems infrastructure, he joined IntelliMagic in 2016 and now helps customers leverage that visibility with particular focus on reducing their MLC software expense. Todd is a highly-regarded industry speaker and has given award winning presentations at events such as SHARE and CMG.
Todd Havekost
pdf file

Thursday, November 9, 10:50 - 11:00

CONF: BEVERAGE BREAK

Thursday, November 9, 11:00 - 12:00

412 (CAP): Incorporating Weather Data into Capacity Planning Analysis

Room: Beauregard
11:00 Incorporating Weather Data into Capacity Planning Analysis
CAP
Incorporating 'non-capacity' data into capacity planning efforts expands the capacity analyst's view and understanding. Business metrics versus hardware metrics make compelling capacity models, but external forces may have a measurable effect as well. This is a conversation about incorporating weather data to determine if and how weather conditions affect customer interaction, employee behavior, and infrastructure utilization, and then creating a workable model to predict the effects. We discovered that day-to-day weather conditions did not have the impact we expected, but severe weather and cultural events are indeed a key driver for VPN (virtual private network) and VDI (virtual desktop infrastructure) demand. Our weather models helped successfully predict a prolonged 8x spike, avoiding a significant outage, and also predicted the impact of several cultural events on the work-from-home infrastructure.
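A hedged sketch of the kind of model the talk describes: regress remote-access demand on a business-activity indicator and a severe-weather flag to see whether weather adds explanatory power. The feature names, coefficients, and data are invented for illustration:

    # Regress VPN session counts on scheduled headcount and a severe-weather flag.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    n = 365
    employees_scheduled = rng.normal(10_000, 500, n)
    severe_weather = rng.binomial(1, 0.05, n)            # 1 on severe-weather days
    vpn_sessions = (0.3 * employees_scheduled
                    + 4_000 * severe_weather
                    + rng.normal(0, 300, n))

    X = np.column_stack([employees_scheduled, severe_weather])
    model = LinearRegression().fit(X, vpn_sessions)

    print("Coefficients (employees, severe weather):", np.round(model.coef_, 1))
    print("Predicted sessions on a severe-weather day:",
          int(model.predict([[10_000, 1]])[0]))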
Presenter bio: Mr. Ben is a senior consultant at Moviri, the original developer of what has become BMC TrueSight Capacity Optimization. Before consulting, he was a power user of the capacity tool, presenting several sessions at BMC's Global User Conferences. He has presented at security conferences and audit conferences, was published in a military journal, and was even mentioned in a tech article in the New York Times back in April 1996. He has an eclectic background in US Navy Submarines, IT security & audit, IT management and now capacity management. His presentations are entertaining and actionable with the express intent to help you suck less. @MrBenHoney
Benjamin Davies
pdf file

413 (PERF): Rules of Thumb for Response Time Percentiles: How Risky are they?

Room: St. Landry
11:00 Rules of Thumb for Response Time Percentiles: How Risky are they?
PERF
Whether externally mandated or internally tracked, the enterprise relies on governance of application service response time objectives. In many cases, achieving service requirements in terms of the average response time may not deliver an experience that delights the consumer, and the consumer may request a deeper level of governance. Service providers want to achieve the promised objectives while avoiding over-provisioning. This paper explores rules of thumb that can be applied to estimate 90th or 95th percentiles for service response times, based on the measured or predicted mean. The risk assessment behind these recommendations is described in the paper. Various types of networks were modeled and analyzed. Even though classical queueing models rely on strict assumptions (which are rarely met in the real world), it was found that the classical M/M/1 model provided a useful upper bound. Another function was evaluated for tighter accuracy.
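The classical M/M/1 result that underlies such rules of thumb: the response time of an M/M/1 queue is exponentially distributed, so its p-th percentile is -ln(1 - p) times the mean (roughly 2.3x for the 90th and 3.0x for the 95th percentile). The snippet below just reproduces that calculation; the paper assesses how safe it is as an upper bound for other network types:

    # p-th percentile of an exponentially distributed response time.
    import math

    def mm1_percentile(mean_response_time: float, p: float) -> float:
        """Percentile of an exponential response-time distribution: -ln(1-p) * mean."""
        return -math.log(1.0 - p) * mean_response_time

    mean_r = 0.200                     # measured or predicted mean response time, seconds
    for p in (0.90, 0.95):
        print(f"p{int(p * 100)} estimate: {mm1_percentile(mean_r, p) * 1000:.0f} ms "
              f"({-math.log(1 - p):.2f} x mean)")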
Presenter bio: Dr. Salsburg is an independent consultant. Previously, Dr. Salsburg was a Distinguished Engineer and Chief Architect for Unisys Technology Products. He was founder and president of Performance & Modeling, Inc. Dr. Salsburg has been awarded three international patents in the area of infrastructure performance modeling algorithms and software. In addition, he has published over 70 papers and has lectured world-wide on the topics of Real-Time Infrastructure, Cloud Computing and Infrastructure Optimization. In 2010, the Computer Measurement Group awarded Dr. Salsburg the A. A. Michelson Award.
Presenter bio: Co-founder and Chief Scientist, BGS Systems, 1975 - 1998
Michael SalsburgJeff Buzen
pdf file

414 (EMT): Cloud Capacity Management

Room: Pointe Coupe
11:00 Cloud Capacity Management
EMT
Capacity management continues to evolve as a practice with each new environment in IT. The inclusion of cloud infrastructure within IT requires the capacity management discipline to be extended. There are several variables in dealing with cloud capacity management; many of them depend on where the cloud infrastructure is hosted and the type of control a user has over the environment. On-premise hosting, hybrid hosting, and cloud-provider hosting all fit into the equation. The purpose of this presentation is to discuss the variables to consider when extending capacity management to the cloud (a small metric-collection sketch follows the list below):
• Discussion of capacity management in general
• Discussion of the variables introduced by the cloud
• Overview of the most prominent cloud offerings
• How to plan to move your environment into the cloud
• What metrics you need to capture for the cloud infrastructure
• Reporting examples
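As one concrete example of the metric-capture topic above, the sketch below pulls a week of hourly CPU utilization for a single instance, assuming an AWS environment with boto3 installed and credentials configured; the instance id and time window are placeholders:

    # Collect hourly CPU utilization from CloudWatch for capacity reporting.
    import datetime
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(days=7)

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=start,
        EndTime=end,
        Period=3600,                     # hourly samples
        Statistics=["Average", "Maximum"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"],
              f'{point["Average"]:.1f}% avg',
              f'{point["Maximum"]:.1f}% max')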
Presenter bio: Charles Johnson has been in the Information Technology industry for over 30 years. This has included working at a number of Fortune 500 organizations in different business sectors, including insurance, financial and automotive. Charles has been involved in Performance and Capacity for zOS for the majority of his career, both as a technician and manager. Charles is currently a Principal Consultant with Metron-Athene, Inc., a worldwide software organization specializing in Performance and Capacity Management.
Charles W. Johnson, Jr.
pdf file

415 (MFR): Performance Insights for the Newest areas of your z/OS Infrastructure

Room: LaFourche
11:00 Performance Insights for the Newest areas of your z/OS Infrastructure
MFR
IBM continues to invest in new features in its mainframe product line, and these new capabilities are reflected as new content in the SMF and RMF records. In this presentation, we will cover some of the new measurement capabilities and why they are important. Significant new developments covered in the presentation are the new CICS, IMS, and DB2 transaction support in the RMF type 72 transaction records. These were originally created to support mobile workloads, but are also very useful for transaction performance reporting. The SMT support for zIIPs is a starting point for multithreading support on the mainframe, and it comes with new information for CPs and zIIPs in both processor (RMF 70/72) and job (SMF 30) records. Not related to new features, but important to be aware of, is the new system-wide reporting for locks, latches, and enqueues: RMF has always provided ENQ reporting, and now also provides similar details for locks and latches.
Presenter bio: Dr. Gilbert Houtekamer started his career in computer performance analysis when obtaining his Ph.D. on MVS I/O from the Delft University of Technology in the 1980s. He now has over 25 years of experience in the field of z/OS and storage performance and obtained a place in the ‘Mainframe Hall of Fame'. Gilbert is founder and managing director of IntelliMagic, delivering software that applies built-in intelligence to measurement data for storage and z/OS to help customers manage performance, increase efficiency and predict availability issues.
Gilbert Houtekamer
pdf file

Thursday, November 9, 12:00 - 13:00

CONF: LUNCH

Louisiana Ballroom I

Thursday, November 9, 13:00 - 14:00

432 (CAP): Multivariate IT Capacity Modeling

Room: Beauregard
1:00 Multivariate IT Capacity Modeling
CAP
Wouldn't it be great to be able to make a capacity model of an application that has multiple components and functions distributed over several devices, where those devices are shared with unrelated applications? This white paper examines this very real-world situation. The results achieved with actual customer data show that multivariate capacity modeling is not only achievable but relatively easily performed with COST software, tools, and techniques. In this real-life example, an IT organization wants to model a two-fold increase in logins in the next quarter, resulting in the growth of two specific workloads, whose usage increases by 350% and 450%, while the rest of the workloads remain essentially the same. The intuitive expectation was that the existing equipment would not accommodate the request, so the IT organization needed not only to validate that expectation but also to produce options driven by actual data and detailed analysis. Using existing tools, available data, and skillful application of modeling and analysis techniques, the IT organization was able to create 4 different scenarios based on budget, risk, IT resources, and business impact.
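A hedged sketch of the multivariate idea: regress a shared device's CPU utilization on the volumes of the several workloads that use it, then replay a growth scenario in which two workloads grow to 350% and 450% of current volume while the third stays flat. The data, workload mix, and coefficients are invented for illustration and are not the customer data from the paper:

    # Multivariate regression of shared-device CPU on per-workload volumes.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n = 500
    volumes = rng.uniform(100, 1000, size=(n, 3))           # login, search, batch volumes
    cpu = volumes @ np.array([0.02, 0.01, 0.03]) + rng.normal(0, 2, n)

    model = LinearRegression().fit(volumes, cpu)

    current = np.array([800, 600, 400])
    scenario = current * np.array([3.5, 4.5, 1.0])          # grow two workloads only
    print(f"Current CPU estimate:  {model.predict([current])[0]:.1f}%")
    print(f"Scenario CPU estimate: {model.predict([scenario])[0]:.1f}%")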
Presenter bio: An IT professional with 10+ years of experience in the industry: innovator, leader, influencer, analytics and data engineering enthusiast. My experience is characterized by a strong link between academia and the IT industry: I started working as a Politecnico di Milano intern for the Italian Space Agency and then moved into the consulting business, working with the most IT-intensive firms in the world. As an architect and team leader I delivered dozens of successful initiatives for several Fortune 100 companies worldwide and mentored several young stars. As Head of Operations, I lead an international team of engineers, support the hiring process, and look for innovation opportunities, serving as the main point of contact for the most important universities in the country. I am responsible for fulfilling the company's expected revenue and growth results, reporting directly to the Executive Board.
Presenter bio: Mr. Ben is a senior consultant at Moviri, the original developer of what has become BMC TrueSight Capacity Optimization. Before consulting, he was a power user of the capacity tool, presenting several sessions at BMC's Global User Conferences. He has presented at security conferences and audit conferences, was published in a military journal, and was even mentioned in a tech article in the New York Times back in April 1996. He has an eclectic background in US Navy Submarines, IT security & audit, IT management and now capacity management. His presentations are entertaining and actionable with the express intent to help you suck less. @MrBenHoney
Andrea VascoBenjamin Davies
pdf file

433 (MFR): The RNI-based LSPR and the Latest z Systems Performance Brief

Room: St. Landry
1:00 The RNI-based LSPR and the Latest z Systems Performance Brief
MFR
With the introduction of z196 in 2010, the Large System Performance Reference (LSPR) was significantly enhanced to be based on a measure of a workload's intensity of use of a processor's memory hierarchy. Since then the methodology has seen much validation and continues to be used up through the latest z Systems processors. This session will discuss the theory behind the Relative Nest Intensity (RNI) metric and its basis on data gathered using CPU MF. Its application to capacity sizing will be illustrated by contrasting the performance of the latest z Systems processors.
Presenter bio: David has been a part of the IBM Z Hardware Performance and Design teams for 17 years. In his current role he is a client-facing lab representative for system performance inquiries and situations worldwide and co-develops the Large Systems Performance Reference (LSPR). He co-developed five generations of System z core performance models that have helped to shape the hardware designs and supply projection data to brand/marketing. He is a co-author of ~30 patents.
David Hutton
pdf file

434 (EMT): Dynamic Performance Management of Big Data Clusters

Room: Pointe Coupe
1:00 Dynamic Performance Management of Big Data Clusters
EMT
In complex multi-tier, distributed, parallel processing environments like Big Data clusters, Teradata, Oracle Exadata, and DB2, real-time and batch workloads concurrently compete for computing resources. Big Data subsystems like YARN, Spark, and Cassandra have rules controlling resource allocation and the performance of each of the dynamic workloads. The rules are static, and manually changing them is risky: a change can improve performance for one workload but negatively affect others. This presentation shows work in progress on building a Recommender that dynamically optimizes and changes the rules controlling resource allocation to continuously meet Service Level Goals (SLGs) for critical Big Data workloads.
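A highly simplified sketch of the feedback loop such a Recommender might implement: grow the resource share of a workload that misses its Service Level Goal and shrink the share of one that is comfortably ahead. The workload names, SLG values, and 5% step size are hypothetical, and real YARN/Spark/Cassandra rule changes would go through their own configuration interfaces:

    # Adjust normalized resource shares based on SLG misses and surpluses.
    def recommend_shares(shares: dict, measured_resp: dict, slg: dict, step: float = 0.05) -> dict:
        new = dict(shares)
        for wl, goal in slg.items():
            if measured_resp[wl] > goal:            # SLG miss: grow this workload's share
                new[wl] = min(1.0, new[wl] + step)
            elif measured_resp[wl] < 0.7 * goal:    # well ahead of goal: release share
                new[wl] = max(0.05, new[wl] - step)
        total = sum(new.values())
        return {wl: s / total for wl, s in new.items()}   # renormalize to 100%

    shares = {"critical_batch": 0.4, "adhoc_queries": 0.3, "streaming": 0.3}
    measured = {"critical_batch": 95.0, "adhoc_queries": 20.0, "streaming": 8.0}
    goals = {"critical_batch": 60.0, "adhoc_queries": 120.0, "streaming": 10.0}
    print(recommend_shares(shares, measured, goals))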
Presenter bio: Boris Zibitsker is a specialist in Predictive Analytics. As CEO of BEZNext, he manages development of new technologies and consults companies on applying predictive and prescriptive analytics for optimization of business and IT. As Founder, CTO and Chairman of BEZ Systems, he managed development of the capacity management tools for Teradata, Oracle, DB2 and SQL Server until the company was sold to Compuware. As CTO of Modeling and Optimization at Compuware he developed algorithms for detecting and predicting performance and availability issues. As an Adjunct Associate Professor, Boris taught graduate courses at DePaul University in Chicago and seminars at Northwestern University, the University of Chicago and the Relational Institute. He also taught seminars in the USA, South America, Europe, Asia and Africa. He is the author of many papers and organizer of Big Data Predictive Analytics training and certification.
Boris Zibitsker

435 (CAP): Performance Aware Capacity Provisioning and Management

Room: LaFourche
1:00 Performance Aware Capacity Provisioning and Management
CAP
Currently, when a data center client requests storage capacity, there are no limits on the performance expectations associated with that capacity. This has resulted in numerous storage performance shortfalls in both dedicated and leveraged environments (a small allocation sketch follows below):
* The problem is exacerbated by the introduction of very large, slow magnetic devices, which significantly decrease the performance density (IOPS/GB) of the provisioned capacity.
* In dedicated installations, the available performance is often less than the client's needs, resulting in over-provisioning of space to make up the difference and a higher deployment cost.
* In leveraged (shared) installations the problem is often amplified by the encroachment of performance demand on shared resources such as cache and CPUs.
* Initial sizing of tiered storage devices can also be accurately achieved by applying performance density to the workload under consideration.
This paper describes the use of performance density (IOPS/GB) combined with capacity (GB) as the units of allocation for client storage.
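The allocation rule argued for above, in miniature: size an allocation by both capacity (GB) and performance density (IOPS/GB) and provision whichever requirement is larger. The tier density and client figures are illustrative:

    # Provision by the larger of the capacity requirement and the performance
    # requirement expressed in GB via the tier's performance density.
    def provisioned_gb(capacity_gb: float, required_iops: float,
                       perf_density_iops_per_gb: float) -> float:
        gb_for_performance = required_iops / perf_density_iops_per_gb
        return max(capacity_gb, gb_for_performance)

    # Example: a client asks for 10 TB and 50,000 IOPS on a tier delivering 3 IOPS/GB.
    need_gb, need_iops, density = 10_000, 50_000, 3.0
    alloc = provisioned_gb(need_gb, need_iops, density)
    print(f"Provision {alloc:,.0f} GB "
          f"({'performance' if alloc > need_gb else 'capacity'}-bound allocation)")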
Presenter bio: Chuck is a 30-year veteran of computer and storage subsystem performance analysis. He is responsible for the development and delivery of DXC storage performance training. A member of TGG, the SNIA SSI and GSI Technical Work Groups, and the Storage Performance Council, he holds a BS in ME and an MS in Computer Science.
Chuck Paridon
pdf file