Case Studies in Finance Managing for Corporate Value Creation
Eighth Edition
Robert F. Bruner
Kenneth M. Eades
Michael J. Schill

Published by McGraw-Hill Education, 2 Penn Plaza, New York, NY 10121. Copyright
© 2018 by McGraw-Hill Education. All rights reserved. Printed in the United States of
America. Previous editions © 2014, 2002, and 1989. No part of this publication may be
reproduced or distributed in any form or by any means, or stored in a database or
retrieval system, without the prior written consent of McGraw-Hill Education,
including, but not limited to, in any network or other electronic storage or transmission,
or broadcast for distance learning.
Some ancillaries, including electronic and print components, may not be available to
customers outside the United States.
This book is printed on acid-free paper.
1 2 3 4 5 6 7 8 9 LCR 21 20 19 18 17
ISBN 978-1-259-27719-1
MHID 1-259-27719-4
Portfolio Manager: Tim Vertovec
Senior Product Developer: Jennifer Upton
Marketing Manager: Trina Maurer
Content Project Managers: Melissa M. Leick, Karen Jozefowicz
Buyer: Susan K. Culbertson
Content Licensing Specialist: Beth Thole
Compositor: Aptara, Inc.
All credits appearing on page or at the end of the book are considered to be an
extension of the copyright page.
Library of Congress Cataloging-in-Publication Data
Names: Bruner, Robert F., 1949- author. | Eades, Kenneth M., author. | Schill, Michael J., author.
Title: Case studies in finance: managing for corporate value creation / Robert F. Bruner,
Kenneth M. Eades,
Michael J. Schill.
Description: Eighth Edition. | Dubuque, IA : McGraw-Hill Education, [2018] | Series:
The McGraw-Hill/Irwin series in finance, insurance, and real estate | Revised edition of
the authors’ Case studies in finance, [2014]
Identifiers: LCCN 2017023496 | ISBN 9781259277191 (alk. paper) | ISBN 1259277194
(alk. paper)
Subjects: LCSH: Corporations—Finance—Case studies. | International business
Finance—Case studies.
Classification: LCC HG4015.5 .B78 2017 | DDC 658.15—dc23 LC record available
The Internet addresses listed in the text were accurate at the time of publication. The
inclusion of a website does not indicate an endorsement by the authors or McGraw-Hill
Education, and McGraw-Hill Education does not guarantee the accuracy of the
information presented at these sites.
The McGraw-Hill Education Series in Finance, Insurance, and
Real Estate
Block, Hirt, and Danielsen
Foundations of Financial Management
Sixteenth Edition
Brealey, Myers, and Allen
Principles of Corporate Finance
Twelfth Edition
Brealey, Myers, and Allen
Principles of Corporate Finance, Concise
Second Edition
Brealey, Myers, and Marcus
Fundamentals of Corporate Finance
Ninth Edition
Brooks
FinGame Online 5.0
Bruner, Eades, and Schill
Case Studies in Finance: Managing for Corporate Value Creation
Eighth Edition
Cornett, Adair, and Nofsinger
Finance: Applications and Theory
Fourth Edition
Cornett, Adair, and Nofsinger
M: Finance
Fourth Edition
DeMello
Cases in Finance
Third Edition
Grinblatt (editor)
Stephen A. Ross, Mentor: Influence through Generations
Grinblatt and Titman
Financial Markets and Corporate Strategy
Second Edition
Higgins
Analysis for Financial Management
Twelfth Edition
Ross, Westerfield, Jaffe, and Jordan
Corporate Finance
Eleventh Edition
Ross, Westerfield, Jaffe, and Jordan
Corporate Finance: Core Principles and Applications
Fifth Edition
Ross, Westerfield, and Jordan
Essentials of Corporate Finance
Ninth Edition
Ross, Westerfield, and Jordan
Fundamentals of Corporate Finance
Twelfth Edition
Shefrin
Behavioral Corporate Finance: Decisions that Create Value
Second Edition
Bodie, Kane, and Marcus
Essentials of Investments
Tenth Edition
Bodie, Kane, and Marcus
Investments
Eleventh Edition
Hirt and Block
Fundamentals of Investment Management
Tenth Edition
Jordan, Miller, and Dolvin
Fundamentals of Investments: Valuation and Management
Eighth Edition
Stewart, Piros, and Heisler
Running Money: Professional Portfolio Management
First Edition
Sundaram and Das
Derivatives: Principles and Practice
Second Edition
Financial Institutions and Markets
Rose and Hudgins
Bank Management and Financial Services
Ninth Edition
Rose and Marquis
Financial Institutions and Markets
Eleventh Edition
Saunders and Cornett
Financial Institutions Management: A Risk Management Approach
Ninth Edition
Saunders and Cornett
Financial Markets and Institutions
Seventh Edition
Eun and Resnick
International Financial Management
Eighth Edition
Brueggeman and Fisher
Real Estate Finance and Investments
Sixteenth Edition
Ling and Archer
Real Estate Principles: A Value Approach
Fifth Edition
Financial Planning and Insurance
Allen, Melone, Rosenbloom, and Mahoney
Retirement Plans: 401(k)s, IRAs, and Other Deferred Compensation Approaches
Tenth Edition
Personal Financial Planning
Second Edition
Harrington and Niehaus
Risk Management and Insurance
Second Edition
Kapoor, Dlabay, Hughes, and Hart
Focus on Personal Finance: An Active Approach to Help You Achieve Financial Literacy
Sixth Edition
Kapoor, Dlabay, Hughes, and Hart
Personal Finance
Twelfth Edition
Walker and Walker
Personal Finance: Building Your Future
Second Edition
In dedication to
our wives
Barbara M. Bruner
Kathy N. Eades
And to the memory of
Mary Ann H. Schill
and to our children
About the Authors
Robert F. Bruner is University Professor, Distinguished Professor of Business
Administration and Charles C. Abbott Professor of Business Administration and Dean
Emeritus of the Darden Graduate School of Business Administration at the University of
Virginia. He has taught and written in various areas, including corporate finance,
mergers and acquisitions, investing in emerging markets, innovation, and technology
transfer. In addition to Case Studies in Finance, his books include Finance Interactive,
multimedia tutorial software in Finance (Irwin/McGraw-Hill 1997), The Portable MBA
(Wiley 2003), Applied Mergers and Acquisitions (Wiley, 2004), Deals from Hell:
M&A Lessons that Rise Above the Ashes (Wiley, 2005) and The Panic of 1907 (Wiley,
2007). He has been recognized in the United States and Europe for his teaching and case
writing. BusinessWeek magazine cited him as one of the “masters of the MBA
classroom.” He is the author or co-author of over 400 case studies and notes. His
research has been published in journals such as Financial Management, Journal of
Accounting and Economics, Journal of Applied Corporate Finance, Journal of
Financial Economics, Journal of Financial and Quantitative Analysis, and Journal of
Money, Credit, and Banking. Industrial corporations, financial institutions, and
government agencies have retained him for counsel and training. He has been on the
faculty of the Darden School since 1982, and has been a visiting professor at Harvard,
Columbia, INSEAD, and IESE. Formerly he was a loan officer and investment analyst
for First Chicago Corporation. He holds the B.A. degree from Yale University and the
M.B.A. and D.B.A. degrees from Harvard University. Copies of his papers and essays
may be obtained from his website. He may be reached via email.
Kenneth M. Eades is Professor of Business Administration and Area Coordinator of
the Finance Department of the Darden Graduate School of Business Administration at
the University of Virginia. He has taught a variety of corporate finance topics, including
capital structure, dividend policy, risk management, capital investments and firm
valuation. His research interests are in the area of corporate finance where he has
published articles in The Journal of Finance, Journal of Financial Economics,
Journal of Financial and Quantitative Analysis, and Financial Management. In
addition to Case Studies in Finance, his books include The Portable MBA (Wiley
2010), Finance Interactive, multimedia tutorial software in Finance (Irwin/McGraw-
Hill 1997) and Case Studies in Financial Decision Making (Dryden Press, 1994). He
has authored or co-authored over 70 case studies as well as a web-based, interactive
tutorial on the pricing of financial derivatives. He has received the Wachovia Award for
Excellence in Teaching Materials and the Wachovia Award for Excellence in Research.
Mr. Eades is active in executive education programs at the Darden School and has
served as a consultant to a number of corporations and institutions, including many
commercial banks and investment banks, Fortune 500 companies, and the Internal
Revenue Service. Prior to joining Darden in 1988, Professor Eades was a
member of the faculties at The University of Michigan and the Kellogg School
of Management at Northwestern University. He has a B.S. from the University of
Kentucky and a Ph.D. from Purdue University. He may be reached via his website or by email.
Michael J. Schill is Professor of Business Administration at the Darden Graduate
School of Business Administration at the University of Virginia where he teaches
corporate finance and investments. His research spans empirical questions in corporate
finance, investments, and international finance. He is the author of numerous articles
that have been published in leading finance journals such as Journal of Business,
Journal of Finance, Journal of Financial Economics, and Review of Financial
Studies, and cited by major media outlets such as The Wall Street Journal. He has been
on the faculty of the Darden School since 2001 and was previously with the University
of California, Riverside, as well as a visiting professor at Cambridge and Melbourne.
He is the current course head for Darden’s core MBA finance course. He is the author
or co-author of over 40 cases and technical notes, as well as a financial market
simulation entitled Bond Trader. Prior to his doctoral work, he was a consultant with
Marakon Associates in Stamford and London. He received a B.S. degree from Brigham
Young University, an M.B.A. from INSEAD, and a Ph.D. from the University of Washington.
More details are available from his website, and he may be reached via email.
Contents
Dedication v
About the Authors vi
Contents viii
Foreword xi
Preface xii
Note to the Student: How To Study and Discuss Cases xxiii
Ethics in Finance xxx
1 Setting Some Themes
1 Warren E. Buffett, 2015 To think like an investor 3
2 The Battle for Value, 2016: FedEx Corp. vs. United Parcel Service, Inc. Value
creation and economic profit 23
3 Larry Puglia and the T. Rowe Price Blue Chip Growth Fund Market efficiency
4 Genzyme and Relational Investors: Science and Business Collide? Value creation, business strategy, and activist investors 63
2 Financial Analysis and Forecasting
5 Business Performance Evaluation: Approaches for Thoughtful Forecasting Financial forecasting principles 89
6 The Financial Detective, 2016 Financial ratio analysis 107
7 Whole Foods Market: The Deutsche Bank Report Financial performance forecasting 113
8 Horniman Horticulture Financial forecasting and bank financing 127
9 Guna Fibres, Ltd. Forecasting seasonal financing needs 133
3 Estimating the Cost of Capital
10 “Best Practices” in Estimating the Cost of Capital: An Update Estimating the cost of capital 1
11 Roche Holdings AG: Funding the Genentech Acquisition Cost of debt capital
12 H. J. Heinz: Estimating the Cost of Capital in Uncertain Times Cost of capital for the firm 189
13 Royal Mail plc: Cost of Capital Cost of capital for the firm 197
14 Chestnut Foods Cost of capital for multi-division firm 207
4 Capital Budgeting and Resource Allocation
15 Target Corporation Multifaceted capital investment decisions 219
16 The Investment Detective Investment criteria and discounted cash flow
17 Centennial Pharmaceutical Corporation Valuation of earnout plan 241
18 Worldwide Paper Company Analysis of an expansion investment 249
19 Fonderia del Piemonte S.p.A. Capital investment decision 253
20 Victoria Chemicals plc (A): The Merseyside Project Relevant cash flows
21 Victoria Chemicals plc (B): Merseyside and Rotterdam Projects Mutually exclusive investment opportunities 265
22 The Procter & Gamble Company: Investment in Crest Whitestrips Advanced Seal
Scenario analysis in a project decision 273
23 Jacobs Division 2010 Strategic planning 285
24 University of Virginia Health System: The Long-Term Acute Care Hospital Project
Analysis of an investment in a not-for-profit organization 293
25 Star River Electronics Ltd. Capital project analysis and forecasting 30
5 Management of the Firm’s Equity: Dividends and Repurchases
26 Rockboro Machine Tools Corporation Dividend payout decision 313
27 EMI Group PLC Dividend policy 329
28 Autozone, Inc. Dividend and stock buyback decisions 347
6 Management of the Corporate Capital Structure
29 An Introduction to Debt Policy and Value Effects of debt tax shields 36
30 M&M Pizza Capital structure in a frictionless market 369
31 Structuring Corporate Financial Policy: Diagnosis of Problems and Evaluation of
Strategies Concepts in setting financial policy 373
32 California Pizza Kitchen Optimal leverage 391
33 Dominion Resources: Cove Point Project funding and capital structure
34 Nokia OYJ: Financing the WP Strategic Plan Corporate funding alternatives
35 Kelly Solar Debt financing negotiation 449
36 J. C. Penney Company Liquidity management 453
37 Horizon Lines, Inc. Financial distress/restructuring/bankruptcy 467
7 Analysis of Financing Tactics: Leases, Options, and Foreign Currency
38 Baker Adhesives Hedging foreign currency cash flows 483
39 Vale SA Debt financing across borders 489
40 J&L Railroad Risk management and hedging commodity risk 501
41 WNG Capital, LLC Economics of lease financing 513
42 MoGen, Inc. Convertible bond valuation and financial engineering 52
8 Valuing the Enterprise: Acquisitions and Buyouts
43 Methods of Valuation for Mergers and Acquisitions Valuation principles
44 Medfield Pharmaceuticals Valuing assets in place 559
45 American Greetings Firm valuation in stock repurchase decision 571
46 Ferrari: The 2015 Initial Public Offering Initial public offering valuation
47 Rosetta Stone: Pricing the 2009 IPO Initial public offering valuation 6
48 Sun Microsystems Valuing a takeover opportunity 623
49 Carter International Acquisition valuation and financing 645
50 DuPont Corporation: Sale of Performance Coatings Business Unit Divestiture 657
51 OutReach Networks: First Venture Round Valuation of early stage company
52 Sanofi-Aventis’s Tender Offer for Genzyme Corporate acquisition 687
53 Delphi Corporation Corporate bankruptcy 715
54 Flinder Valves and Controls Inc. Acquisition negotiation 731
Foreword
As I think about developing the next generation of leaders in business and finance, I
naturally reflect on my own path. My career in business has taught some profound
lessons—and so did my experience at the University of Virginia’s Darden School of
Business. Both life experience and school learning are critical components in the
development of any leader. For that reason, I have supported wholeheartedly higher
education as the path toward a promising future.
As the world keeps changing, higher education must continually adapt. Practices,
processes, and business models that were once popular have faded. At the same time,
the field of Finance has witnessed dramatic changes, including the advent of new
valuation models, the rise of new markets and institutions, the invention of new financial
instruments, the impact of new information technologies, and growing globalization. In
this environment, we must think critically about the changing world, pay attention to new
ideas, and adapt in sensible ways. Business schools play a critical role in the change
process: theory suggests new approaches, empirical research tests them, and classroom
teaching transfers knowledge. The development of new teaching materials is vital to that process.
Case studies in Finance have evolved markedly over the past 40 years. This shift
reflects the revolutionary changes in markets and organization, as well as the many
significant advances in theory and empirical research. Because case studies are an
invaluable teaching tool, it is critical that the body of cases grows with the practice of
and scholarship in Finance.
I am pleased to introduce the reader to the eighth edition of Case Studies in
Finance, by Robert F. Bruner, Kenneth M. Eades, and Michael J. Schill. These
professors exemplify the practice-oriented scholar who understands the economic
foundations of Finance and the extensive varieties of its practice. They translate
business phenomena into material that is accessible both to experienced practitioners
and novices in Finance.
This book is a valuable contribution to the teaching materials available in the field
of Finance. First, these cases link managerial decisions to capital markets and investor
expectations. At the core of most is a valuation task that requires students to look to
financial markets to resolve the problem. Second, these cases feature a wide range of
contemporary and relevant problems, including examples in real and financial options,
agency conflicts, financial innovation, investing in emerging markets, and corporate
control. They also cover classic topics in Finance, including dividend policy, the mix of
debt and equity financing, the estimation of future financial requirements, and the choice
between mutually exclusive investments. Finally, these cases invite students to harness
technology they will use in the workplace to develop key insights.
I am confident this collection will help students, scholars, and practitioners sharpen
their decision-making ability, and advance the development of the next generation of
leaders in Finance.
John R. Strangfeld
Chairman and Chief Executive Officer
Prudential Financial, Inc.
May 3, 2017
Newark, New Jersey
Preface
The inexplicable is all around us. So is the incomprehensible. So is the unintelligible. Interviewing Babe Ruth
in 1928, I put it to him “People come and ask what’s your system for hitting home runs—that so?” “Yes,” said
the Babe, “and all I can tell ’em is I pick a good one and sock it. I get back to the dugout and they ask me
what it was I hit and I tell ’em I don’t know except it looked good.”
—Carl Sandburg
Managers are not confronted with problems that are independent of each other, but with dynamic situations
that consist of complex systems of changing problems that interact with each other. I call such situations messes
. . . Managers do not solve problems: they manage messes.
—Russell Ackoff
Orientation of the Book
Practitioners tell us that much in finance is inexplicable, incomprehensible, and
unintelligible. Like Babe Ruth, their explanations for their actions often amount to “I
pick a good one and sock it.” Fortunately for a rising generation of practitioners, tools
and concepts of Modern Finance provide a language and approach for excellent
performance. The aim of this book is to illustrate and exercise the application of these
tools and concepts in a messy world.
Focus on Value
The subtitle of this book is Managing for Corporate Value Creation. Economics
teaches us that value creation should be an enduring focus of concern because value is
the foundation of survival and prosperity of the enterprise. The focus on value also
helps managers understand the impact of the firm on the world around it. These cases
harness and exercise this economic view of the firm. It is the special province of
finance to highlight value as a legitimate concern for managers. The cases in this book
exercise valuation analysis over a wide range of assets, debt, equities, and options, and
a wide range of perspectives, such as investor, creditor, and manager.
Linkage to Capital Markets
An important premise of these cases is that managers should take cues from the capital
markets. The cases in this volume help the student learn to look at the capital markets
in four ways. First, they illustrate important players in the capital markets such as
individual exemplar Warren Buffett and institutions like investment banks,
commercial banks, rating agencies, hedge funds, merger arbitrageurs, private
equity firms, lessors of industrial equipment, and so on. Second, they exercise the
students’ abilities to interpret capital market conditions across the economic cycle.
Third, they explore the design of financial securities, and illuminate the use of exotic
instruments in support of corporate policy. Finally, they help students understand the
implications of transparency of the firm to investors, and the impact of news about the
firm in an efficient market.
Respect for the Administrative Point of View
The real world is messy. Information is incomplete, arrives late, or is reported with
error. The motivations of counterparties are ambiguous. Resources often fall short.
These cases illustrate the immense practicality of finance theory in sorting out the issues
facing managers, assessing alternatives, and illuminating the effects of any particular
choice. A number of the cases in this book present practical ethical dilemmas or moral
hazards facing managers—indeed, this edition features a chapter, “Ethics in Finance,”
right at the beginning, where ethics belongs. Most of the cases (and teaching plans in the
associated instructor’s manual) call for action plans rather than mere analyses or
descriptions of a problem.
Contemporaneity and Diversity
All of the cases in this book are set in the year 2006 or after and 25 percent are set in
2015 or later. A substantial proportion (57 percent) of the cases and technical notes are
new, or significantly updated. The mix of cases reflects the global business
environment: 52 percent of the cases in this book are set outside the United States, or
have strong cross-border elements. Finally, the blend of cases continues to reflect the
growing role of women in managerial ranks: 31 percent of the cases present women as
key protagonists and decision-makers. Generally, these cases reflect the increasingly
diverse world of business participants.
Plan of the Book
The cases may be taught in many different combinations. The sequence indicated by the
table of contents corresponds to course designs used at Darden. Each cluster of cases in
the Table of Contents suggests a concept module, with a particular orientation.
1. Setting Some Themes. These cases introduce basic concepts of value creation,
assessment of performance against a capital market benchmark, and capital market
efficiency that reappear throughout a case course. The numerical analysis required of
the student is relatively light. The synthesis of case facts into an important framework
or perspective is the main challenge. The case, “Warren E. Buffett, 2015,” sets the
nearly universal theme of this volume: the need to think like an investor. The updated
case entitled, “The Battle for Value, 2016: FedEx Corp. vs. United Parcel Service,
Inc.” explores the definition of business success and its connections to themes of
financial management. “Larry Puglia and the T. Rowe Price Blue Chip Growth Fund,”
is an updated version of cases in prior editions that explores a basic question
about performance measurement: what is the right benchmark against which to
evaluate success? And finally, “Genzyme and Relational Investors: Science and
Business Collide?”, is a case that poses the dilemma of managing a public company
when the objectives of the shareholders are not always easily aligned with the long-term
objectives of the company and an activist investor is pressuring the company for change.
2. Financial Analysis and Forecasting. In this section, students are introduced to the
crucial skills of financial-statement analysis, break-even analysis, ratio analysis, and
financial statement forecasting. The section starts with a note, “Business Performance
Evaluation: Approaches for Thoughtful Forecasting”, that provides a helpful
introduction to financial statement analysis and student guidance on generating rational
financial forecasts. The case, “The Financial Detective, 2016”, asks students to match
financial ratios of companies with their underlying business and financial strategies.
“Whole Foods Market: The Deutsche Bank Report” provides students with the
opportunity to reassess the financial forecast of a research analyst in light of industry
dynamics. This case can also be used as an opportunity for students to hone firm
valuation skills with the evaluation of the analyst’s “buy, hold, or sell”
recommendation. “Horniman Horticulture” uses a financial model to build intuition for
the relevancy of corporate cash flow and the financial effects of firm growth. The
case, “Guna Fibres” asks the students to consider a variety of working capital
decisions, including the impact of seasonal demand upon financing needs. Other cases
address issues in the analysis of working-capital management, and credit analysis.
3. Estimating the Cost of Capital. This module begins with an article that is a survey of
“best practices” among leading firms for estimating the cost of capital during the low
interest rate regime following the 2007–08 financial crisis. The cases following the
survey article expose students to the skills in estimating the cost of capital for firms
and their business segments. The cases aim to exercise and solidify students’ mastery
of the capital asset pricing model, the dividend-growth model, and the weighted
average cost of capital formula. “Roche Holdings AG: Funding the Genentech
Acquisition” is a case that invites students to estimate the appropriate cost of debt for
a massive debt offering. The case provides an introduction to the concept of estimating
required returns. Two new cases ask the student to estimate the cost of capital for the
firm. “H.J. Heinz: Estimating the Cost of Capital in Uncertain Times” gives students
the opportunity to reassess the cost of capital following share price decline. “Royal
Mail plc: Cost of Capital” affords students the challenge of critiquing a cost of capital
estimate for the recently privatized British postal service. The case “Chestnut Foods”
requires students to consider arguments for and against risk-adjusted hurdle rates in a
multi-divisional firm, as well as techniques for estimating division-specific costs of capital.
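The module's two workhorse formulas, the capital asset pricing model and the weighted average cost of capital, can be sketched in a few lines. Every number below is a hypothetical placeholder chosen for illustration, not data from any case in this book:

```python
# Hedged sketch of the CAPM and WACC formulas exercised in this module.
# All inputs are illustrative placeholders, not figures from any case.

def capm_cost_of_equity(risk_free: float, beta: float, market_premium: float) -> float:
    """CAPM: r_e = r_f + beta * (E[r_m] - r_f)."""
    return risk_free + beta * market_premium

def wacc(equity: float, debt: float, r_equity: float,
         r_debt: float, tax_rate: float) -> float:
    """After-tax weighted average cost of capital, with a debt tax shield."""
    total = equity + debt
    return ((equity / total) * r_equity
            + (debt / total) * r_debt * (1 - tax_rate))

r_e = capm_cost_of_equity(risk_free=0.03, beta=1.2, market_premium=0.05)
print(round(r_e, 3))                                              # 0.09
print(round(wacc(600, 400, r_e, r_debt=0.05, tax_rate=0.30), 4))  # 0.068
```

The arithmetic is the easy part; the cases demand judgment about the inputs, such as the beta, the market premium, and the market values of debt and equity.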
4. Capital Budgeting and Resource Allocation. The focus of these cases is the
evaluation of individual investment opportunities as well as the assessment of
corporate capital budgets. The analytical challenges range from setting the entire
capital budget for a resource-constrained firm (“Target Corporation”) to basic
time value of money problems (“The Investment Detective”). Key issues in
this module include the estimation of Free Cash Flows, the comparison of various
investment criteria (NPV, IRR, payback, and equivalent annuities), the treatment of
issues in mutually exclusive investments, and capital budgeting under rationing. This
module features several new cases. The first, “Centennial Pharmaceutical
Corporation,” provides an introduction to discounted cash flow principles by asking
the student to compare values of two earnout plans. “Worldwide Paper Company” is
an updated case that serves as an introduction to estimating cash flows and calculating
the NPV of an investment opportunity. “Fonderia del Piemonte S.p.A.” is a new
addition to the book. Fonderia is an Italian company considering a capital investment
in machinery that replaces existing equipment. The student must assess the incremental
value to the company of investing in the new equipment. The Victoria Chemicals cases
give students cash flow estimates for a large capital investment opportunity (“Victoria
Chemicals plc (A): The Merseyside Project”) and ask the student to provide a careful
critique of the DCF analysis. The sequel case, “Victoria Chemicals plc (B):
Merseyside and Rotterdam Projects”, deepens the analysis by adding a competing and
mutually exclusive investment opportunity. “The Procter & Gamble Company:
Investment in Crest Whitestrips Advanced Seal” asks the student to value a new product
launch but then consider the financial implications of a variety of alternative launch
scenarios. The case, “Jacobs Division”, presents students with an opportunity to consider
the implications of strategic planning processes. “University of Virginia Health System: The
Long-Term Acute Care Hospital Project” is an analysis of an investment decision within a
not-for-profit environment. In addition to forecasting and valuing the project’s cash
flows, students must assess whether NPV and IRR are appropriate metrics for an
organization that does not have stockholders. “Star River Electronics Ltd.” has been
updated for this edition and presents the student with a range of issues that the new
CEO of the company must address, including the determination of the company’s cost
of capital and whether to invest in new machinery. We have used this case as an exam
for the first half of the finance principles course in the MBA program.
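The investment criteria named above, NPV and IRR, can be illustrated with a minimal sketch. The cash flows are invented for illustration and come from no case in this book:

```python
# Minimal sketch of the NPV and IRR criteria used throughout this module.
# The cash-flow series is a made-up illustration, not data from any case.

def npv(rate: float, cash_flows: list) -> float:
    """Discount a series of cash flows (year 0 first) at a constant rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list, lo: float = -0.99, hi: float = 10.0,
        tol: float = 1e-9) -> float:
    """Find the rate where NPV = 0 by bisection.

    Assumes a single sign change in the cash-flow series, so that
    exactly one root lies in [lo, hi].
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid          # root lies in the lower half
        else:
            lo = mid          # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]        # initial outlay, then three inflows
print(round(npv(0.10, flows), 2))     # -5.26: reject at a 10% hurdle rate
print(round(irr(flows), 3))           # ≈ 0.097, just below the hurdle
```

A project with an IRR just below the hurdle rate, as here, is exactly the kind of marginal decision the cases ask students to defend or reject.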
5. Management of the Firm’s Equity: Dividends and Repurchases. This module seeks
to develop practical principles about dividend policy and share repurchases by
drawing on concepts about dividend irrelevance, signaling, investor clienteles,
bonding, and agency costs. The first case, “Rockboro Machine Tools Corporation”, is
set in 2015 and concerns a company that is changing its business strategy and
considering a change in its dividend policy. The case serves as a comprehensive
introduction to corporate financial policy and themes in managing the right side of the
balance sheet. The second case, “EMI Group PLC”, is new to this edition and features
a struggling music producer in the U.K. confronted with whether it should continue to
pay a dividend despite the profit pressures it is facing. And finally, “AutoZone, Inc.”
is a leading auto parts retailer that has been repurchasing shares over many years. The
case serves as an excellent example of how share repurchases impact the
balance sheet and presents the student with the challenge of assessing the
impact upon the company’s stock price.
6. Management of the Corporate Capital Structure. The problem of setting capital
structure targets is introduced in this module. Prominent issues are the use and
creation of debt tax shields, the role of industry economics and technology, the
influence of corporate competitive strategy, the tradeoffs between debt policy,
dividend policy, and investment goals, and the avoidance of costs of distress.
Following a technical note, “An Introduction to Debt Policy and Value”, is a new
case, “M&M Pizza”, which explores the debt-equity choice within a perfect capital
market environment—a capital market with full information and no costs of trading.
This case provides an engaging environment for students to confront fundamental
financial policy theory. “California Pizza Kitchen” is a real-world analog to “M&M
Pizza” as it addresses the classic dilemma entailed in optimizing the use of debt tax
shields and providing financial flexibility for a national restaurant chain. The next four
cases are all new to the book. “Dominion Resources: Cove Point” presents the student
with the challenge of financing a large new project without creating substantial
disruption to the firm’s capital structure policies. The “Nokia OYJ: Financing the WP
Strategic Plan” presents a similar theme as management has taken a new strategic
direction and must make financing decisions that are cost effective, but also preserve
financial flexibility going forward. “Kelly Solar” concerns a start-up that needs new
funds for investment, but already has a significant amount of debt on the books that
needs to be renegotiated before new investors will find their investment to be
attractive. The case, “JC Penney Company”, presents a large retail chain that is facing
widespread performance challenges and needs to raise funds to offset the steadily
declining cash balance that will eventually create a liquidity crisis for the company.
The last case is “Horizon Lines, Inc.” The case is about a company facing default on a
debt covenant that will prompt the need for either Chapter 11 protection or a voluntary
financial restructuring.
7. Analysis of Financing Tactics: Leases, Options, and Foreign Currency. While the
preceding module is concerned with setting debt targets, this module addresses a
range of tactics a firm might use to pursue those targets, hedge risk, and exploit market
opportunities. Included are domestic and international debt offerings, leases, currency
hedges, warrants, and convertibles. With these cases, students will exercise
techniques in securities valuation, including the use of option-pricing theory. For
example, the first case, “Baker Adhesives,” explores the concept of exchange-rate risk
and the management of that risk with a forward-contract hedge and a money-market
hedge. “Vale SA,” new to this edition, features a Brazilian mining company that must
choose between debt financing denominated in U.S. dollars, euros, or British pounds.
The case “J&L Railroad” presents a commodity risk problem for which students are
asked to propose a specific hedging strategy using financial contracts offered on the
open market or from a commercial bank. “WNG Capital, LLC” is a new case about a
company that owns older aircraft that it leases to airlines as an alternative to
buying new aircraft. “MoGen, Inc.” presents the pricing challenges associated
with a convertible bond, as well as a complex hedging strategy to change the
conversion price of the convertible through the purchase of options and the
issuance of warrants.
8. Valuing the Enterprise: Acquisitions and Buyouts. This module begins with an
extensive introduction to firm valuation in the note “Methods of Valuation: Mergers
and Acquisitions.” The focus of the note includes valuation using DCF and multiples.
This edition features six new cases in this module and five cases from the previous
edition. “Medfield Pharmaceuticals” introduces students to firm valuation with
the reality of considering the difference between the value of firm assets in place and
the value of firm growth opportunities in the context of a takeover offer for a
pharmaceutical company. The case also includes important ethical considerations.
“American Greetings,” returning from the prior edition, provides a straightforward
valuation in the context of a repurchase decision and is designed as an introduction
to firm valuation. The new case “Ferrari: The 2015 Initial Public Offering” presents
students with the opportunity to value the legendary automotive company, and consider how
to determine appropriate company comparables for a firm that is both an auto
manufacturer and a luxury brand. The case “Rosetta Stone: Pricing the 2009 IPO”
provides an alternative IPO valuation case with additional focus on valuation with
market multiples. “Sun Microsystems” is also returning from the previous edition and
presents a traditional takeover valuation case with opportunities to evaluate merger
synergies and cost of capital implications. The next five cases are all new to this
edition. “Carter International” involves assessing the correct price to offer to acquire
another hotel company. “DuPont Corporation: Sale of Performance Coatings” asks the
student to assess the economics of divesting a business unit that is not meeting the
strategic objectives of the firm. “Sanofi-Aventis’s Tender Offer for Genzyme” is a
sequel to “Genzyme and Relational Investors: Science and Business Collide?” in
which Genzyme’s CEO must decide whether to accept a tender offer to acquire
Genzyme. “Delphi Corporation” features a large auto parts company that has been in
Chapter 11 bankruptcy for two years. The student must decide, in the role of a
nonsecured lender, whether to vote to approve the Plan of Reorganization to emerge
from Chapter 11.
And finally, the module features a merger negotiation exercise (“Flinder Valves and
Controls Inc.”) that provides an engaging venue for investigating the distribution of
value in a merger negotiation. The comprehensive nature of the cases in this module
makes them excellent vehicles for end-of-course classes, student term papers, and/or
presentations by teams of students. All cases and teaching notes have been edited to
sharpen the opportunities for student analysis.
This edition offers a number of cases that give insights about investing or financing
decisions in emerging markets. These include “Guna Fibres Ltd.,” “Star River
Electronics Ltd.,” and “Baker Adhesives.”
Summary of Changes for this Edition
The eighth edition represents a substantial and significant change from the seventh
edition. It offers 31 new or significantly updated cases and technical notes, which
represent 57 percent of the book. In the interest of presenting a fresh and contemporary
collection, older cases have been updated and/or replaced with new case
situations such that all the cases are set in 2006 or later and 25 percent are set
in 2015 or later. Several of the favorite “classic” cases from the first seven editions are
available online from McGraw-Hill, such that instructors who adopt this edition may
copy these older cases for classroom use. These materials can be found at
The case studies in this volume are supported by various resources that help make
student engagement a success:
A guide for the novice on case preparation, “Note to the Student: How to Study and
Discuss Cases,” appears in this volume.
All of the cases in this book are accompanied by a full teaching note that contains
suggested student study questions, a hypothetical teaching plan, and a prototypical
finished case analysis. In addition, the cases also have spreadsheet files that support
student and instructor preparation of the cases. These materials are available to all
instructors at the book’s website. Also at the book’s
website is an instructor’s resource manual that facilitates the use of these materials in
a standard course by providing resources on how to design a case course and how the
cases fit together.
Two of the cases provide student counterparty roles for two negotiation exercises. The
teaching materials present detailed discussions of case outcomes, one of which is
designed to be used as a second class period for the case. These supplemental materials
can significantly extend student learning and expand the opportunities for classroom
discussion.
A companion book by Robert Bruner titled Socrates’ Muse: Reflections on
Excellence in Case Discussion Leadership (Irwin/McGraw-Hill, 2002), is available
to instructors who adopt the book for classroom use. This book offers useful tips on
case method teaching. This title is available through Create, McGraw-Hill
Education’s on-demand and custom publishing system. Ask your learning technology
representative for more details.
This book would not be possible without the contributions of many other people.
Colleagues at Darden who have taught, co-authored, contributed to, or commented on
these cases are Brandt Allen, Yiorgos Allayannis, Sam Bodily, Karl-Adam Bonnier,
Susan Chaplinsky, John Colley, Bob Conroy, Mark Eaker, Rich Evans, Bob Fair, Paul
Farris, Jim Freeland, Sherwood Frey, Bob Harris, Jared Harris, Mark Haskins, Michael
Ho, Marc Lipson, Elena Loutskina, Pedro Matos, Matt McBrady, Charles Meiburg, Jud
Reis, William Sihler, and Robert Spekman. We are grateful for their collegiality and for
the support for our casewriting efforts from the Darden School Foundation, the Mayo
Center for Asset Management, the L. White Matthews Fund for Finance Casewriting, the
Batten Institute, Columbia Business School, INSEAD, the University of
Melbourne, and the University of Virginia’s McIntire School of Commerce.
Colleagues at other schools provided worthy insights and encouragement toward the
development of the eight editions of Case Studies in Finance. We are grateful to the
following persons (listed with the schools with which they were associated at the time
of our correspondence or work with them):
Michael Adler, Columbia
Raj Aggarwal, John Carroll
Turki Alshimmiri, Kuwait Univ.
Ed Altman, NYU
James Ang, Florida State
Paul Asquith, M.I.T.
Bob Barnett, North Carolina State
Geert Bekaert, Stanford
Michael Berry, James Madison
Randy Billingsley, VPI&SU
Gary Blemaster, Georgetown
Rick Boebel, Univ. Otago, New Zealand
Oyvind Bohren, BI, Norway
John Boquist, Indiana
Michael Brennan, UCLA
Duke Bristow, UCLA
Ed Burmeister, Duke
Kirt Butler, Michigan State
Don Chance, VPI&SU
Andrew Chen, Southern Methodist
Barbara J. Childs, Univ. of Texas at Austin
C. Roland Christensen, Harvard
Thomas E. Copeland, McKinsey
Jean Dermine, INSEAD
Michael Dooley, UVA Law
Barry Doyle, University of San Francisco
Bernard Dumas, INSEAD
Craig Dunbar, Western Ontario
Peter Eisemann, Georgia State
Javier Estrada, IESE
Ben Esty, Harvard
Thomas H. Eyssell, Missouri
Pablo Fernandez, IESE
Kenneth Ferris, Thunderbird
John Finnerty, Fordham
Joseph Finnerty, Illinois
Steve Foerster, Western Ontario
Günther Franke, Konstanz
Bill Fulmer, George Mason
Louis Gagnon, Queens
Dan Galai, Jerusalem
Jim Gentry, Illinois
Stuart Gilson, Harvard
Robert Glauber, Harvard
Mustafa Gultekin, North Carolina
Benton Gup, Alabama
Jim Haltiner, William & Mary
Rob Hansen, VPI&SU
Philippe Haspeslagh, INSEAD
Gabriel Hawawini, INSEAD
Pekka Hietala, INSEAD
Rocky Higgins, Washington
Pierre Hillion, INSEAD
Laurie Simon Hodrick, Columbia
John Hund, Texas
Daniel Indro, Kent State
Thomas Jackson, UVA Law
Pradeep Jalan, Regina
Michael Jensen, Harvard
Sreeni Kamma, Indiana
Steven Kaplan, Chicago
Andrew Karolyi, Western Ontario
James Kehr, Miami Univ. Ohio
Kathryn Kelm, Emporia State
Carl Kester, Harvard
Naveen Khanna, Michigan State
Herwig Langohr, INSEAD
Dan Laughhunn, Duke
Ken Lehn, Pittsburgh
Saul Levmore, UVA Law
Wilbur Lewellen, Purdue
Scott Linn, Oklahoma
Dennis Logue, Dartmouth
Paul Mahoney, UVA Law
Paul Malatesta, Washington
Wesley Marple, Northeastern
Felicia Marston, UVA (McIntire)
John Martin, Texas
Ronald Masulis, Vanderbilt
John McConnell, Purdue
Richard McEnally, North Carolina
Catherine McDonough, Babson
Wayne Mikkelson, Oregon
Michael Moffett, Thunderbird
Nancy Mohan, Dayton
Ed Moses, Rollins
Charles Moyer, Wake Forest
David W. Mullins, Jr., Harvard
James T. Murphy, Tulane
Chris Muscarella, Penn State
Robert Nachtmann, Pittsburgh
Tom C. Nelson, University of Colorado
Ben Nunnally, UNC-Charlotte
Robert Parrino, Texas (Austin)
Luis Pereiro, Universidad Torcuato di Tella
Pamela Peterson, Florida State
Larry Pettit, Virginia (McIntire)
Tom Piper, Harvard
Gordon Philips, Maryland
John Pringle, North Carolina
Ahmad Rahnema, IESE
Al Rappaport, Northwestern
Allen Rappaport, Northern Iowa
Raghu Rau, Purdue
David Ravenscraft, North Carolina
Henry B. Reiling, Harvard
Lee Remmers, INSEAD
Jay Ritter, Florida
Richard Ruback, Harvard
Jim Schallheim, Utah
Art Selander, Southern Methodist
Israel Shaked, Boston
Dennis Sheehan, Penn State
J.B. Silvers, Case Western
Betty Simkins, Oklahoma State
Luke Sparvero, Texas
Richard Stapleton, Lancaster
Laura Starks, Texas
Jerry Stevens, Richmond
John Strong, William & Mary
Marti Subrahmanyam, NYU
Anant Sundaram, Thunderbird
Rick Swasey, Northeastern
Bob Taggart, Boston College
Udin Tanuddin, Univ. Surabaya, Indonesia
Anjan Thakor, Indiana
Thomas Thibodeau, Southern Methodist
Clifford Thies, Shenandoah Univ.
James G. Tompkins, Kennesaw State
Walter Torous, UCLA
Max Torres, IESE
Nick Travlos, Boston College
Lenos Trigeorgis, Cyprus
George Tsetsekos, Drexel
Peter Tufano, Harvard
James Van Horne, Stanford
Nick Varaiya, San Diego State
Theo Vermaelen, INSEAD
Michael Vetsuypens, Southern Methodist
Claude Viallet, INSEAD
Ingo Walter, NYU
Sam Weaver, Lehigh
J.F. Weston, UCLA
Peter Williamson, Dartmouth
Brent Wilson, Brigham Young
Kent Womack, Dartmouth
Karen Wruck, Ohio State
Fred Yeager, St. Louis
Betty Yobaccio, Framingham State
Marc Zenner, North Carolina
Research Assistants working under our direction have helped gather data and
prepare drafts. Research assistants who contributed to various cases in this and
previous editions include Darren Berry, Chris Blankenship, Justin Brenner, Anna
Buchanan, Anne Campbell, Drew Chambers, Sean Carr, Jessica Chan, Jenny Craddock,
Lucas Doe, Jake Dubois, Brett Durick, David Eichler, Ali Erarac, Shachar Eyal, Rick
Green, Daniel Hake, Dennis Hall, Jerry Halpin, Peter Hennessy, Dot Kelly, Vladimir
Kolcin, Nili Mehta, Casey Opitz, Katarina Paddack, Suprajj Papireddy, Thien Pham,
Chad Rynbrandt, John Sherwood, Elizabeth Shumadine, Janelle Sirleaf, Jane Sommers-Kelly,
Don Stevenson, Carla Stiassni, Sanjay Vakharia, Larry Weatherford, and Steve
Wilus. We have supervised numerous others in the development of individual cases—
those worthy contributors are recognized in the first footnote of each case.
A busy professor soon learns the wisdom in the adage, “Many hands make work
light.” We are very grateful to the staff of the Darden School for its support in this
project. Excellent editorial assistance at Darden was provided by the staff of Darden
Business Publishing and the Darden Case Collection. We specifically thank Leslie
Mullin (Senior Editor) and Margaret Ebin, Lucinda Ewing, and Debbie O’Brien
(Editors). Ginny Fisher gave stalwart secretarial support. Valuable library research
support was given by Karen Marsh King and Susan Norrisey. The patience, care, and
dedication of these people are richly appreciated.
At McGraw-Hill, Chuck Synovec has served as Brand Manager for this book.
Melissa Leick was the project manager, and Jennifer Upton served as Product
Developer for this edition. Our thanks extend to those who helped us on prior editions
as well, including Mike Junior, who originally recruited Bob Bruner to do this project,
and Michele Janicek.
Of all the contributors, our wives, Barbara M. Bruner, Kathy N. Eades, and Mary
Ann H. Schill as well as our children have endured great sacrifices as the result of our
work on this book. As Milton said, “They also serve who only stand and wait.”
Development of this eighth edition would not have been possible without their fond support.
All these acknowledgments notwithstanding, responsibility for these materials is
ours. We welcome suggestions for their enhancement. Please let us know of your
experience with these cases, either through McGraw-Hill/Irwin, or at the
coordinates given below.
Robert F. Bruner
University Professor
Distinguished Professor of Business Administration
Dean Emeritus of the Darden School of Business
Darden Graduate School of Business
University of Virginia
Kenneth M. Eades
Professor of Business Administration
Darden Graduate School of Business
University of Virginia
Michael J. Schill
Professor of Business Administration
Darden Graduate School of Business
University of Virginia
Individual copies of all the Darden cases in this and previous editions may be obtained
promptly from McGraw-Hill/Irwin’s Create ( or from
Darden Business Publishing (telephone: 800-246-3367; https://store.darden.virginia.ed
u/). Proceeds from these case sales support case writing efforts. Please respect the
copyrights on these materials.
Note to the Student: How to Study and Discuss Cases
“Get a good idea and stay with it. Dog it and work at it until it’s done, and done right.”
—Walt Disney
You enroll in a “case method” course, pick up the book of case studies or the stack of
loose-leaf cases, and get ready for the first class meeting. If this is your first experience
with case discussions, the odds are that you are clueless and a little anxious about how
to prepare for this course. That’s fairly normal but something you should try to break
through quickly in order to gain the maximum from your studies. Quick breakthroughs
come from a combination of good attitude, good “infrastructure,” and good execution—
this note offers some tips.
Good Attitude
Students learn best that which they teach themselves. Passive and mindless learning is
ephemeral. Active and mindful learning simply sticks. The case method makes learning
sticky by placing you in situations that require invention of tools and concepts in your
own terms. The most successful case students share a set of characteristics that drive their learning:
1. Personal initiative, self-reliance. Case studies rarely suggest how to proceed.
Professors are more like guides on a long hike: they can’t carry you, but they can show
you the way. You must arrive at the destination under your own power. You must figure
out the case on your own. To teach yourself means that you must sort ideas out in ways
that make sense to you, personally. To teach yourself is to give yourself two gifts: the
idea you are trying to learn, and greater self-confidence in your own ability to master
the world.
2. Curiosity, a zest for exploration as an end in itself. Richard P. Feynman, who won
the Nobel Prize in Physics in 1965, was once asked whether his key discovery was
worth it. He replied, “. . . [the Nobel Prize is] a pain in the . . . I don’t like honors . . .
The prize is the pleasure of finding the thing out, the kick in the discovery, the
observation that other people use it [my work]—those are the real things, the honors
are unreal to me.”1
3. A willingness to take risks. Risk-taking is at the heart of all learning. Usually one
learns more from failures than successes. The banker, Walter Wriston, once said,
“Good judgment comes from experience. Experience comes from bad judgment.”
4. Patience and persistence. Case studies are messy, a realistic reflection of the fact
that managers don’t manage problems, they manage messes. Initially, reaching a
solution will seem to be the major challenge. But once you reach a solution, you may
discover other possible solutions, and face the choice among the best alternatives.
5. An orientation to community and discussion. Much of the power of the case method
derives from a willingness to talk with others about your ideas and/or your points of
confusion. This is one of the paradoxes of the case method: you must teach yourself,
but not in a vacuum. The poet, T.S. Eliot, said, “there is no life not lived in
community.” Talking seems like such an inefficient method of sorting through the case,
but if exploration is an end in itself then talking is the only way. Furthermore, talking
is an excellent means of testing your own mastery of ideas, of rooting out points of
confusion, and generally, of preparing you for professional life.
6. Trust in the process. The learnings from a case-method course are impressive. They
arrive cumulatively over time. In many cases, the learnings continue well after the
course has finished. Occasionally, these learnings hit you with the force of a tsunami.
But generally, the learnings creep in quietly, but powerfully, like the tide. After the
case course, you will look back and see that your thinking, mastery, and appreciation
have changed dramatically. The key point is that you should not measure the success of
your progress on the basis of any single case discussion. Trust that in the cumulative
work over many cases you will gain the mastery you seek.
Good Infrastructure
“Infrastructure” consists of all the resources that the case student can call upon. Some of
this is simply given to you by the professor: case studies, assignment questions,
supporting references to textbooks or articles, and computer data or models. But you can
go much farther to help yourself. Consider these steps:
1. Find a quiet place to study. Spend at least 90 minutes there for each case study.
Each case has subtleties to it, which you will miss unless you can concentrate. After
two or three visits, your quiet place will take on the attributes of a habit: you
will slip into a working attitude more easily. Be sure to spend enough time in
the quiet place to give yourself a chance to really engage the case.
2. Access a business dictionary. If you are new to business and finance, some of the
terms will seem foreign; if English is not your first language, many of the terms will
seem foreign if not bizarre. Get into the habit of looking up terms that you don’t know.
The benefit of this becomes cumulative. You can find good definitions online.
3. Skim the business news each day; read a substantive business magazine or blog
regularly; follow the markets. Reading a newspaper or magazine helps build a
context for the case study you are trying to solve at the moment, and helps you make
connections between the case study and current events. The terminology of business
and finance that you see in the publications helps reinforce your use of the dictionary,
and hastens your mastery of terms you will see in the cases. Your learning by reading
business periodicals is cumulative. Some students choose to follow a good business
news website on the Internet. These have the virtue of being inexpensive and efficient,
but they tend to screen too much. Having the printed publication in your hands, and
leafing through it, helps the process of discovery, which is the whole point of the exercise.
4. Learn the basics of spreadsheet modeling on a computer. Many case studies now
have supporting data available for analysis in spreadsheet files, such as Microsoft
Excel. Analyzing the data on a computer rather than by hand both speeds up your
work, and extends your reach.
5. Form a study group. The ideas in many cases are deep; the analysis can get complex.
You will learn more, and perform better in class participation by discussing the
cases together in a learning team before you come to class. Your team should devote
an average of an hour to each case. High-performance teams show a number of
common attributes:
a. Members commit to the success of the team.
b. The team plans ahead, leaving time for contingencies.
c. The team meets regularly.
d. Team members show up for meetings and are prepared to contribute.
e. There may or may not be a formal leader, but assignments are clear. Team members
meet their assigned obligations.
6. Get to know your professor. In the case method, students inevitably learn more from
one another than from the instructor. But the teacher is part of the learning
infrastructure too: a resource to be used wisely. Never troll for answers in advance of
a case discussion. Do your homework; use classmates and learning teams to clear up
most questions so that you can focus on the meatiest issues with the teacher. Be very
organized and focused about what you would like to discuss. Remember that teachers
like to learn too: if you reveal a new insight about a case or bring a clipping about a
related issue in current events, the professor and student both gain from their time
together. Ultimately, the best payoff to the professor is the “aha” in the student’s eyes
when he or she masters an idea.
Good Execution
Good attitude and infrastructure must be employed properly—and one needs good
execution. The extent to which a student learns depends on how the case study is
approached. What can one do to gain the maximum from the study of these cases?
1. Reading the case. The very first time you read any case, look for the forest, not the
trees. This requires that your first reading be quick. Do not begin taking notes on the
first round; instead, read the case like a magazine article. The first few paragraphs of
a well-constructed case usually say something about the problem—read those
carefully. Then quickly read the rest of the case, seeking mainly a sense of the scope of
the problems, and what information the case contains to help resolve them. Leaf
through the exhibits, looking for what information they hold, rather than for any
analytical insights. At the conclusion of the first pass, read any supporting articles or
notes that your instructor may have recommended.
2. Getting into the case situation. Develop your “awareness.” With the broader
perspective in mind, the second and more detailed reading will be more productive.
The reason is that as you now encounter details, your mind will be able to organize
them in some useful fashion rather than inventorying them randomly. Making linkages
among case details is necessary toward solving the case. At this point you can take the
notes that will set up your analysis.
The most successful students project themselves into the position of the decision-maker
because this perspective helps them link case details as well as develop a stand
on the case problem. Assignment questions may help you do this; but it is a good idea
to get into the habit of doing it yourself. Here are the kinds of questions you might try
to answer in preparing every case:
a. Who are the protagonists in the case? Who must take action on the problem? What do
they have at stake? What pressures are they under?
b. In what business is the company? What is the nature of its product? What is the
nature of demand for that product? What is the firm’s distinctive competence? With
whom does it compete? What is the structure of the industry? Is the firm
comparatively strong or weak? In what ways?
c. What are the goals of the firm? What is the firm’s strategy in pursuit of these goals?
(The goals and strategy might be explicitly stated, or they may be implicit in the way
the firm does business.) What are the firm’s apparent functional policies in
marketing (e.g., push-versus-pull strategy), production (e.g., labor
relations, use of new technology, distributed vs. centralized production),
and finance (e.g., the use of debt financing, payment of dividends)? Financial and
business strategies can be inferred from analysis of financial ratios and a sources
and uses of funds statement.
d. How well has the firm performed in pursuit of its goals? (The answer to this
question calls for simple analysis using financial ratios, such as the DuPont system,
compound growth rates, and measures of value creation.)
The larger point of this phase of your case preparation is to broaden your awareness
of issues. Perhaps the most successful investor in history, Warren Buffett, said, “Any
player unaware of the fool in the market, probably is the fool in the market.”4
Awareness is an important attribute of successful managers.
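Question (d) above mentions the DuPont system and compound growth rates. As a minimal, purely illustrative sketch of that arithmetic (the function names and every figure below are invented for demonstration and come from no case in this book), the calculations look like this in Python:

```python
# Illustrative DuPont decomposition of return on equity (ROE) and a
# compound annual growth rate (CAGR). All figures are hypothetical.

def dupont_roe(net_income, sales, total_assets, equity):
    """Return (profit margin, asset turnover, leverage, ROE)."""
    margin = net_income / sales        # profitability
    turnover = sales / total_assets    # asset efficiency
    leverage = total_assets / equity   # financial leverage
    return margin, turnover, leverage, margin * turnover * leverage

def cagr(beginning, ending, years):
    """Compound annual growth rate between two values."""
    return (ending / beginning) ** (1 / years) - 1

margin, turnover, leverage, roe = dupont_roe(
    net_income=120.0, sales=1_500.0, total_assets=1_000.0, equity=600.0)
print(f"ROE = {margin:.1%} x {turnover:.2f} x {leverage:.2f} = {roe:.1%}")
print(f"Five-year sales CAGR: {cagr(1_000.0, 1_500.0, 5):.1%}")
```

The decomposition makes the question "how well has the firm performed?" concrete: is a change in ROE coming from profitability, efficiency, or leverage?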
3. Defining the problem. A common trap for many executives is to assume that the issue
at hand is the real problem most worthy of their time, rather than a symptom of some
larger problem that really deserves their time. For instance, a lender is often asked to
advance funds to help tide a firm over a cash shortfall. Careful study may reveal that
the key problem is not a cash shortfall, but rather product obsolescence, unexpected
competition, or careless cost management. Even in cases where the decision is fairly
narrowly defined (such as in a capital expenditure choice), the “problem” generally
turns out to be the believability of certain key assumptions. Students who are new to
the case method tend to focus narrowly in defining problems and often overlook the
influence that the larger setting has on the problem. In doing this, the student
develops narrow specialist habits, never achieving the general manager’s perspective. It
is useful and important for you to define the problem yourself, and in the process,
validate the problem as suggested by the protagonist in the case.
4. Analysis: run the numbers and go to the heart of the matter. Virtually all finance
cases require numerical analysis. This is good because figure-work lends rigor and
structure to your thinking. But some cases, reflecting reality, invite you to explore
blind alleys. If you are new to finance, even these explorations will help you learn.
The best case students develop an instinct for where to devote their analysis. Economy
of effort is desirable. If you have invested wisely in problem definition, economical
analysis tends to follow. For instance, a student might assume that a particular case is
meant to exercise financial forecasting skills and will spend two or more hours
preparing a detailed forecast, instead of preparing a simpler forecast in one hour and
conducting a sensitivity analysis based on key assumptions in the next hour. An
executive rarely thinks of a situation as having to do with a forecasting method or
discounting or any other technique, but rather thinks of it as a problem of
judgment, deciding on which people or concepts or environmental conditions to bet.
The best case analyses get down to the key bets on which the executive is wagering
the prosperity of the firm, and his or her career. Get to the business issues quickly, and
avoid lengthy churning through relatively unimportant calculations.
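The economical approach just described, a simple forecast plus a sensitivity analysis over the key assumptions, can be sketched in a few lines. This is a hypothetical illustration only; the sales, growth, and margin figures are invented and are not drawn from any case in this book:

```python
# A deliberately simple forecast with a sensitivity sweep over the two
# "key bets": revenue growth and operating margin. Figures are hypothetical.

def forecast_profit(sales, growth, margin, years=5):
    """Project sales forward at a constant growth rate and return
    operating profit in the final year."""
    return sales * (1 + growth) ** years * margin

base_sales = 100.0  # hypothetical current sales, in millions
for growth in (0.02, 0.05, 0.08):        # key bet 1: revenue growth
    for margin in (0.08, 0.10, 0.12):    # key bet 2: operating margin
        profit = forecast_profit(base_sales, growth, margin)
        print(f"growth {growth:.0%}, margin {margin:.0%}: "
              f"year-5 profit = {profit:.1f}")
```

A one-hour sweep like this usually reveals which assumption the decision actually hinges on, which is more valuable than another hour spent refining a single detailed forecast.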
5. Prepare to participate: take a stand. To develop analytical insights without making
recommendations is useless to executives, and drains the case study experience of
some of its learning power. A stand means having a point of view about the problem, a
recommendation, and an analysis to back up both of them. The lessons most worth
learning all come from taking a stand. From that truth flows the educative force of the
case method. In the typical case, the student is projected into the position of an
executive who must do something in response to a problem. It is this choice of what to
do that constitutes the executive’s “stand.” Over the course of a career, an executive
who takes stands gains wisdom. If the stand provides an effective resolution of the
problem, so much the better for all concerned. If it does not, however, the wise
executive analyzes the reasons for the failure and may learn even more than from a
success. As Theodore Roosevelt wrote:
The credit belongs to the man who is actually in the arena—whose face is
marred by dust and sweat and blood . . . who knows the great enthusiasms, the
great devotions—and spends himself in a worthy cause—who at best if he wins
knows the thrills of high achievement—and if he fails, at least fails while
daring greatly so that his place shall never be with those cold and timid souls
who know neither victory nor defeat.
6. In class: participate actively in support of your conclusions, but be open to new
insights. Of course, one can have a stand without the world being any wiser. To take a
stand in case discussions means to participate actively in the discussion and to
advocate your stand until new facts or analyses emerge to warrant a change. Learning
by the case method is not a spectator sport. A classic error many students make is to
bring into the case method classroom the habits of the lecture hall (i.e., passively
absorbing what other people say). These habits fail miserably in the case method
classroom because they only guarantee that one absorbs the truths and fallacies uttered
by others. The purpose of case study is to develop and exercise one’s own skills and
judgment. This takes practice and participation, just as in a sport. Here are two good
general suggestions: (1) defer significant note-taking until after class and (2) strive to
contribute to every case discussion.
7. Immediately after class: jot down notes, corrections, and questions. Don’t
overinvest in taking notes during class—that just cannibalizes “air time” in which you
could be learning through discussing the case. But immediately after, collect
your learnings and questions in notes that will capture your thinking. Of
course, ask a fellow student or your teacher questions that will help clarify issues that
still puzzle you.
8. Once a week, flip through your notes. Make a list of your questions, and pursue
answers. Take an hour each weekend to review your notes from class discussions
during the past week. This will help build your grasp of the flow of the course.
Studying a subject by the case method is like building a large picture with small
mosaic tiles. It helps to step back to see the big picture. But the main objective should
be to make an inventory of anything you are unclear about: terms, concepts, and
calculations. Work your way through this inventory with classmates, learning teams,
and ultimately the instructor. This kind of review and follow-up builds your self-confidence
and prepares you to participate more effectively in future case discussions.
Conclusion: Focus on Process, and Results Will Follow
View the case method experience as a series of opportunities to test your mastery of
techniques and your business judgment. If you seek a list of axioms to be etched in stone,
you are bound to disappoint yourself. As in real life, there are virtually no “right”
answers to these cases in the sense that a scientific or engineering problem has an exact
solution. Jeff Milman has said, “The answers worth getting are never found in the back
of the book.” What matters is that you obtain a way of thinking about business situations
that you can carry from one job (or career) to the next. In the case method it is largely
true that how you learn is what you learn.8
Ethics in Finance
“The first thing is character, before money or anything else.”
—J. P. Morgan (in testimony before the U.S. Congress)
“The professional concerns himself with doing the right thing rather than making money, knowing that the
profit takes care of itself if the other things are attended to.”
—Edwin LeFevre, Reminiscences of a Stock Operator
Integrity is paramount for a successful career in finance and business, as practitioners
remind us. One learns, rather than inherits, integrity. And the lessons are everywhere,
even in case studies about finance. To some people, the world of finance is purely
mechanical, devoid of ethical considerations. The reality is that ethical issues are
pervasive in finance. Still, disbelief that ethics matter in finance can take many forms.
“It’s not my job,” says one person, thinking that a concern for ethics belongs to a CEO,
an ombudsperson, or a lawyer. But if you passively let someone else do your thinking,
you expose yourself to complicity in the unethical decisions of others. Even worse is
the possibility that if everyone assumes that someone else owns the job of ethical
practice, then no one owns it, and the enterprise has no moral compass at all.
Another person says, “When in Rome, do as the Romans do. It’s a dog-eat-dog world:
we have to play the game their way if we mean to do business there.” Under this view,
everybody is assumed to act ethically relative to their local environment, so that it is
inappropriate to challenge unethical behavior. This is moral relativism. The problem
with this view is that it presupposes that you have no identity, that you are defined like
a chameleon by the environment around you. Relativism is the enemy of personal
identity and character. To be rooted in any cultural system, you must have a view.
Prepare to take a stand.

A third person says, “It’s too complicated. Civilization has been arguing about ethics
for 3,000 years. You expect me to master it in my lifetime?” The response must be that
we use complicated systems dozens of times each day without full mastery of their
details. Perhaps the alternative would be to live in a cave, a simpler life but much less
rewarding. Moreover, as courts have been telling the business world for centuries,
ignorance of the law is no defense: if you want to succeed in the field of finance, you
must grasp the norms of ethical behavior.

There is no escaping the fact that ethical reasoning is vital to the practice of business
and finance. Tools and concepts of ethical reasoning belong in the financial toolkit
alongside other valuable instruments of financial practice.

Ethics and economics were once tightly interwoven. The patriarch of economics,
Adam Smith, was actually a scholar of moral philosophy. Though the two fields may
have diverged in the last century, they remain strong complements. Morality concerns
norms and teachings. Ethics concerns the process of making morally good decisions,
or as Andrew Wicks wrote, “Ethics has to do with pursuing—and achieving—laudable
ends.” The Oxford English Dictionary defines “moral” as “Of knowledge, opinions,
judgments, etc.; Relating to the nature and application of the distinction between right
and wrong.” “Ethics,” on the other hand, is defined as “The science of morals.” To see
how decision-making processes in finance have ethical implications, consider the
following case studies.
1. Fraud. For several decades, Bernard Madoff operated a money management firm that
reported annual returns of about 10% in good years and bad, performance that was
astonishing for its regularity. Madoff claimed that he was able to earn such reliable
returns from investing in the shares of mature companies, along with a “collar” (put
and call options that limited the risk). He marketed his services to investors on the
strength of his reported performance, his years in the investment business, and his
ethnic and social affinity in prominent clubs and communities. But in the Panic of
2008, worried investors sought to withdraw their investments from the Madoff firm.
On December 10, 2008, Bernard Madoff admitted to F.B.I. agents that his investment
fund was “one big lie,” exploiting some 13,500 individual investors and charities.
Investigation and court proceedings eventually revealed that Madoff had operated a
massive “Ponzi scheme,” in which the investments by new investors were used to pay
high returns to existing investors. The collapse of his firm cost investors some $50
billion in losses. Madoff was convicted of 11 federal crimes and received a sentence
of 150 years in prison. A number of other individuals, charities, and firms were
investigated, indicted, convicted, and/or fined on related charges. Several analysts
over the years warned that Madoff’s performance was unrealistic and probably
fraudulent; but the SEC took no action. Afterward, the agency issued a 477-page
report of an internal investigation that resulted in disciplinary actions against eight
SEC employees, but no terminations. This was the largest Ponzi scheme in history and
generated an enormous range of accusations of negligence or complicity in the fraud.
2. Negligence. In 2011, the Financial Crisis Inquiry Commission delivered a report on
the Panic of 2008 that found “a systemic breakdown in accountability and ethics. The
integrity of our financial markets and the public’s trust in those markets are essential to
the economic well-being of our nation. The soundness and the sustained prosperity of
the financial system and our economy rely on the notions of fair dealing,
responsibility, and transparency. In our economy, we expect businesses and
individuals to pursue profits, at the same time that they produce products and services
of quality and conduct themselves well. . . . Unfortunately—as has been the
case in past speculative booms and busts—we witnessed an erosion of standards of
responsibility and ethics that exacerbated the financial crisis. This was not universal,
but these breaches stretched from the ground level to the corporate suites. They
resulted not only in significant financial consequences but also in damage to the trust
of investors, businesses, and the public in the financial system. . . . This report
catalogues the corrosion of mortgage-lending standards and the securitization pipeline
that transported toxic mortgages from neighborhoods across America to investors
around the globe. Many mortgage lenders set the bar so low that lenders simply took
eager borrowers’ qualifications on faith, often with a willful disregard for a
borrower’s ability to pay. . . . These trends were not secret. As irresponsible lending,
including predatory and fraudulent practices, became more prevalent, the Federal
Reserve and other regulators and authorities heard warnings from many quarters. Yet
the Federal Reserve neglected its mission “to ensure the safety and soundness of the
nation’s banking and financial system and to protect the credit rights of consumers.” It
failed to build the retaining wall before it was too late. And the Office of the
Comptroller of the Currency and the Office of Thrift Supervision, caught up in turf
wars, preempted state regulators from reining in abuses. . . . In our inquiry, we found
dramatic breakdowns of corporate governance, profound lapses in regulatory
oversight, and near fatal flaws in our financial system.”ix
3. Incentives that distort values. From August 2015 to May 2016, the share price of
Valeant Pharmaceuticals fell about 90%. This destruction of share value reflected
accumulated doubts about the adequacy of the firm’s disclosure of accounting results
and material information pertaining to its strategy and risks. The company had grown
rapidly by acquisition and by sharply raising the prices of product lines that it had
purchased. Congress began an investigation into the firm’s practices and complained
that Valeant was withholding information. Short-sellers alleged that Valeant used a
related firm, Philidor, to book fake sales—Valeant denied this, but then ceased its ties
to Philidor and shut down its operations. The board of directors sacked its CEO. And
a prominent backer of Valeant, the activist investor Bill Ackman, “deeply and
profoundly apologize[d]” to investors for his support of Valeant’s management.
Analysts cited liberal option-based executive compensation as one stimulus for
Valeant’s aggressive practices.

Laws and regulations often provide a “bright red line” to constrain bad behavior. But
ethics demand an even higher standard. Bernard Madoff broke the law against
fraudulent behavior. His family, friends, and associates seem to have looked the other
way over the years, rather than urging him not to proceed. Leading up to the Panic of
2008, many watchdogs grew lax and neglected their duties to the wider public. And in
the case of Valeant, managers and directors of the company fueled a toxic culture of
growth at any cost to others.

Why One Should Care about Ethics in Finance

Managing in ethical ways is not merely about avoiding bad outcomes. There are at
least five positive arguments for bringing ethics to bear on financial decision-making.

Sustainability. Unethical practices are not a foundation for an enduring, sustainable
enterprise. This first consideration focuses on the legacy one creates through one’s
financial transactions. What legacy do you want to leave? To incorporate ethics into
our finance mindset is to think about the kind of world that we would like to live in,
and that our children will inherit.

One might object that in a totally anarchic world, unethical behavior might be the
only path to survival. But this misses the point: we don’t live in such a world. Instead,
our world of norms and laws ensures a corrective process against unethical behavior.
Ethical behavior builds trust. Trust rewards. The branding of products seeks to create
a bond between producer and consumer: a signal of purity, performance, or other
attributes of quality. This bond is built by trustworthy behavior. As markets reveal,
successfully branded products command a premium price. Bonds of trust tend to pay. If
the field of finance were purely a world of one-off transactions, it would seem ripe for
opportunistic behavior. But in the case of repeated entry into financial markets and
transactions, for instance by active buyers, intermediaries, and advisors, reputation can
count for a great deal in shaping the expectations of counterparties. This implicit bond,
trust, or reputation can translate into more effective and economically attractive
financial transactions and policies.
Surely, ethical behavior should be an end in itself. If you are behaving ethically only
to get rich, then you are hardly committed to that behavior. Some might even see this as
an imperfect means by which justice expresses itself.
Ethical behavior builds teams and leadership, which underpin process excellence.
Standards of global best practice emphasize that good business processes drive good
outcomes. Stronger teams and leaders result in more agile and creative responses to
problems. Ethical behavior contributes to the strength of teams and leadership by
aligning employees around shared values, and building confidence and loyalty.
An objection to this argument is that in some settings promoting ethical behavior is
no guarantee of team-building. Indeed, teams might blow apart over disagreement about
what is ethical or what action is appropriate to take. But typically, this is not the fault of
ethics, rather of team processes for handling disagreements.
Ethics sets a higher standard than laws and regulations. To a large extent, the law is
a crude instrument: it tends to trail rather than anticipate behavior; it contains gaps that
invite exploitation by the aggressive businessperson; justice may be neither swift nor
proportional to the crime; and, as Andrew Wicks said, it “puts you in an
adversarial posture with respect to others which may be counterproductive to other
objectives in facing a crisis.” To use only the law as a basis for ethical thinking is to
settle for the lowest common denominator of social norms. As former Chairman of the
Securities and Exchange Commission, Richard Breeden, said, “It is not an adequate
ethical standard to want to get through the day without being indicted.”
Some might object to this line of thinking by claiming that in a pluralistic society, the
law is the only baseline of norms on which society can agree. Therefore, isn’t the law a
“good enough” guide to ethical behavior? Lynn Paine argued that this leads to a
“compliance” mentality and that ethics takes one farther. She wrote,
“Attention to law, as an important source of managers’ rights and
responsibilities, is integral to, but not a substitute for, the ethical point of view—a point
of view that is attentive to rights, responsibilities, relationships, opportunities to
improve and enhance human well-being, and virtue and moral excellence.”
Reputation and conscience. Motivating ethical behavior only by appealing to benefits
and avoiding costs is inappropriate. By some estimates, the average annual income for a
lifetime of crime (even counting years spent in prison) is large—it seems that crime
does pay. If income were all that mattered, most of us would switch into this lucrative
field. The business world features enough cheats and scoundrels to offer any
professional the opportunity to break promises, or worse, for money. Ethical
professionals decline these opportunities for reasons having to do with the kind of
people they want to be. Amar Bhide and Howard Stevenson wrote, “The
businesspeople we interviewed set great store on the regard of their family, friends, and
the community at large. They valued their reputations, not for some nebulous financial
gain but because they took pride in their good names. Even more important, since
outsiders cannot easily judge trustworthiness, businesspeople seem guided by their
inner voices, by their consciences. . . . We keep promises because it is right to do so,
not because it is good business.”
For Whose Interests Are You Working?
Generally the financial executive or deal designer is an agent acting on behalf of others.
For whom are you the agent? Two classic schools of thought emerge.
Stockholders. Some national legal frameworks require directors and managers to
operate a company in the interests of its shareholders. The shareholder focus lends a
clear objective: do what creates shareholders’ wealth. This would seem to limit
charitable giving, “living wage” programs, voluntary reduction of pollution, and
enlargement of pension benefits for retirees—all of these loosely gather under the
umbrella of the “social responsibility” movement in business. Milton Friedman (1962),
perhaps the most prominent exponent of the stockholder school of thought, argued that
the objective of business is to return value to its owners and that to divert the
objective to other ends is to expropriate shareholder value and threaten the survival of
the enterprise. Also, the stockholder view would argue that if all companies deviated,
the price system would cease to function well as a carrier of information about the
allocation of resources in the economy. The stockholder view is perhaps dominant in
the U.S., U.K., and other countries in the Anglo-Saxon sphere.
Stakeholders. The alternative view admits that stockholders are an important
constituency of the firm, but holds that other groups such as employees, customers,
suppliers, and the community also have a stake in the activities and success of the
firm. Edward Freeman (1984) argued that the firm should be managed in the interest
of this broader spectrum of constituents. The manager would necessarily be obligated
to account for the interests and concerns of the various constituent groups in arriving
at business decisions—the aim would be to satisfy them all, or at least the most
concerned stakeholders on each issue. The complexity of this kind of decision-making
can be daunting and slow. In addition, it is not always clear which stakeholder
interests are relevant in making specific decisions. Such a definition seems to depend
highly on the specific context, which would seem to challenge the ability to achieve
equitable treatment of different stakeholder groups and across time. But the important
contribution of this view is to suggest a relational view of the firm and to stimulate the
manager to consider the diversity of those relationships.

Adding complexity to the question of whose interests one serves is the fact that often
one has many allegiances—not only to the firm or client, but also to one’s community,
family, and so on. The obligations that one has as an employee or professional are
only a subset of the obligations one has on the whole.

What Is “Good”? Consequences, Duties, Virtues

One confronts ethical issues when one must choose among alternatives on the basis of
right versus wrong. The ethical choices may be stark where one alternative is truly
right and the other truly wrong. But in professional life the alternatives typically differ
more subtly, as in choosing which alternative is more right or less wrong. Ernest
Hemingway said that what is moral is what one feels good after and what is immoral
is what one feels bad after. Since feelings about an action could vary tremendously
from one person to the next, this simplistic test would seem to admit moral relativism
as the only course, an ethical “I’m OK, You’re OK” approach. Fortunately, 3,000
years of moral reasoning lend frameworks for greater definition of what is “right” and
“wrong.”

“Right” and “wrong” defined by consequences. An easy point of departure is to focus
on outcomes: an action might be weighed in terms of its utility for society. Who is
hurt or helped must be taken into consideration. Utility can be assessed in terms of the
pleasure or pain it produces for people, and people choose to maximize utility.
Therefore, right action is that which produces the greatest good for the greatest
number of people.
“Utilitarianism” has proved to be controversial. Some critics feared that this
approach might endorse gross violations of norms that society holds dear including the
right to privacy, the sanctity of contracts, and property rights, when weighed in the
balance of consequences for all. And the calculation of utility might be subject to
special circumstances or open to interpretation, making the assessment rather more
situation-specific than some philosophers could accept.
Utilitarianism was the foundation for modern neoclassical economics. Utility has
proved to be difficult to measure rigorously and remains a largely theoretical idea. Yet
utility-based theories are at the core of welfare economics and underpin analyses of
phenomena varying as widely as government policies, consumer preferences, and
investor behavior.
“Right” and “wrong” defined by duty or intentions. Immoral actions are ultimately
self-defeating. A practice of writing bad checks, for instance, if practiced universally,
would result in a world without check-writing and probably very little credit.
Therefore you should act on rules which you would require to be applied universally.
You should treat a person as an end, never merely as a means. It is vital to ask whether an
action would show respect for other persons and whether that action was
something a rational person would do—“If everyone behaved this way, what kind of
world would we have?”
Critics of this perspective argue that its universal view is too demanding, indeed,
impossible for a businessperson to observe. For instance, the profit motive focuses on
the manager’s duty to just one company. But Norman Bowie responds, “Perhaps
focusing on issues other than profits . . . will actually enhance the bottom line. . . .
Perhaps we should view profits as a consequence of good business practices rather
than as the goal of business.”
“Right” and “wrong” defined by virtues. Finally, a third tradition in philosophy
argues that the debate over “values” is misplaced: the focus should be on virtues and
the qualities of the practitioner. The attention to consequences or duty is fundamentally a
focus on compliance. Instead, one should consider whether an action is consistent with
being a virtuous person. This view argues that personal happiness flows from being
virtuous, and not merely from comfort (utility) or observance (duty). It acknowledges
that vices are corrupting. And it focuses on personal pride: “If I take this action would I
be proud of what I see in the mirror? If it were reported tomorrow in the newspaper,
would I be proud of myself?” Warren Buffett, CEO of Berkshire Hathaway and one of
the most successful investors in modern history, issues a letter to each of his operating
managers each year emphasizing the importance of personal integrity. He has said that
Berkshire can afford financial losses, but not losses in reputation. He wrote, “Make sure
everything you do can be reported on the front page of your local newspaper written by
an unfriendly, but intelligent reporter.”
Critics of virtue-based ethics raise two objections. First, a virtue to one person may
be a vice to another. Solomon (1999) points out that Confucius and Nietzsche, two
virtue ethicists, held radically different visions of virtue: Confucius extolled virtues
such as respect and piety, whereas Nietzsche extolled risk-taking, war-making, and
ingenuity. Thus, virtue ethics may be context-specific. Second, virtues can change over
time. What may have been regarded as “gentlemanly” behavior (i.e., formal politeness)
in the 19th century might have been seen by feminists in the late 20th century as
insincere and manipulative.
Discrete definition of “right” and “wrong” remains a subject of ongoing discourse.
But the practical person can abstract from these and other perspectives useful
guidelines toward ethical work:

How will my action affect others? What are the consequences?
What are my motives and my duty here? How does this decision affect them?
Does this action serve the best that I can be?

What Can You Do to Promote Ethical Behavior in Your Organization?

An important contributor to unethical business practices is the existence of a work
environment that promotes such behavior. Leaders in corporate workplaces need to be
proactive in shaping a high-performance culture that sets high ethical expectations.
The leader can take a number of steps to shape an ethical culture.

Adopt a code of ethics. One dimension of ethical behavior is to acknowledge some
code by which one intends to live. Corporations, too, can adopt codes of conduct that
shape ethical expectations. Firms recognize the “problem of the commons” inherent in
unethical behavior by one or a few employees. In 1909, the Supreme Court decided
that a corporation could be held liable for the actions of its employees. Since then,
companies have sought to set expectations for employee behavior, including codes of
ethics. Exhibits 1 and 2 give excerpts of the codes of J.P. Morgan Chase and General
Electric Company—they are clear statements that define right behavior. Corporate
codes are viewed by some critics as cynical efforts that merely respond to the
executive liability that might arise from white-collar and other economic crimes.
Companies and their executives may be held liable for employee behavior, even if the
employee acted contrary to instructions. Mere observance of guidelines in order to
reduce liability is a legalistic approach to ethical behavior. Instead, Lynn Paine (1994)
urged firms to adopt an “integrity strategy” that uses ethics as the driving force within
a corporation. Deeply held values would become the foundation for decision making
across the firm and would yield a frame of reference that would integrate functions
and businesses. By this view, ethics defines what a firm stands for.
In addition, an industry or professional group can organize a code of ethics. One
example relevant for finance professionals is the Code of Ethics of the CFA Institute,
the group that confers the Chartered Financial Analyst (CFA) designation on
professional securities analysts and portfolio managers. An excerpt of the CFA
Institute Code of Ethics is given in Exhibit 3.

EXHIBIT 1 | J.P. Morgan Chase & Co. Excerpts from Code of Conduct and Code of Ethics for
Finance Professionals

JPMC Finance Officers and Finance Professionals must act honestly, promote ethical conduct and comply with the law
. . . They are specifically required to:
Carry out their responsibilities honestly, in good faith and with integrity, due care and diligence . . .
Comply with applicable government laws, rules and regulations . . .
Never . . . coerce, manipulate, mislead or fraudulently influence the firm’s independent auditors . . .
Protect the confidentiality of non-public information relating to JPMC and its clients . . .
Address actual or apparent conflicts of interest . . .
Promptly report . . . any known or suspected violation . . .
. . .
JPMC strictly prohibits intimidation or retaliation against anyone who makes a good faith report about a known or
suspected violation of this Policy, or of any law or regulation.

EXHIBIT 2 | Excerpts from “The Spirit and the Letter”: General Electric’s “Code of Conduct”

Statement of integrity
We have been ranked first for integrity and governance. But none of that matters if each of us does not make the right
decisions and take the right actions. . . . Do not allow anything—not “making the numbers,” competitive instincts or
even a direct order from a superior—to compromise your commitment to integrity. . . . Leaders must address
employees’ concerns about appropriate conduct promptly and with care and respect.
There is no conflict between excellent financial performance and high standards of governance and compliance—in
fact, the two are mutually reinforcing.
. . .
Obey the applicable laws and regulations . . .
Be honest, fair and trustworthy . . .
Avoid all conflicts of interest . . .
Foster an atmosphere [of] fair employment practices . . .
Strive to create a safe workplace and to protect the environment.
. . . Sustain a culture where ethical conduct is recognized, valued and exemplified by all employees.

Source: “Integrity: The Spirit and Letter of Our Commitment,” General Electric Company, June 2005, page 3. A longer
version of this resource is also available online.
Talk about ethics within your team and firm. Many firms seek to reinforce a culture of
integrity with a program of seminars and training in ethical reasoning. A leader can
stimulate reflection through informal discussion of ethical developments (e.g.,
indictments, convictions, civil lawsuits) in the industry or profession or of ethical issues
that the team may be facing. This kind of discussion (without preaching) signals that it is
on the leader’s mind and is a legitimate focus of discussion. One executive regularly
raises issues such as these informally over lunch and morning coffee. Leaders believe
ethical matters are important enough to be the focus of team discussions.
EXHIBIT 3 | CFA Institute Code of Ethics, 2014

High ethical standards are critical to maintaining the public’s trust in financial markets and in the investment
profession. . . .
Act with integrity, competence, diligence, respect . . .
Place . . . the interests of clients above their own personal interests.
Use reasonable care and exercise independent professional judgment . . .
Practice and encourage others. . . .
Promote the integrity of . . . capital markets.
Maintain and improve their professional competence . . .

Source: CFA Institute, 2014. “Code of Ethics and Standards of Professional Conduct” (Charlottesville, Virginia).

Find and reflect on your dilemmas. The challenge for many finance practitioners is
that ethical dilemmas are not readily given to structured analysis in the same way one
values a firm or balances the books. Nevertheless, one can harness the questions
raised in the field of ethics to lend some rigor to one’s reflections. Laura Nash (1981)
abstracted a list of twelve questions on which the thoughtful practitioner might reflect
in grappling with an ethical dilemma:

1. Have I defined the problem correctly and accurately?
2. If I stood on the other side of the problem, how would I define it?
3. What are the origins of this dilemma?
4. To whom and what am I loyal, as a person and as a member of a firm?
5. What is my intention in making this decision?
6. How do the likely results compare with my intention?
7. Can my decision injure anyone? How?
8. Can I engage the affected parties in my decision before I decide or take action?
9. Am I confident that my decision will be as valid over a long period of time as it
seems at this moment?
10. If my boss, the CEO, the directors, my family, or community learned about this
decision, would I have misgivings?
11. What signals (or symbols) might my decision convey, if my decision were
understood correctly? If misunderstood?
12. Are there exceptions to my position, “special circumstances” under which I might
make an alternative decision?

Act on your reflections. This may be the toughest step of all. The field of ethics can
lend structure to one’s thinking but has less to say about the action to be taken.
Confronting a problem of ethics within a team or organization, one can consider a
hierarchy of responses, from questioning and coaching to “whistle-blowing” (either to
an internal ombudsperson or, if necessary, to an outside source), and possibly to exit
from the organization.
Analysis of ethical issues in finance is vital. The cases of Bernard Madoff and other
major business scandals show that ethical issues pervade the financial environment.
Ethics is one of the pillars on which stands success in finance—it builds sustainable
enterprise, trust, organizational strength, and personal satisfaction. Therefore, the
financial decision maker must learn to identify, analyze, and act on ethical issues that
may arise. Consequences, duties, and virtues stand out as three important benchmarks
for ethical analysis. Nevertheless, the results of such analysis are rarely clear-cut. But
real business leaders will take the time to sort through the ambiguities and do “the right
thing” in the words of Edwin LeFevre. These and other ethical themes will appear
throughout finance case studies and one’s career.
References and Recommended Readings
Achampong, F., and Zemedkun, W. 1995. “An empirical and ethical analysis of factors
motivating managers’ merger decisions,” Journal of Business Ethics. 14: 855–
Bhide, A., and H. H. Stevenson, 1990. “Why be honest if honesty doesn’t pay,”
Harvard Business Review September-October, pages 121–129.
Bloomenthal, Harold S., 2002. Sarbanes-Oxley Act in Perspective St. Paul, MN:
West Group.
Boatright, J.R., 1999. Ethics in Finance, Oxford: Blackwell Publishers. Page xxxix
Bowie, N.E., “A Kantian approach to business ethics,” in R. E. Frederick, ed., A
Companion to Business Ethics, Malden, MA: Blackwell pages 3–16.
Carroll, A. B., 1999. “Ethics in management,” in R. E. Frederick, ed., A Companion to Business Ethics, Malden, MA: Blackwell, pages 141–152.
CFA Institute, 2014. “Code of Ethics and Standards of Professional Conduct,” Charlottesville, VA: CFA Institute.
Frederick, R.E., ed., 1999. A Companion to Business Ethics, Oxford: Blackwell Publishers.
Freeman, R.E., 1984. Strategic Management: A Stakeholder Approach, Boston: Pitman.
Friedman, M., 1962. Capitalism and Freedom, Chicago: University of Chicago Press.
General Electric Company, 2005. “Integrity: The Spirit and Letter of Our Commitment” (June 2005).
Jensen, M., 2005. “The agency costs of overvalued equity,” Financial Management
(Spring): 5–19.
Kidder, R., 1997. “Ethics and the bottom line: Ten reasons for businesses to do right,”
Insights on Global Ethics, Spring, pages 7–9.
Murphy, P.E. 1997. “80 Exemplary Ethics statements,” cited in L.H. Newton, “A
passport for the corporate code: from Borg Warner to the Caux Principles,” in
Robert E. Frederick, A Companion to Business Ethics, Malden, MA: Blackwell,
1999, pages 374–385.
Nash, L.L. 1981. “Ethics without the sermon,” Harvard Business Review, November-
December, pages 79–90.
Paine, L.S. 1994. “Managing for organizational integrity,” Harvard Business Review,
March-April, 106–117.
Paine, L.S., 1999. “Law, ethics, and managerial judgment,” in R. E. Frederick, ed., A Companion to Business Ethics, Malden, MA: Blackwell, pages 194–206.
Paine, L.S., 2003. Value Shift: Why Companies Must Merge Social and Financial Imperatives to Achieve Superior Performance, New York: McGraw-Hill.
Pulliam, S., 2003. “A staffer ordered to commit fraud balked, and then caved,” The Wall Street Journal, June 23, 2003, page A1.
Sen, A. 1987. On Ethics and Economics Oxford: Blackwell Publishers.
Shafer, W. 2002. “Effects of materiality, risk, and ethical perceptions on fraudulent
reporting by financial executives,” Journal of Business Ethics. 38(3): 243–263.
Solomon, R., 1999. “Business ethics and virtue,” in R. E. Frederick, ed., A Companion to Business Ethics, Malden, MA: Blackwell, pages 30–37.
Solomon, D., 2003. “WorldCom moved expenses to the balance sheet of MCI,” The Wall Street Journal, March 31, 2003.
Werhane, P. 1988. “Two ethical issues in Mergers and Acquisitions,” Journal of
Business Ethics 7, 41–45.
Werhane, P. 1990. “Mergers, acquisitions, and the market for corporate control,”
Public Affairs Quarterly 4(1): 81–96.
Werhane, P. 1997. “A note on moral imagination.” Charlottesville VA: Darden Case
Collection, catalogue number UVA-E-0114.
Werhane, P., 1999. “Business ethics and the origins of contemporary capitalism: economics and ethics in the work of Adam Smith and Herbert Spencer,” in R. E. Frederick, ed., A Companion to Business Ethics, Malden, MA: Blackwell.
Wicks, A., 2003. “A note on ethical decision making.” Charlottesville VA: Darden
Case Collection catalogue number UVA-E-0242.
i Sen (1987) and Werhane (1999) have argued that Smith’s masterpiece, Wealth of Nations, is incorrectly construed as a justification for self-interest, and that it speaks more broadly about virtues such as prudence, fairness, and cooperation.
ii Wicks (2003), page 5.
iii Oxford English Dictionary, 1989. Vol. IX, page 1068.
iv Ibid., Vol. V, page 421.
v “The Con of the Century,” The Economist, December 18, 2008.
vi Mark Seal, “Madoff’s World,” Vanity Fair, April 2009.
vii Henriques, Diana, and Zachery Kouwe, “Prominent Trader Accused of Defrauding Clients,” New York Times, December 11, 2008.
viii “Investigation of Failure of the SEC to Uncover Bernard Madoff’s Ponzi Scheme,” U.S. Securities and Exchange Commission, Office of Investigations, August 31, 2009.
ix “The Financial Crisis Inquiry Report,” 2011, Philip Angelides, Chair, pages xxii, xxiii, and xxvii–xxviii.
x For further information on the Valeant case study, see Stephen Gandel, “What Caused Valeant’s Epic 90% Plunge,” Fortune, March 20, 2016. And see Miles Johnson, “What went wrong with Ackman and Valeant—the alternative edition,” Financial Times, March 30, 2017.
xi Wicks (2003), page 11.
xii Quoted in K.V. Salwen, 1991. “SEC Chief’s criticism of ex-managers of Salomon suggests civil action is likely,” Wall Street Journal, Nov. 20, A10.
xiii Paine (1999), pages 194–195.
xiv Bhide and Stevenson, 1990, pages 127–128.
xv The Utilitarian philosophers, Jeremy Bentham (1748–1832), James Mill (1773–1836), and John Stuart Mill (1806–1873), argued that the utility (or usefulness) of ideas, actions, and institutions could be measured in terms of their consequences.
xvi The philosopher Immanuel Kant (1724–1804) sought a foundation for ethics in the purity of one’s motives.
xvii Bowie (1999), page 13.
xviii This view originates in ancient Greek philosophy, starting from Socrates, Plato, and Aristotle.
xix Russ Banham, “The Warren Buffett School,” Chief Executive, December 2002, downloaded May 19, 2003.
xx See New York Central v. United States, 212 US 481.
xxi Murphy (1997) compiles 80 exemplary ethics statements.
Source: J.P. Morgan Chase & Company, website:
orporate/About-JPMC/ab-code-of-ethics.htm downloaded May 18, 2017.
Page 1
PART 1 Setting Some Themes
Page 3
CASE 1 Warren E. Buffett, 2015
On August 10, 2015, Warren E. Buffett, chair and CEO of Berkshire Hathaway Inc.,
announced that Berkshire Hathaway would acquire the aerospace-parts supplier
Precision Castparts Corporation (PCP). In Buffett’s largest deal ever, Berkshire would
purchase all of PCP’s outstanding shares for $235 per share in cash, a 21% premium
over the trading price a day earlier. The bid valued PCP’s equity at $32.3 billion. The
total transaction value would be $37.2 billion, including assuming PCP’s outstanding
debt—this was what analysts called the “enterprise value.” “I’ve admired PCP’s
operation for a long time. For good reasons, it is the supplier of choice for the world’s
aerospace industry, one of the largest sources of American exports,” Buffett said. After
the announcement, Berkshire Hathaway’s Class A shares moved down 1.1% at market
open, a loss in market value of $4.05 billion. PCP’s share price jumped 19.2% at the
news; the S&P 500 Composite Index opened up 0.2%. Exhibit 1.1 illustrates the recent
share-price performance for Berkshire Hathaway, PCP, and the S&P 500 Index.
Exhibit 1.2 presents recent consolidated financial statements for Berkshire Hathaway.
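The deal arithmetic in the paragraph above can be reproduced directly from the figures given. The following is a back-of-the-envelope sketch: the implied pre-bid price, share count, and assumed-debt figure are derived from the case numbers, not separately disclosed amounts.

```python
offer_per_share = 235.00   # cash offer per PCP share
premium = 0.21             # premium over the prior day's trading price
equity_value = 32.3e9      # value of PCP's equity at the bid
enterprise_value = 37.2e9  # total transaction value, including assumed debt

pre_bid_price = offer_per_share / (1 + premium)     # ~ $194 per share
implied_shares = equity_value / offer_per_share     # ~ 137 million shares
implied_debt = enterprise_value - equity_value      # ~ $4.9 billion assumed

print(f"Pre-bid price ≈ ${pre_bid_price:.2f}")
print(f"Implied shares outstanding ≈ {implied_shares / 1e6:.0f} million")
print(f"Debt assumed ≈ ${implied_debt / 1e9:.1f} billion")
```

The enterprise-value figure quoted by analysts is simply the equity value of the bid plus the debt Berkshire would assume, which is why the two totals in the paragraph differ by about $4.9 billion.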
EXHIBIT 1.1 | Relative Share Price Performance of Berkshire Hathaway Class A Share, PCP, and
the S&P 500, January 1, 2015, to August 13, 2015
Note: PCP = Precision Castparts; BRK.A = Berkshire Hathaway Class A shares; S&P500 = Standard & Poor’s 500 Index.
Data source: Google Finance.
EXHIBIT 1.2 | Berkshire Hathaway Condensed Consolidated Financial Statements
Data source: Factset.
The acquisition of PCP, Berkshire Hathaway’s largest deal ever, renewed public interest in its sponsor, Buffett. In many ways, he was an anomaly. One of the richest individuals in the world (with an estimated net worth of about $66.5 billion according to Forbes), he was also respected and even beloved. Though he had accumulated perhaps the best investment record in history (a compound annual increase in wealth for Berkshire Hathaway of 21.6% from 1965 to 2014), Berkshire Hathaway paid him only $100,000 per year to serve as its CEO. While Buffett and other insiders controlled 39.5% of Berkshire Hathaway, he ran the company in the interests of all shareholders. “We will not take cash compensation, restricted stock, or option grants that would make our results superior to [those of Berkshire’s investors],” Buffett said. “I will keep well over 99% of my net worth in Berkshire. My wife and I have never sold a share nor do we intend to.”
Buffett was the subject of numerous laudatory articles and at least eight biographies,
yet he remained an intensely private individual. Although acclaimed by many as an
intellectual genius, he shunned the company of intellectuals and preferred to affect the
manner of a down-home Nebraskan (he lived in Omaha) and a tough-minded investor. In
contrast to the investment world’s other “stars,” Buffett acknowledged his investment
failures both quickly and publicly. Although he held an MBA from Columbia University
and credited his mentor, Benjamin Graham, with developing the philosophy of value-based investing that had guided Buffett to his success, he chided business schools for the
irrelevance of their finance and investing theories.
Numerous writers sought to distill the essence of Buffett’s success. What were the
key principles that guided Buffett? Could those principles be applied broadly in the 21st
century, or were they unique to Buffett and his time? By understanding those principles,
analysts hoped to illuminate the acquisition of PCP. What were Buffett’s probable
motives in the acquisition? What did Buffett’s offer say about his valuation of PCP, and
how would it compare with valuations for other comparable firms? Would Berkshire’s
acquisition of PCP prove to be a success? How would Buffett define success?
Berkshire Hathaway Inc.
Berkshire Hathaway was incorporated in 1889 as Berkshire Cotton Manufacturing and
eventually grew to become one of New England’s biggest textile producers, accounting
for 25% of U.S. cotton-textile production. In 1955, Berkshire Cotton Manufacturing
merged with Hathaway Manufacturing and began a secular decline due to inflation,
technological change, and intensifying competition from foreign rivals. In 1965, Buffett
and some partners acquired control of Berkshire Hathaway, believing that its financial
decline could be reversed.
Over the next 20 years, it became apparent that large capital investments would be
required for the company to remain competitive, and that even then the financial returns
would be mediocre. Fortunately, the textile group generated enough cash in the
early years to permit the firm to purchase two insurance companies
headquartered in Omaha: National Indemnity Company and National Fire & Marine
Insurance Company. Acquisitions of other businesses followed in the 1970s and 1980s;
Berkshire Hathaway exited the textile business in 1985.
The investment performance of a share in Berkshire Hathaway had astonished most
observers. As shown in Exhibit 1.3, a $100 investment in Berkshire Hathaway stock on
September 30, 1976, would compound to a value of $305,714 as of July 31, 2015,
approximately 39 years later. The investment would result in a 305,614% cumulative
return, 22.8% when annualized. Over the same period, a $100 investment in the S&P 500 would compound to a value of $1,999 for a cumulative return of 1,899.1%, or 8.0% annualized.
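The annualized figures above follow from the standard compound-growth relation, CAGR = (ending value / beginning value)^(1/years) − 1. A quick check of the case’s numbers, treating the holding period as the case’s round 39 years:

```python
def cagr(begin_value, end_value, years):
    """Compound annual growth rate over the holding period."""
    return (end_value / begin_value) ** (1 / years) - 1

# $100 in Berkshire Hathaway, Sept. 30, 1976, grows to $305,714 by July 31, 2015
brk = cagr(100, 305_714, 39)   # ~22.8% annualized
# $100 in the S&P 500 grows to $1,999 over the same period
sp500 = cagr(100, 1_999, 39)   # ~8.0% annualized
print(f"Berkshire: {brk:.1%}, S&P 500: {sp500:.1%}")
```

The nearly 15-percentage-point annual gap, compounded over four decades, is what produces the roughly 150-fold difference in ending wealth.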
EXHIBIT 1.3 | Berkshire Hathaway Class A Shares versus S&P 500 Index over 39 Years
Note: Period listed as 2015 represents January 1, 2015 to July 31, 2015.
Data source: Yahoo! Finance.
In 2014, Berkshire Hathaway’s annual report described the firm as “a holding company owning subsidiaries engaged in a number of diverse business activities.” Berkshire Hathaway’s portfolio of businesses included:
Insurance: Insurance and reinsurance of property and casualty risks worldwide, in addition to reinsurance of life, accident, and health risks worldwide (e.g., GEICO, General Re).
Railroad: A long-lived asset with heavy regulation and high capital intensity, the
company operated one of the largest railroad systems in North America (i.e., BNSF).
Utilities and Energy: Generation, transmission, storage, distribution, and supply of energy through the subsidiary Berkshire Hathaway Energy Company.
Manufacturing: Numerous and diverse manufacturing businesses were grouped into
three categories: (1) industrial products, (2) building products, and (3) consumer
products (e.g., Lubrizol, PCP).
Service and Retailing: Providers of numerous services, including fractional aircraft-ownership programs, aviation pilot training, electronic-components distribution, and various retailing businesses, including automotive dealerships (e.g., NetJets, Nebraska Furniture Mart).
Finance and Financial Products: Manufactured housing and related consumer financing; transportation equipment manufacturing and leasing; and furniture leasing (e.g., Clayton Homes, UTLX, XTRA).
Exhibit 1.4 gives a summary of revenues, operating profits, capital expenditures, depreciation, and assets for Berkshire Hathaway’s various business segments. The company’s investment portfolio also included equity interests in numerous publicly traded companies, summarized in Exhibit 1.5.
EXHIBIT 1.4 | Business-Segment Information for Berkshire Hathaway Inc. (dollars in millions)
Source: SEC documents.
EXHIBIT 1.5 | Major Investees of Berkshire Hathaway (dollars in millions)
*Actual purchase price and tax basis; GAAP “cost” differs in a few cases because of write-ups or write-downs that have been required under GAAP rules.
**Excludes shares held by pension funds of Berkshire subsidiaries.
***Held under contract of sale for this amount.
Source: Berkshire Hathaway Inc. letter to shareholders, 2014.
Buffett’s Investment Philosophy
Warren Buffett was first exposed to formal training in investing at Columbia University, where he studied under Benjamin Graham. A coauthor of the classic text Security Analysis, Graham developed a method of identifying undervalued stocks (that is, stocks whose prices were less than their intrinsic value). This became the cornerstone of modern value investing. Graham’s approach was to focus on the value of assets, such as cash, net working capital, and physical assets. Eventually, Buffett modified that approach to focus also on valuable franchises that were unrecognized by the market.
Over the years, Buffett had expounded his philosophy of investing in his chair’s letter to shareholders in Berkshire Hathaway’s annual report. By 2005, those lengthy letters had acquired a broad following because of their wisdom and their humorous, self-deprecating tone. The letters emphasized the following elements:
1. Economic reality, not accounting reality. Financial statements prepared by
accountants conformed to rules that might not adequately represent the economic
reality of a business. Buffett wrote:
Because of the limitations of conventional accounting, consolidated reported
earnings may reveal relatively little about our true economic performance.
Charlie [Munger, Buffett’s business partner] and I, both as owners and managers,
virtually ignore such consolidated numbers . . . Accounting consequences do not
influence our operating or capital-allocation process.
Accounting reality was conservative, backward looking, and governed by generally
accepted accounting principles (GAAP), even though investment decisions should be
based on the economic reality of a business. In economic reality, intangible assets
such as patents, trademarks, special managerial expertise, and reputation might be
very valuable, yet, under GAAP, they would be carried at little or no value. GAAP
measured results in terms of net profit, while in economic reality the results of a
business were its flows of cash.
A key feature of Buffett’s approach defined economic reality at the level of the
business itself, not the market, the economy, or the security—he was a fundamental
analyst of the business. His analysis sought to judge the simplicity of the business, the
consistency of its operating history, the attractiveness of its long-term prospects, the
quality of management, and the firm’s capacity to create value.
2. The cost of the lost opportunity. Buffett compared an investment opportunity against
the next-best alternative, the lost opportunity. In his business decisions, he
demonstrated a tendency to frame his choices as either/or decisions rather than yes/no
decisions. Thus an important standard of comparison in testing the attractiveness of an
acquisition was the potential rate of return from investing in the common stocks of
other companies. Buffett held that there was no fundamental difference between buying
a business outright and buying a few shares of that business in the equity market. Thus
for him, the comparison of an investment against other returns available in the market
was an important benchmark of performance.
3. Embrace the time value of money. Buffett assessed intrinsic value as the present
value of future expected performance:
[All other methods fall short in determining whether] an investor is indeed
buying something for what it is worth and is therefore truly operating on the
principle of obtaining value for his investments . . . Irrespective of whether a
business grows or doesn’t, displays volatility or smoothness in earnings, or
carries a high price or low in relation to its current earnings and book value, the
investment shown by the discounted-flows-of-cash calculation to be the cheapest
is the one that the investor should purchase.
Enlarging on his discussion of intrinsic value, Buffett used an educational example:
We define intrinsic value as the discounted value of the cash that can be taken out
of a business during its remaining life. Anyone calculating intrinsic value
necessarily comes up with a highly subjective figure that will change both as
estimates of future cash flows are revised and as interest rates move. Despite its
fuzziness, however, intrinsic value is all important and is the only logical way to
evaluate the relative attractiveness of investments and businesses.
To see how historical input (book value) and future output (intrinsic value)
can diverge, let us look at another form of investment, a college education. Think
of the education’s cost as its “book value.” If it is to be accurate, the cost should
include the earnings that were foregone by the student because he chose college
rather than a job. For this exercise, we will ignore the important noneconomic
benefits of an education and focus strictly on its economic value. First, we must
estimate the earnings that the graduate will receive over his lifetime and subtract
from that figure an estimate of what he would have earned had he lacked his
education. That gives us an excess earnings figure, which must then be
discounted, at an appropriate interest rate, back to graduation day. The dollar
result equals the intrinsic economic value of the education. Some graduates will
find that the book value of their education exceeds its intrinsic value, which
means that whoever paid for the education didn’t get his money’s worth. In other
cases, the intrinsic value of an education will far exceed its book value, a result
that proves capital was wisely deployed. In all cases, what is clear is that book
value is meaningless as an indicator of intrinsic value.
To illustrate the mechanics of this example, consider the hypothetical case
presented in Exhibit 1.6. Suppose an individual has the opportunity to invest
$50 million in a business—this is its cost or book value. This business will throw off
cash at the rate of 20% of its investment base each year. Suppose that instead of
receiving any dividends, the owner decides to reinvest all cash flow back into the
business—at this rate, the book value of the business will grow at 20% per year.
Suppose that the investor plans to sell the business for its book value at the end of the
fifth year. Does this investment create value for the individual? One determines this by
discounting the future cash flows to the present at a cost of equity of 15%. Suppose
that this is the investor’s opportunity cost, the required return that could have been
earned elsewhere at comparable risk. Dividing the present value of future cash flows
(i.e., Buffett’s intrinsic value) by the cost of the investment (i.e., Buffett’s book
value) indicates that every dollar invested buys securities worth $1.23. Value
is created.
Consider an opposing case, summarized in Exhibit 1.7. The example is similar in
all respects, except for one key difference: the annual return on the investment is 10%.
The result is that every dollar invested buys securities worth $0.80. Value is destroyed.
EXHIBIT 1.6 | Hypothetical Example of Value Creation
Source: Author analysis.
EXHIBIT 1.7 | Hypothetical Example of Value Destruction
Source: Author analysis.
Comparing the two cases in Exhibits 1.6 and 1.7, the difference in value creation
and destruction is driven entirely by the relationship between the expected returns and
the discount rate: in the first case, the spread is positive; in the second case, it is
negative. Only in the instance where expected returns equal the discount rate will
book value equal intrinsic value. In short, book value or the investment outlay may not
reflect the economic reality. One needs to focus on the prospective rates of return, and
how they compare to the required rate of return.
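The two exhibits can be replicated with a few lines of arithmetic. Because all cash is reinvested, the only cash flow to the owner is the sale at book value at the end of year 5, so intrinsic value is just that terminal value discounted at the 15% cost of equity. This is a sketch of the case’s hypothetical, not Buffett’s own model:

```python
def value_per_dollar(book_value, return_on_book, cost_of_equity, years):
    """Present value of selling at book value after `years` of full
    reinvestment, expressed per dollar of initial investment."""
    terminal_book = book_value * (1 + return_on_book) ** years
    intrinsic = terminal_book / (1 + cost_of_equity) ** years
    return intrinsic / book_value

# Exhibit 1.6: 20% return vs. 15% cost of equity -> value created
# (about 1.237, which the case reports as $1.23 per dollar invested)
creation = value_per_dollar(50e6, 0.20, 0.15, 5)
# Exhibit 1.7: 10% return vs. 15% cost of equity -> value destroyed (~$0.80)
destruction = value_per_dollar(50e6, 0.10, 0.15, 5)
print(round(creation, 3), round(destruction, 3))
```

Because the terminal value grows at the return on book and is discounted at the cost of equity, the ratio collapses to ((1 + return) / (1 + cost of equity))^5, which makes plain why the sign of the spread between the two rates is all that matters.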
4. Measure performance by gain in intrinsic value, not accounting profit. Buffett wrote:
Our long-term economic goal . . . is to maximize Berkshire’s average annual rate
of gain in intrinsic business value on a per-share basis. We do not measure the
economic significance or performance of Berkshire by its size; we measure by
per-share progress. We are certain that the rate of per-share progress will
diminish in the future—a greatly enlarged capital base will see to that. But we
will be disappointed if our rate does not exceed that of the average large
American corporation.
The gain in intrinsic value could be modeled as the value added by a business
above and beyond the charge for the use of capital in that business. The gain in
intrinsic value was analogous to the economic-profit and market-value-added
measures used by analysts in leading corporations to assess financial performance.
Those measures focus on the ability to earn returns in excess of the cost of capital.
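Economic profit, as the measure is described above, charges the business for the capital it employs. A minimal sketch of the calculation; the figures are purely illustrative, not Berkshire’s:

```python
# Economic profit = invested capital x (return on capital - cost of capital)
invested_capital = 1_000.0  # $ millions (hypothetical)
roic = 0.12                 # return on invested capital (hypothetical)
wacc = 0.09                 # weighted average cost of capital (hypothetical)

economic_profit = invested_capital * (roic - wacc)
print(f"Economic profit: ${economic_profit:.0f} million")
```

A positive spread of returns over the cost of capital yields positive economic profit and a gain in intrinsic value; a negative spread destroys value even if accounting profit is positive.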
5. Set a required return consistent with the risk you bear. Conventional academic and
practitioner thinking held that the more risk one took, the more one should get paid.
Thus discount rates used in determining intrinsic values should be determined by the
risk of the cash flows being valued. The conventional model for estimating the cost of
equity capital was the capital asset pricing model (CAPM), which added a risk
premium to the long-term risk-free rate of return, such as the U.S. Treasury bond yield.
In August 2015, a weighted average of Berkshire Hathaway’s cost of equity and debt
capital was about 0.8%.
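Under the CAPM described above, the required return on equity is the risk-free rate plus beta times the market risk premium. The sketch below uses the case’s 30-year Treasury yield of 2.89% and PCP’s post-announcement beta of 0.38; the 6% equity-market risk premium is an assumed illustrative value, not a figure from the case:

```python
def capm_cost_of_equity(risk_free, beta, market_risk_premium):
    """CAPM: required return = rf + beta * (E[Rm] - rf)."""
    return risk_free + beta * market_risk_premium

rf = 0.0289      # 30-year U.S. Treasury yield, August 2015 (from the case)
beta_pcp = 0.38  # PCP beta measured after the announcement (from the case)
mrp = 0.06       # assumed equity-market risk premium (illustrative)

ke = capm_cost_of_equity(rf, beta_pcp, mrp)
print(f"CAPM cost of equity for PCP ≈ {ke:.2%}")
```

With such a low beta, the CAPM rate sits only a couple of percentage points above the Treasury yield, which helps frame the debate below over Buffett’s use of the risk-free rate itself.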
Buffett departed from conventional thinking by using the rate of return on the long-term
(e.g., 30-year) U.S. Treasury bond to discount cash flows—in August 2015, the
yield on the 30-year U.S. Treasury bond was 2.89%. Defending this practice, Buffett
argued that he avoided risk, and therefore should use a risk-free discount rate.
His firm used little debt financing. He focused on companies with predictable
and stable earnings. He or his vice chair, Charlie Munger, sat on the boards of
directors, where they obtained a candid inside view of the company and could
intervene in management decisions if necessary. Buffett once said, “I put a heavy
weight on certainty. If you do that, the whole idea of a risk factor doesn’t make sense to
me. Risk comes from not knowing what you’re doing.” He also wrote:
We define risk, using dictionary terms, as “the possibility of loss or injury.”
Academics, however, like to define “risk” differently, averring that it is the
relative volatility of a stock or a portfolio of stocks—that is, the volatility as
compared to that of a large universe of stocks. Employing databases and
statistical skills, these academics compute with precision the “beta” of a stock—
its relative volatility in the past—and then build arcane investment and capital
allocation theories around this calculation. In their hunger for a single statistic to
measure risk, however, they forget a fundamental principle: it is better to be
approximately right than precisely wrong.
6. Diversify reasonably. Berkshire Hathaway represented a diverse portfolio of
business interests. But Buffett disagreed with conventional wisdom that investors
should hold a broad portfolio of stocks in order to shed company-specific risk. In his
view, investors typically purchased far too many stocks rather than waiting for one
exceptional company. Buffett said:
Figure businesses out that you understand and concentrate. Diversification is
protection against ignorance, but if you don’t feel ignorant, the need for it goes
down drastically.
Page 10
7. Invest based on information, analysis, and self-discipline, not on emotion or
hunch. Buffett repeatedly emphasized awareness and information as the foundation for
investing. He said, “Anyone not aware of the fool in the market probably is the fool in
the market.” Buffett was fond of repeating a parable told to him by Graham:
There was a small private business and one of the owners was a man named
Market. Every day, Mr. Market had a new opinion of what the business was
worth, and at that price stood ready to buy your interest or sell you his. As
excitable as he was opinionated, Mr. Market presented a constant distraction to
his fellow owners. “What does he know?” they would wonder, as he bid them an
extraordinarily high price or a depressingly low one. Actually, the gentleman
knew little or nothing. You may be happy to sell out to him when he quotes you a
ridiculously high price, and equally happy to buy from him when his price is
low. But the rest of the time, you will be wiser to form your own ideas of the
value of your holdings, based on full reports from the company about its
operation and financial position.
Buffett used this allegory to illustrate the irrationality of stock prices as compared
to true intrinsic value. Graham believed that an investor’s worst enemy was not the
stock market, but oneself. Superior training could not compensate for the absence of
the requisite temperament for investing. Over the long term, stock prices should have a
strong relationship with the economic progress of the business. But daily market
quotations were heavily influenced by momentary greed or fear and were an
unreliable measure of intrinsic value. Buffett said:
As far as I am concerned, the stock market doesn’t exist. It is there only as a
reference to see if anybody is offering to do anything foolish. When we invest in
stocks, we invest in businesses. You simply have to behave according to what is
rational rather than according to what is fashionable.
Accordingly, Buffett did not try to “time the market” (i.e., trade stocks based on
expectations of changes in the market cycle)—his was a strategy of patient, long-term
investing. As if in contrast to Mr. Market, Buffett expressed more contrarian goals:
“We simply attempt to be fearful when others are greedy and to be greedy only when
others are fearful.” Buffett also said, “Lethargy bordering on sloth remains the
cornerstone of our investment style,” and “The market, like the Lord, helps those
who help themselves. But unlike the Lord, the market does not forgive those who know
not what they do.”
8. Look for market inefficiencies. Buffett scorned the academic theory of capital-market efficiency. The efficient-markets hypothesis (EMH) held that publicly known
information was rapidly impounded into share prices, and that as a result, stock prices
were fair in reflecting what was known about the company. Under EMH, there were
no bargains to be had, and trying to outperform the market would be futile. “It has been
helpful to me to have tens of thousands turned out of business schools that taught that it
didn’t do any good to think,” Buffett said.
I think it’s fascinating how the ruling orthodoxy can cause a lot of people to think the earth is flat. Investing in a market where people believe in efficiency is like playing bridge with someone who’s been told it doesn’t do any good to look at the cards.
9. Align the interests of agents and owners. Explaining his significant ownership interest in Berkshire Hathaway, Buffett said, “I am a better businessman because I am an investor. And I am a better investor because I am a businessman.” As if to illustrate this sentiment, he said:
A managerial “wish list” will not be filled at shareholder expense. We will not diversify by purchasing entire businesses at control prices that ignore long-term economic consequences to our shareholders. We will only do with your money what we would do with our own, weighing fully the values you can obtain by diversifying your own portfolios through direct purchases in the stock market.
For four out of six Berkshire directors, more than 50% of the family net worth was represented by shares in Berkshire Hathaway. The senior managers of Berkshire Hathaway subsidiaries either held shares in the company or were compensated under incentive plans that imitated the potential returns from an equity interest in their business unit, or both.
Precision Castparts
“In the short run, the market is a voting machine but in the long run, it is a weighing machine.”
—Benjamin Graham
The vote was in and the market’s reaction to Berkshire Hathaway’s acquisition of PCP indicated disapproval. The market ascribed $4.05 billion less value to Berkshire Hathaway after the announced acquisition than before it. At the same time, the value of PCP jumped more than $5 billion, close to 20% of the market value of the firm. The market seemed to be saying that Buffett and Berkshire had overpaid for the business.
Buffett didn’t seem to think so. And despite his age, he didn’t appear to be slowing
down. PCP was the largest acquisition in a string of large purchases over the past
several years, including Duracell, Kraft, Heinz, and Burlington Northern Santa Fe,
totaling more than $70 billion in deal value in all. These acquisitions, along with many
more over the years, followed a similar blueprint (Exhibit 1.8). The gist of the
acquisition criteria seemed to be relatively straightforward—Berkshire Hathaway
looked for well-run businesses producing consistent results offered at a fair price. As
Berkshire Hathaway stated in its press release following the PCP acquisition:
PCP fits perfectly into the Berkshire model and will substantially increase our normalized per-share earning power. Under CEO Mark Donegan, PCP has become the world’s premier supplier of aerospace components (most of them destined to be original equipment, though spares are important to the company as well). Mark’s accomplishments remind me of the magic regularly performed by Jacob Harpaz at IMC, our remarkable Israeli manufacturer of cutting tools. The two men transform very ordinary raw materials into extraordinary products that are used by major manufacturers worldwide. Each is the da Vinci of his craft. PCP’s products, often delivered under multiyear contracts, are key components in most large aircraft.
EXHIBIT 1.8 | Berkshire Hathaway Acquisition Criteria
Source: Berkshire Hathaway Inc. annual report, 2014.
PCP manufactured complex metal components and products for very specific applications, mainly in the critical aerospace and power markets. The components were used in products with highly complex engineering processes, such as large jet-aircraft engines. Its customer base was concentrated and sophisticated, including General Electric, Pratt & Whitney, and Rolls-Royce, for whom PCP had been supplying castings for multiple decades.
Exhibit 1.9 presents PCP’s income statement and balance sheet ending March 31,
2015. Exhibit 1.10 provides financials on comparable firms. Exhibit 1.11 provides
valuation multiples for comparable firms. The beta of PCP, measured after the
acquisition announcement, was 0.38.
EXHIBIT 1.9 | PCP Consolidated Financial Statements
*Note – Fiscal year ends March 31. Period listed as 2015 represents March 31, 2014 to March 31, 2015
Note: The market value of PCP’s equity shortly before the announcement of the acquisition by Berkshire Hathaway was
$31,208 million.
Data source: Edgar.
Excludes restructuring charges.
Excludes equity in unconsolidated investments.
Excludes noncontrolling interests.
EXHIBIT 1.10 | Comparable Firms
Note: Dollar values are in millions except for share prices and dividends per share, which are in dollar units. Shares
outstanding (O/S) are stated in millions.
ALCOA, INC., engages in lightweight metals engineering and manufacturing. Its products are used worldwide in aircraft,
automobiles, commercial transportation, packaging, oil and gas, defense, and industrial applications.
LISI SA engages in the manufacturing of multifunctional fasteners and assembly components for three business
sectors: Aerospace, Automotive, and Medical.
THYSSENKRUPP AG engages in the production of steel. The Components Technology business area offers
components for the automotive, construction, and engineering sectors.
ALLEGHENY TECHNOLOGIES, INC., engages in the manufacture of specialty materials and components for different
industries, which include aerospace and defense, oil and gas, and chemical processing, as well as electrical energy.
CARPENTER TECHNOLOGY CORP. engages in developing, manufacturing, and distributing cast/wrought and powder-metal
stainless steels. It operates through the Specialty Alloys Operations and Performance Engineered Products segments.
Data sources: Company reports; Factset.
EXHIBIT 1.11 | Valuation of PCP Based on Multiples for Comparable Firms
Data Source: Factset.
The announcement of Berkshire Hathaway’s acquisition of PCP prompted some critical
commentary. The Economist magazine wrote,
But [Buffett] is far from a model for how capitalism should be transformed. He is
a careful, largely ethical accumulator of capital invested in traditional businesses,
preferably with oligopolistic qualities, whereas what America needs right now is
more risk-taking, lower prices, higher investment and much more competition. You
won’t find much at all about these ideas in Mr. Buffett’s shareholder letters.
Conventional thinking held that it would be difficult for Warren Buffett to maintain
his record of 21.6% annual growth in shareholder wealth. Buffett acknowledged that
“a fat wallet is the enemy of superior investment results.” He stated that it was the
firm’s goal to meet a 15% annual growth rate in intrinsic value. Would the PCP
acquisition serve Berkshire Hathaway’s long-term goals? Was the bid price
appropriate? How did Berkshire Hathaway’s offer measure up against the company’s
valuation implied by the multiples for comparable firms? Did Berkshire Hathaway
overpay for PCP? Was the market’s reaction rational?
Or did Buffett pay a fair price for a great business? If so, what determines a fair
price? What makes a great business? And why would Berkshire Hathaway be interested
in buying PCP? Why would PCP be interested in selling itself to Berkshire Hathaway?
What value did Berkshire Hathaway bring to the equation?
The calculation of the implied values for PCP based on the median of the peer firms’ multiples multiplies the median
value of the multiples of comparable firms (line 8) by the relevant base (revenue, EBITDA, EBIT, net income, or book
value) for PCP. The same method is used for the calculation of the implied value based on the average or mean of the
peer firms’ multiples (line 9). For instance, the implied value based on the median multiple of EBIT ($37,755 million)
is derived by multiplying 14.51 (the median EBIT multiple for the comparable firms) by $2,602 million (the EBIT of
PCP).
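The note’s method reduces to a one-line calculation. The sketch below uses the case figures (PCP EBIT of $2,602 million and a median comparable-firm EBIT multiple of 14.51×); the same arithmetic applies to each base metric in Exhibit 1.11.

```python
# Implied value from a comparable-firm multiple, as described in the note above.
def implied_value(base_metric: float, peer_multiple: float) -> float:
    """Value implied by applying a peer-group multiple to a company's base metric."""
    return base_metric * peer_multiple

pcp_ebit = 2_602              # $ millions (Exhibit 1.9)
median_ebit_multiple = 14.51  # median for comparable firms (Exhibit 1.11, line 8)
print(round(implied_value(pcp_ebit, median_ebit_multiple)))  # 37755 ($ millions)
```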

The Battle for Value, 2016: FedEx Corp. versus
United Parcel Service, Inc.
2015 was a transformative year for FedEx with outstanding financial results, more
powerful customer solutions, and actions to generate increased long-term value for
shareowners. We believe FedEx is on a dynamic trajectory that will make 2016 very
successful. Our company has never been better positioned to build shareowner value.
—FedEx CEO Frederick W. Smith, Annual Report 2015
Our 2015 results demonstrate that UPS can thrive in [today’s] challenging
environment, as shown by our continued ability to meet the expectations of customers
and investors alike. The continued execution of our proven strategies will enable
UPS to maintain positive momentum in the coming year and beyond.
—UPS CEO David Abney, Annual Report 2015
On April 29, 2016, FedEx Corp., the American courier delivery company, received
final government approval on its bid to acquire TNT Express (TNT), a Dutch logistics
and delivery firm with road and air delivery services all over the world, for $4.8
billion. Ever since FedEx made the public bid to acquire TNT, many industry insiders
expected TNT’s strong European road network to bolster FedEx’s presence in a region
and market in which it had failed to compete with its long-standing rival, United Parcel
Service, Inc. (UPS), for a bigger share of the world’s ever-increasing e-commerce
market. The approval came as a bitter blow to American package-delivery rival
UPS, which had tried to buy TNT in 2013, only to be blocked by European Union
regulators who viewed the potential merger as obstructing healthy competition. Still,
UPS had plenty to celebrate. The company had just announced record first-quarter sales
of $14.4 billion, up 3.2% over the same quarter the previous year, driven by growth in
both its domestic and international small-package segments. The company was starting
to see its recent investments in technology and productivity improvements pay off, with
its cost per package falling 1.9% for the same period. This was impressive for a
company whose return on equity the previous year was a whopping 210%.
Against this backdrop, industry observers wondered how the titanic struggle
between FedEx and UPS would develop, particularly for investors in the two firms.
Was the performance of the companies in recent years predictive of the future?
International reach and extensive logistics services were widely seen as the litmus test
for corporate survival of delivery companies in the new millennium. Which company
was better positioned to attract the capital necessary to win this competitive battle?
United Parcel Service, Inc.
Founded in 1907, UPS was the largest package-delivery company in the world.
Consolidated parcel delivery, both on the ground and through the air, was the primary
business of the company, although increasingly the company offered more-specialized
transportation and logistics services.
Known in the industry as “Big Brown,” UPS had its roots in Seattle, Washington,
where 19-year-old Jim Casey started a bicycle-messenger service called American
Messenger Company. After merging with a rival firm, Motorcycle Delivery Company,
the company focused on department-store deliveries, and that remained true until the
1940s. Renamed United Parcel Service of America, UPS started an air-delivery service
in 1929 by putting packages on commercial passenger planes. The company entered its
strongest period of growth during the post–World War II economic boom and, by 1975,
UPS had reached a milestone when it promised package delivery to every address in the
continental United States. That same year the company expanded outside the country
with its first delivery to Ontario, Canada. The following year, UPS began service in
West Germany with 120 of its trademark-brown delivery vans.
The key to the success of UPS, later headquartered in Atlanta, Georgia, was
efficiency. According to BusinessWeek, “Every route is timed down to the traffic light.
Each vehicle was engineered to exacting specifications. And the drivers . . . endure a
daily routine calibrated down to the minute.” But this demand for machinelike
precision met with resistance from UPS’s heavily unionized labor force.
For most of the company’s history, UPS stock was owned solely by UPS’s
managers, their families, former employees, or charitable foundations owned by UPS.
The company acted as the market maker for its own shares, buying or selling shares at
a fair market value determined by the board of directors each quarter. By the
end of the millennium, company executives determined that UPS needed the
added flexibility of publicly traded stock in order to pursue a more aggressive
acquisition strategy.
In November 1999, UPS became a public company through a public equity offering
and corporate reorganization. Before this reorganization, the financially and
operationally conservative company had been perceived as slow and plodding.
Although much larger than FedEx, UPS had been unable to effectively compete directly
in the overnight-delivery market, largely because of the enormous cost of building an air
fleet. But after going public, UPS initiated an aggressive series of acquisitions,
beginning with a Miami-based freight carrier operating in Latin America and a
franchise-based chain of stores providing packing, shipping, and mail services called
Mail Boxes Etc. (later renamed The UPS Store) with more than 4,300 domestic and
international locations.
More assertive than ever before, the UPS of the new millennium was the product of
extensive reengineering efforts and a revitalized business focus. Whereas the company
had traditionally been the industry’s low-cost provider, UPS now began investing
heavily in a full range of highly specialized business services. As a sign of this shift, the
company revamped its logo for the first time since 1961, emphasizing its activities in
the wider supply-chain industry. The expansive “What can brown do for you?”
campaign was also launched around this time to promote UPS’s business-facing
logistics and supply-chain services.
Another example was UPS’s extensive push into more complex industries like health
care. Health care logistics services (which were bucketed into the company’s supply-chain
and freight segments) allowed pharmaceutical and medical-device companies to
outsource their logistics to UPS pharmacists, who were able to fulfill, pack, and ship
customers’ orders from UPS’s worldwide health care warehouses, even when
medications included temperature specifications or required cross-border transport. By
2015, this segment had experienced huge growth and saw no signs of slowing in the face
of the world’s aging population, which increasingly wanted home delivery of health care products.
Alongside its health care offerings, UPS also looked to emerging markets for
growth. In 2014, CEO David Abney claimed that “growing internationally and
diversifying our customer base” across regions was a top priority for UPS. By 2015,
international package operations accounted for 21% of revenues. Exhibit 2.1 presents
segment (ground and express) and geographic (international and U.S. domestic) revenue
data for both FedEx and UPS. The company also invested in information technology to
improve its profitability. In 2013, for example, UPS launched cutting-edge
route-optimization software for its drivers that was intended to set the stage for
even more personalized service offerings and efficient deliveries when its rollout was
complete in 2017.
By 2015, UPS offered package-delivery services in more than 220 countries and
territories (with every address in the United States and Europe covered) and was
moving more than 18 million packages and documents through its network every day. Its
immense volumes in the higher-margin ground segment and aligned assets that served
both ground and express shipments gave it a margin advantage compared to FedEx. UPS
employed 440,000 people and had 104,926 vehicles and 650 jet aircraft. UPS reported
revenues of $58 billion and net profit of nearly $5 billion. Exhibit 2.2 provides recent
operating results for UPS.
EXHIBIT 2.1 | Revenues for FedEx and UPS by Business and Geography Segment (Millions)
*FedEx Services provides back-office support to FedEx’s three transportation segments and printing and retail support to
customers through FedEx Office.
Data source: Company SEC filings.
EXHIBIT 2.2 | Operating Results for UPS Inc. (period ending Dec. 31, in millions)
*Economic Profit (EVA) is calculated as EBIT × (1 − t) − CofC × (T. Debt + T. St. Eq.), where t = 40% and CofC = 8%.
Data source: Capital IQ, Morningstar, company annual reports.
FedEx Corporation
FedEx first took form as Fred Smith’s undergraduate term paper for a Yale University
economics class. Smith’s strategy dictated that FedEx would purchase the planes that it
required to transport packages, whereas all other competitors used the cargo space
available on passenger airlines. In addition to using his own planes, Smith’s key
innovation was a hub-and-spoke distribution pattern, which permitted cheaper and
faster service to more locations than his competitors could offer. In 1971, Smith
invested his $4 million inheritance and raised $91 million in venture capital to launch
the firm—the largest venture-capital start-up at the time.
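The hub-and-spoke advantage can be illustrated with a simple route count (a sketch, not case data): connecting n cities pairwise requires n(n − 1)/2 direct routes, while a single hub needs only one spoke route per city.

```python
# Route counts for point-to-point versus hub-and-spoke networks.
def point_to_point_routes(n_cities: int) -> int:
    """Direct routes needed to connect every pair of cities."""
    return n_cities * (n_cities - 1) // 2

def hub_and_spoke_routes(n_cities: int) -> int:
    """Spoke routes needed when all traffic transits one hub."""
    return n_cities

# With the 25 cities FedEx served on its first night of operation:
print(point_to_point_routes(25), hub_and_spoke_routes(25))  # 300 vs. 25
```

The gap widens quadratically as the network grows, which is why one hub could serve many more city pairs at lower cost.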
In 1973, on the first night of continuous operation, 389 FedEx employees delivered
186 packages overnight to 25 U.S. cities. In those early years, FedEx, then known as
Federal Express Corporation, experienced severe losses, and Smith was nearly ousted
from his chair position. By 1976, FedEx finally saw a modest profit of $3.6 million on
an average daily volume of 19,000 packages. Through the rest of the 1970s, FedEx
continued to grow by expanding services, acquiring more trucks and aircraft, and raising
capital. The formula was successful. In 1981, FedEx generated more revenue than any
other U.S. air-delivery company.
By 1981, competition in the industry had started to rise. Emery Air Freight began to
imitate FedEx’s hub system and to acquire airplanes, and UPS began to move into the
overnight air market. The United States Postal Service (USPS) positioned its overnight
letter at half the price of FedEx’s, but quality problems and FedEx’s “absolutely
positively overnight” ad campaign quelled that potential threat. In 1983, FedEx reached
$1 billion in revenues and seemed poised to own the market for express delivery.
During the 1990s, FedEx proved itself as an operational leader, even receiving the
prestigious Malcolm Baldrige National Quality Award from the president of the United
States. FedEx was the first company ever to win in the service category. Part of this
success could be attributed to deregulation and to operational strategy, but credit could
also be given to FedEx’s philosophy of “People-Service-Profit,” which reflected its
emphasis on customer focus, total quality management, and employee participation.
Extensive attitude surveying, a promote-from-within policy, effective grievance
procedures that sometimes resulted in a chat with Fred Smith himself, and an emphasis
on personal responsibility and initiative not only earned FedEx a reputation as a
great place to work, but also helped to keep the firm largely free of unions.
FedEx’s growth occurred within the context of fundamental change in the business
environment. Deregulation of the domestic airline industry after 1977 permitted larger
planes to replace smaller ones, thereby permitting FedEx to purchase several Boeing
727s starting in 1978, which helped reduce its unit costs. Deregulation of the trucking
industry also permitted FedEx to establish an integrated regional trucking system that
lowered its unit costs on short-haul trips, enabling the company to compete more
effectively with UPS. Rising inflation and global competitiveness compelled
manufacturers to manage inventories more closely and to emulate the just-in-time supply
programs of the Japanese, creating a heightened demand for FedEx’s rapid and carefully
monitored movement of packages. And, finally, technological innovations enabled
FedEx to achieve important advances in customer ordering, package tracking, and
process monitoring.
Despite making its name as the pioneer of the overnight-delivery market, FedEx
continued to expand beyond its lower-margin express offerings throughout the first
decade of the 2000s. In addition to purchasing Kinko’s 1,200 retail stores and
eventually rebranding them as FedEx Office (a full-service print-and-ship retail chain),
in 2012, FedEx started to move its capex focus from its crown-jewel express segment
(where capital expenditures from 2013 to 2015 were mainly used to modernize its
outdated fleet) to higher-margin ground services in order to increase capacity in its U.S.
ground network. By 2015, these efforts had paid off: FedEx Ground’s revenues had
grown significantly over the past five years, and the company was providing faster
deliveries to more U.S. locations than its competition, in large part due to its
industry-leading automation-optimized efficiency. The ground segment’s independent operation
of drivers and trucks as separate from its parallel express-network assets, however,
gave rival UPS and its integrated asset system the margin advantage.
By the end of 2015, FedEx had net income of over $1 billion on revenues of about
$48 billion. Exhibit 2.3 provides recent operating results for FedEx. FedEx Express’s
aircraft fleet consisted of 647 aircraft, FedEx Ground had about 95,000 ground vehicles
and trailers, and FedEx Freight operated approximately 65,000 vehicles and trailers.
The company operated with more than 325,000 team members and handled more than 11
million packages daily across its ground and express services.
EXHIBIT 2.3 | Operating Results for FedEx Corp. (period ending May 31, in millions)
*Economic Profit (EVA) is calculated as EBIT × (1 − t) − CofC × (T. Debt + T. St. Eq.), where t = 40% and CofC = 8%.
Data source: Capital IQ, Morningstar, company annual reports.
The U.S. Delivery Market: Changing Shape
Barclays estimated the 2015 U.S. package-delivery market to be $90 billion. The
market was commonly segmented along three dimensions: weight, mode of transit, and
timeliness of service. The weight categories consisted of letters (weighing 0−2.0
pounds), packages (2.0−150 pounds), and freight (over 150 pounds). The mode-of-transit
categories were air (i.e., express) and ground. Time categories were
overnight, deferred delivery (second-day delivery), three-day delivery, and, lastly,
regular delivery, which occurred four or more days after pickup.
The rise of e-commerce had created a colossal shift in package-delivery density, as
low-density residential deliveries from e-commerce sales had overtaken higher-density
business-to-business package deliveries that had once driven sales at the large shipping
companies. As online retailers outpaced their brick-and-mortar peers, e-commerce
sales skyrocketed; in 2015 alone, e-commerce sales grew 14.6%, according to the U.S.
Department of Commerce. Many believed that FedEx’s package volume was poised to
benefit most from this growth due to the numerous online retailers that employed FedEx
for timely deliveries, but recently it was UPS that had the upper hand, with market share
of 54% for U.S. e-commerce shipments in 2014, leaving FedEx with 30%, and USPS
the remaining 16%.
As the booming e-commerce market grew, many high-volume e-tailers, such as
Amazon, commanded bigger discounts from their shipping partners. In 2012, Amazon
had launched Amazon Logistics, with its own delivery-van network. A growing number
of retailers, such as Wal-Mart and Amazon, were even starting to explore unmanned
aerial vehicles as a potential alternative means of delivery. By 2014, Amazon was
upstaging its private shipping vendors by offering Sunday delivery and same-day
delivery service in various cities through the USPS. As retailers looked for downstream
solutions to managing deliveries, an expectation arose that in a delivery market already
polarized between high-value, next-day-guaranteed services and economy options, the
economy segment could lose ground to retailers’ own initiatives.
Others expected shippers to experience a potential shift in demand away from pricier
express deliveries as consumers favored free shipping on their online orders through
ground service.
Amid these mixed expectations for future demand, a closer look at the industry’s 2015
revenues in the United States revealed that the air-express segment’s revenues were
fairly evenly split across FedEx and UPS, whereas in the ground segment, UPS reaped
the majority of sales. See Figure 2.1.
Although higher-margin ground operations were attractive to shippers,
complications in the segment arose from the lower density of residential deliveries
common among ground orders. To continue to grow their ground operations without
focusing on those low-density last-mile trips, both UPS and FedEx contracted USPS’s
Parcel Select Ground service, which helped businesses move shipments at the back end
of their deliveries. Through the service, the private companies delivered packages to
the local post office, after which USPS handled the last-mile drop-off. FedEx
referred to this partnership with USPS, which launched in 2009, as SmartPost,
while UPS’s version, launched in 2011, was offered as SurePost. The service
allowed the shipping companies to offer customers even cheaper pricing without
wasting van and driver resources.
FIGURE 2.1 | U.S. package market revenue share (%), by segment—2015.
Data source: Brandon Oglenski, Eric Morgan, and Van Kegel, “North American Transportation and Shipping Equity
Research,” Barclays, May 2, 2016: 31.
This similarity of execution on these partnered ground operations reflected the longstanding
competition between FedEx and UPS and their frequently parallel strategies.
Exhibit 2.4 provides a detailed summary of the major events marking the competitive
rivalry between FedEx and UPS. Significant dimensions of this rivalry included the following:
Customer focus. Both companies emphasized their focus on the customer. This meant
listening carefully to the customer’s needs, providing customized solutions rather than
standardized products, and committing to service relationships.
Pricing. The shipping rivals always moved in lockstep on parcel-pricing fees. In the
face of e-commerce retailers adopting the frequent use of large packages for
lightweight products, however, the shippers who priced parcels by weight alone
started to take a margin hit when those poorly priced packages took up valuable space
in delivery trucks. In order to maximize the profitability of e-commerce deliveries, in
May 2014, FedEx announced that it would start using dimensional weight to calculate
the billable price for all ground packages, effective at the start of 2015. UPS quickly
followed with the same announcement the following month.
Operational reengineering. Given the intense price competition, the reduction of unit
costs became a priority. Cost reduction was achieved through the exploitation of
economies of scale, investment in technology, and business-process reengineering,
which sought to squeeze unnecessary steps and costs out of the service process.
Information technology. Information management became central to the
operations of both UPS and FedEx. Every package handled by FedEx, for instance,
was logged into COSMOS (Customer, Operations, Service, Master Online System),
which transmitted data from package movements, customer pickups, invoices, and
deliveries to a central database at the Memphis, Tennessee, headquarters. UPS relied
on DIADs (Delivery Information Acquisition Devices), which were handheld units
that drivers used to scan package barcodes and record customer signatures.
Service expansion. FedEx and UPS increasingly pecked at each other’s service
offerings. In 2011, for example, UPS launched MyChoice, which allowed customers to
control the time of their deliveries online. FedEx quickly followed suit in 2013,
launching Delivery Manager, which allowed customers to schedule dates, times, and
locations of deliveries from their phones. FedEx even launched a repair shop for
devices like iPhones and Nooks in 2012, capitalizing on its retail space and existing
shipping capabilities.
Logistics services. The largest shipping innovations entailed offering integrated
logistics services to large corporate clients. These services were aimed at providing
total inventory control to customers, including purchase orders, receipt of goods,
order entry and warehousing, inventory accounting, shipping, and accounts receivable.
While this service line was initially developed as a model wherein the shippers
stored, tracked, and shipped across clients’ brick-and-mortar stores, these services
eventually expanded to include shipping directly to consumers, as in the health care
services described earlier.
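The dimensional-weight pricing described under “Pricing” above can be sketched as follows. The divisor of 166 cubic inches per pound is illustrative only; actual divisors vary by carrier, service, and year.

```python
# Dimensional-weight ("dim weight") billing: the billable weight of a parcel is
# the greater of its actual weight and its volume divided by a carrier-set
# divisor, so bulky-but-light packages no longer ride cheaply.
def billable_weight(actual_lb: float, length_in: float, width_in: float,
                    height_in: float, divisor: float = 166.0) -> float:
    dim_weight_lb = (length_in * width_in * height_in) / divisor
    return max(actual_lb, dim_weight_lb)

# A 3 lb pillow in a 20" x 16" x 12" box bills as roughly 23 lb, not 3 lb.
print(round(billable_weight(3, 20, 16, 12), 1))
```

A dense, compact package is unaffected: its actual weight already exceeds its dimensional weight, so the rule only raises prices on low-density parcels.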
EXHIBIT 2.4 | Timeline of Selective Competitive Developments

The impact of the fierce one-upmanship between FedEx and UPS was clearly reflected
in their respective investment expenditures. From 2010 to 2015, capital expenditures for
FedEx and UPS increased by 54% and 71%, respectively. During this period, FedEx’s
aggressive growth strategy, evident in its acquisitions and its investment in replacing its
relatively outdated Express aircraft fleet, pushed its capital expenditures to nearly double
those of “Big Brown,” which benefited from its more modern fleet.
International Package-Delivery Market
In 2015, the global parcel-shipping market was dominated by UPS, FedEx, and DHL,
with international services representing 22% and 28% of revenues for UPS and FedEx
that year, respectively. FedEx made significant investments in developing European
delivery capabilities in the 1980s before eventually relinquishing its European hub in
1992, causing it to rely on local partners to deliver to Europe for the ensuing decade. In
1995, FedEx expanded its routes in Latin America and the Caribbean, and later
introduced FedEx AsiaOne, a next-business-day service between Asian countries and
the United States via a hub in Subic Bay, Philippines.
UPS broke into the European market in earnest in 1988, with the acquisition of
10 European courier services. To enhance its international delivery systems, UPS
created a system that coded and tracked packages and automatically billed customers
for customs duties and taxes. In 2012, UPS expanded its European offerings by
purchasing Kiala, a European company that gave customers delivery options at nearby
shops and gas stations close to their homes, before replicating the service for
UK customers the following year. By 2015, the company planned to double its
investment in Europe to nearly $2 billion over five years.
Much like the U.S. domestic market, the international package-delivery market of the
first decade of the 2000s was given its greatest boost by the explosion of e-commerce.
Compared to same-country online shopping, cross-border shipping was only a fraction
of global e-commerce spending in 2015, but it was the piece that was growing most
quickly, at an annual rate of over 25%. Websites like Amazon Marketplace and Etsy
allowed shoppers to purchase goods from sellers all over the world, while expecting an
ease of shipping similar to that provided by domestic retailers. As a result of this
growing segment of online sales, FedEx, UPS, and others were quickly adapting their
service offerings to make cross-border shopping as smooth as possible. FedEx, for
example, purchased Bongo in 2014, later rebranded as FedEx Cross Border, which
aimed to help retailers face cross-border selling issues, including regulatory
compliance and credit-card-fraud protection, while connecting them to global consumers.
Performance Assessment
Virtually all interested observers—customers, suppliers, investors, and employees—
watched the competitive struggle between UPS and FedEx for hints about the next stage
of the drama. The conventional wisdom was that if a firm were operationally excellent,
strong financial performance would follow. Indeed, FedEx had set a goal of producing
“superior financial returns,” while UPS targeted “a long-term competitive return.”
Had the two firms achieved their goals? Moreover, did the trends in financial
performance suggest whether strong performance could be achieved in the future? In
pursuit of answers to those questions, the following exhibits afford several possible
avenues of analysis.
Financial Success
The success of the two companies could be evaluated based on a number of financial
and market performance measures. Exhibit 2.5 presents the share prices, earnings per
share (EPS), and price–earnings ratios for the two firms. Also included are the annual
total return from holding each share (percentage gain in share price plus dividend yield)
and the economic value added (EVA), reflecting the value created or destroyed each year
by deducting a charge for capital from the firm’s net operating profit after taxes.
Exhibits 2.2 and 2.3 present a variety of analytical ratios computed from the financial
statements of each firm.
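The two performance measures in Exhibit 2.5 can be sketched in a few lines. The EVA formula follows the exhibit footnotes (t = 40%, CofC = 8%); the input figures below are illustrative, not taken from the exhibits.

```python
# Total return and economic profit (EVA), per the measures described above.
def total_return(price_begin: float, price_end: float, dividends: float) -> float:
    """Annual total return: share-price gain plus dividend yield."""
    return (price_end - price_begin + dividends) / price_begin

def economic_profit(ebit: float, total_debt: float, total_equity: float,
                    tax_rate: float = 0.40, cost_of_capital: float = 0.08) -> float:
    """EVA: after-tax operating profit less a charge for the capital employed."""
    nopat = ebit * (1 - tax_rate)                       # net operating profit after taxes
    capital_charge = cost_of_capital * (total_debt + total_equity)
    return nopat - capital_charge

# Illustrative figures: a $100 share that ends the year at $110 and pays $2 in
# dividends; a firm with $7,000M EBIT, $20,000M debt, $3,000M equity.
print(total_return(100.0, 110.0, 2.0))        # 0.12, i.e., a 12% total return
print(economic_profit(7_000, 20_000, 3_000))  # 2360.0 ($ millions)
```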
EXHIBIT 2.5 | Financial and Market Performance
Data source: Google Finance, Morningstar, Value Line, and Capital IQ.
The thinking of the several securities analysts who followed FedEx and UPS in
2015 and 2016 reflected the uncertainty surrounding the future performance for the two
arch-rival shipping companies. Exhibits 2.6 and 2.7 contain excerpts from various
equity research reports that indicate the outlook analysts held for UPS and FedEx.
EXHIBIT 2.6 | Recent Equity Analysts’ Outlook for UPS
William Greene, Alexander Vecchio, and Diane Huang, “Greene/United Parcel Service—The Most Important Strategic
Question,” Morgan Stanley, No. 25356723 from Investext Current Reports database (accessed Feb. 3, 2017).
Kelly Dougherty, Matt Frankel, and Cleo Zagrean, “UPS—It’s All about the Peak,” Macquarie Group, No. 27249020 from
Investext Current Reports database (accessed Feb. 3, 2017).
Keith Schoonmaker, “UPS, Inc.,” Morningstar, No. 27249776 from Investext Current Reports database (accessed Feb.
3, 2017).
Source: Analyst reports from specified sources.
EXHIBIT 2.7 | Recent Equity Analysts’ Outlook for FedEx
Operational Success
Beyond their financial performance, the rival companies’ strengths and successes could
also be examined using various measures of operational excellence:
Keith Schoonmaker, “FedEx Corp.,” Morningstar, No. 27901878 from Investext Current Reports database (accessed
Feb. 3, 2017).
Brandon Oglenski, Eric Morgan, and Van Kegel, “FedEx Corp–Some Holiday Cheer,” Barclay’s, No. 27771203 from
Investext Current Reports database (accessed Feb. 3, 2017).
Kelly Dougherty, Matt Frankel, and Cleo Zagrean, “FedEx–Still the One,” Macquarie Group, No. 27761106 from Investext
Current Reports database (accessed Feb. 3, 2017).
Source: Analyst reports from specified sources.
Marketing: In 2015, brand consultancy Interbrand’s annual ranking of top global
brands ranked UPS at number 29 and FedEx at 86, representing little change from
UPS’s rank of 27 and FedEx’s distant 92 for 2014. This favoring of UPS reflected the
payoff of its campaigns of the early 2000s promoting its full-service offerings.
Employee satisfaction: Fortune magazine’s annual ranking of the world’s most-admired
companies was based on nine factors related to financial performance and
corporate reputation, with four of those factors specifically relating to HR attributes
(quality of management, ability to attract and retain talented people, innovation, and
product and service quality). In 2015, Fortune awarded FedEx the number 12
spot overall, with UPS coming in at number 24. FedEx’s apparent excellence with
regard to talent management was also reflected by the Great Place to Work Institute,
which named FedEx Express one of the top global companies to work for, for the fourth
year in a row. The more heavily unionized UPS, where strikes were not uncommon, seemed
to lag behind its rival in its commitment to its employees.
Holiday performance: High-volume holiday-delivery performance was always seen
as a strong test of a shipping company’s effectiveness, and FedEx and UPS
traditionally adopted different strategies approaching the peak season. Despite the
holidays of 2013 and 2014 favoring FedEx’s automation at its hubs, its independentcontractor
model (paying ground drivers by package rather than by hour) and practice
of turning down peak volumes based on quantities customers shipped during nonpeak
months, by 2015, the tide had turned. That year, UPS finally managed to prove its peak
execution capabilities; its strategy of increasing capacity to handle higher volumes
allowed it to achieve an on-time-delivery rate nearing 98% the week before
Christmas. For the same period, FedEx struggled to handle a late surge of e-commerce
shipments that were delivered after the holiday (though the company wouldn’t provide specifics).
Customer satisfaction: The American Customer Satisfaction Index (ACSI), the only
national, cross-industry measure of consumers’ perceptions of companies, ranked
shipping companies each year based on nearly 10,000 customers’ responses
concerning ease of tracking, package condition on arrival, helpfulness of in-store staff,
and other factors related to recent delivery experiences. Until 2009, FedEx was
ranked number one, but the two shippers leveled out in recent years; by 2014 and
2015, UPS and FedEx were neck and neck, with identical ACSI scores of
82, well above USPS’s scores of 73 and 75 for the same years.19

Outlook for FedEx and UPS

Observers of the air-express package-delivery industry pondered the recent
performance of the two leading firms and their prospects. What had been the impact of
the intense competition between the two firms? Which firm was doing better? The
companies faced a watershed moment with the growth of e-commerce and FedEx’s
aggressive push into Europe. Might their past performance contain clues about the
prospects for future competition?
Larry Puglia and the T. Rowe Price Blue Chip
Growth Fund
By late 2016, Larry J. Puglia had been managing the $33 billion T. Rowe Price Blue
Chip Growth Fund (Blue Chip Growth Fund) for more than 23 years. One of the fund’s
original managers, Puglia had been the sole manager of the open-ended mutual fund
since 1997 and had generated superior returns on average for his investors over the life
of the fund.
Since inception in mid-1993 through September 30, 2016, the fund had returned an
average annual total return of 10.12%, outperforming the 9.12% return of the fund’s
benchmark, the Standard & Poor’s 500 Index (S&P 500). For most fund managers,
beating the S&P 500 in any single year was an accomplishment, yet Puglia had served
his investors well by performing better than competitor funds both in bull markets, such
as that of the late 1990s, and in bear markets, such as that of the first decade of
the 2000s. Exhibit 3.1 presents a summary of the Blue Chip Growth Fund. Exhibits 3.2
and 3.3 show the fund’s performance and annual return versus its benchmark and other
funds in the large-cap growth category.
EXHIBIT 3.1 | Morningstar, Inc., Report on T. Rowe Price Blue Chip Growth Fund: Summary1
Source: © 2016 Morningstar, Inc. All rights reserved. Reproduced with permission. Morningstar, T. Rowe Price Blue Chip
Growth Fund TRBCX, release date June 7, 2016: 1
© 2016 Morningstar, Inc. All rights reserved. The information contained herein: (1) is proprietary to Morningstar and/or
its content providers; (2) may not be copied or distributed; (3) does not constitute investment advice offered by
Morningstar; and (4) is not warranted to be accurate, complete, or timely. Neither Morningstar nor its content providers
are responsible for any damages or losses arising from any use of this information. Past performance is no guarantee
of future results. Use of information from Morningstar does not necessarily constitute agreement by Morningstar, Inc., of
any investment philosophy or strategy presented in this publication.
EXHIBIT 3.2 | Morningstar, Inc., Summary of T. Rowe Price Blue Chip Growth Fund: Performance1
Source: © 2016 Morningstar, Inc. All rights reserved. Reproduced with permission. Morningstar T. Rowe Price Blue Chip
Growth Fund TRBCX, release date June 7, 2016: 11.
© 2016 Morningstar, Inc. All rights reserved. The information contained herein: (1) is proprietary to Morningstar and/or
its content providers; (2) may not be copied or distributed; (3) does not constitute investment advice offered by
Morningstar; and (4) is not warranted to be accurate, complete, or timely. Neither Morningstar nor its content providers
are responsible for any damages or losses arising from any use of this information. Past performance is no guarantee
of future results. Use of information from Morningstar does not necessarily constitute agreement by Morningstar, Inc., of
any investment philosophy or strategy presented in this publication.

While Puglia, working out of T. Rowe Price’s Baltimore, Maryland, headquarters,
rarely had the best overall performance in any given year, and other managers had
beaten his results over short-term periods, his overall long-term performance relative to
the index was truly impressive. He ranked 20th out of 558 U.S. stock mutual funds with
a single portfolio manager, and Morningstar had awarded the Blue Chip Growth Fund
its coveted five-star rating for the fund’s five-year performance, placing it in the top
10% of 1,285 mutual funds investing in large-capitalization growth stocks. Puglia had
also been nominated by Morningstar as one of five finalists for Domestic Fund Manager
of the Year in 2013. The fund had been recognized as an IBD Best Mutual Funds 2016
Awards winner by Investor’s Business Daily (IBD). In addition, Money Magazine
consistently named the fund to its annual selection of best funds, and Kiplinger’s
Personal Finance magazine included the fund on its list of 25 favorite funds.
EXHIBIT 3.3 | Morningstar Performance Comparison of T. Rowe Price Blue Chip Growth Fund, the
Large-Cap Growth Category, and the Broad Market Index (Average Total Returns %–Sept. 30, 2016)
Data source: Morningstar, T. Rowe Price Blue Chip Growth Fund TRBCX,
at (accessed
Nov. 21, 2016).
Note: Average total return includes changes in principal value, reinvested dividends, and capital gain distributions. For
periods of one year or longer, the returns are annualized. For periods less than one year, the return figures are not
annualized and represent total return for the period.
Puglia’s results seemed to contradict conventional academic theories, which
suggested that, in markets characterized by high competition, easy entry, and
informational efficiency, it would be extremely difficult to beat the market on a
sustained basis. Observers wondered what might explain such consistent
outperformance by a fund manager and how it could be sustained.
The Mutual-Fund Market
The global mutual-fund market represented $37.2 trillion in worldwide assets at the end
of 2015. Investment companies in the United States accounted for almost half the global
market, with $18.1 trillion in assets; U.S. investment company assets first topped $1
trillion in 1990, growing to $5.8 trillion in 1998 and $18.1 trillion in 2015. Ninety-three
million individuals and 44% of U.S. households owned mutual funds in 2015. In 2015,
individual investors owned about 86% of the assets held by U.S. investment companies.
Mutual funds provided several benefits for individual retail investors. First, they
gave investors the ability to diversify their portfolios—that is, invest in many different
securities simultaneously, thereby reducing the risks associated with owning any single
stock. By purchasing shares in a mutual fund, investors without significant amounts of
capital could efficiently diversify their portfolios, investing as if they had the sizable
amount of capital usually necessary to achieve such efficiency. Mutual funds also
offered scale economies in trading and transaction costs, economies unavailable to the
typical individual investor. Second, mutual funds provided retail investors with
professional management and expertise devoted to analysis of securities, which in
theory could lead to higher-than-average returns. A third view was that the
mutual-fund industry provided, according to one observer, “an insulating layer
between the individual investor and the painful vicissitudes of the marketplace”:
This service, after all, allows individuals to go about their daily lives without
spending too much time on the aggravating subject of what to buy and sell and
when, and it spares them the even greater aggravation of kicking themselves for
making the wrong decision. . . . Thus, the money management industry is really
selling “more peace of mind” and “less worry,” though it rarely bothers to say so.
Between 1970 and 2015, the number of mutual funds offered in the United States
grew from 361 to 9,520. This total included many different types of funds; each pursued
a specific investment focus and could be classified in one of several categories, such as
aggressive-growth, growth, growth-and-income, international, option, balanced, or a
variety of bond or fixed-income funds. Funds could be further segmented by company
size based on the market capitalization (market cap), calculated by multiplying the
number of shares outstanding by share price. Investors could, for example, opt to invest
in large-cap, mid-cap, or small-cap growth funds. Funds whose principal focus of
investing was common stocks or equities represented the largest segment of the industry.
The growth in the number and types of mutual funds reflected a major shift in
retirement plans for U.S. workers. Prior to the 1980s, most workers were covered by
traditional defined-benefit (DB) pension plans, which were funded by employers and
managed by institutional money managers hired by the employers. Changes to the U.S.
tax code in the 1970s set the stage for a major shift, which would have broad
implications for the mutual-fund industry. First, the Employee Retirement Income
Security Act of 1974 established the self-directed Individual Retirement Account (IRA)
through which workers could save and invest individually for their retirement on a tax-deferred
basis. Second, large U.S. companies began to replace their DB pension plans
with defined-contribution (DC) plans such as 401(k) and 403(b) plans. The new plans,
named for the relevant sections of the U.S. tax code, shifted the burden and
responsibility of saving and managing retirement assets from corporate employers to
individual employees. Exhibit 3.4 shows the growth in retirement-plan assets over the
period from 1975 to 2015. By 2015, $7.1 trillion of IRA and DC plan assets were
invested through mutual funds.
The shift into DC plans created a broader customer base for the mutual-fund
industry, as well as a deeper penetration of the total market for financial
services. With DC plans, each worker had an individual investment account that could
hold multiple mutual funds, whereas a company’s DB plan held the assets of tens of
thousands of workers in a single investment account. Funds owned in an employee’s
name after a vesting period remained in the employee’s name even if they switched
employers. By 2015, 44.1% or 54.9 million U.S. households owned mutual funds, up
from 5.7% or 4.6 million U.S. households in 1980.
The breadth of mutual-fund alternatives tended to encourage fund switching,
especially from one type of fund to another within a family of funds. The switching
behavior reflected the increased participation of growing numbers of relatively
inexperienced and unskilled retail investors; their interest in market-timing-oriented
investment strategies; and the greater range of mutual funds from which to choose, all of
which increased volatility in the market. In short, as the mutual-fund industry grew and
segmented, mutual-fund money became “hotter” (tended to turn over faster).

EXHIBIT 3.4 | Retirement Plan Assets by Categories 1975 and 2015 (billions of dollars, end-of-period)
CAGR = compound annual growth rate.
Data source: Created by author based on data from Investment Company Institute, (accessed Dec. 16, 2016).
*Data are estimated.
As a result of the growth in the industry, the institutional investors who managed
mutual funds, pension funds, and hedge funds on behalf of individual investors grew in
power and influence. By 2015, mutual funds owned 31% of the outstanding stock of U.S.
companies. The power and influence of institutional asset managers was apparent in
their trading muscle—their ability, coupled with their willingness, to move huge sums of
money in and out of securities. The rising role of institutional investors investing on
behalf of millions of individual account holders resulted in increases in trading volume,
average-trade size, and block trading (a single trade of more than 10,000 shares).
Mutual-Fund Basics
When individuals invested in an open-ended mutual fund, their ownership was
proportional to the number of shares purchased. The value of each share was called the
fund’s net asset value (NAV). The NAV, computed after market close each day, was the
fund’s total assets less liabilities divided by the number of mutual-fund shares
outstanding, or:

NAV = (Total assets − Liabilities) ÷ Number of fund shares outstanding
The investment performance of a mutual fund was measured as the increase or decrease
in NAV plus the fund’s income distributions during the period (i.e., dividends and
capital gains), expressed as a percentage of the fund’s NAV at the beginning of the
investment period, or:

Total return = (Ending NAV − Beginning NAV + Distributions per share) ÷ Beginning NAV
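As a sketch, the two computations just described can be expressed in a few lines of Python; the dollar figures below are invented for illustration and are not from the case:

```python
def nav(total_assets, liabilities, shares_outstanding):
    """Net asset value per share: assets less liabilities, per fund share."""
    return (total_assets - liabilities) / shares_outstanding

def period_return(nav_begin, nav_end, distributions_per_share):
    """Total return: NAV change plus income distributions, over starting NAV."""
    return (nav_end - nav_begin + distributions_per_share) / nav_begin

# A hypothetical fund with $510M in assets, $10M in liabilities, and 50M shares.
print(nav(510e6, 10e6, 50e6))             # NAV of $10.00 per share
# If NAV ends the period at $10.60 after a $0.40 distribution, the return is 10%.
print(period_return(10.00, 10.60, 0.40))  # approximately 0.10
```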
Investors in mutual funds generally paid two types of fees for their investments: one-time
transaction fees and ongoing management fees. A fund’s transaction fee, or sales
load, covered sales commissions paid to brokers for selling the fund. The sales load
could be a front-end or back-end load. A front-end sales load shaved off as much as 6%
of an individual’s initial investment. A back-end load, in contrast, enabled investors to
invest all of their money and defer paying the sales load until they redeemed the shares.
Some companies eschewed the use of brokers and pursued a no-load strategy, selling
funds directly to investors.
In addition to any sales load imposed, investors paid fees for the ongoing
management and operation of the mutual fund. Expenses included management fees for
managing the investments, administrative costs, advertising and promotion expenses,
and distribution and service fees. Expenses were calculated as a percentage of the
fund’s total assets (the expense ratio), and were charged to all shareholders
proportionally. Expense ratios ranged from as low as 0.2% to as high as 2.0%. As seen
in Exhibit 3.5, expense ratios were lower for index funds (funds designed to replicate
the performance of a specific market index) than they were for actively managed funds
which sought to outperform a market index. Because the expense ratio was regularly
deducted from the portfolio, it reduced the fund’s NAV, thereby lowering the fund’s
gross returns. Depending on the magnitude of the fund’s expense ratio, the net effect of
loads and expense ratios on shareholder returns could be substantial.18
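To illustrate how substantial that net effect can be, the sketch below compounds a hypothetical $10,000 investment at an assumed 8% gross annual return for 20 years under a 0.2% and a 2.0% expense ratio. All figures are invented, and the simple year-end expense deduction is an approximation of how funds actually accrue expenses daily:

```python
def ending_value(initial, gross_return, expense_ratio, years):
    """Compound an investment, deducting the expense ratio from NAV each year."""
    value = initial
    for _ in range(years):
        value *= (1 + gross_return) * (1 - expense_ratio)
    return value

low_cost = ending_value(10_000, 0.08, 0.002, 20)   # 0.2% expense ratio
high_cost = ending_value(10_000, 0.08, 0.020, 20)  # 2.0% expense ratio
# The higher-cost fund ends more than $10,000 behind on the same gross return.
print(round(low_cost), round(high_cost), round(low_cost - high_cost))
```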
EXHIBIT 3.5 | Expense Ratios of Actively Managed and Index Funds in Basis Points 1996–2015
Note: Expense ratios are measured as asset-weighted averages. Data exclude mutual funds available as investment
choices in variable annuities and mutual funds that invest primarily in other mutual funds.
Data source: Created by author based on data from Investment Company Institute and Lipper, (accessed Dec. 16, 2016).

Another drag on shareholders’ returns was the tendency to keep some portion of fund
assets in cash, either to invest in attractive investment opportunities or to meet
shareholder redemptions. As economist and industry observer Henry Kaufman warned
in 1994, a sudden economy-wide shock from interest rates or commodity prices could
spook investors into panic-style redemptions from mutual funds, which could force the
funds themselves to liquidate investments, sending security prices into a tailspin.
Unlike the banking industry, which enjoyed the liquidity afforded by the U.S. Federal
Reserve to respond to the effects of panic by depositors, the mutual-fund industry
enjoyed no such government-backed reserve, and thus fund managers often carried a
certain amount of cash to meet redemptions.

A final drag on shareholders’ returns was taxes. Mutual funds collected taxable
dividends for the shares they held and generated taxable capital gains whenever they
sold securities at a profit. Dividends received and capital gains and losses were
reflected in the daily NAV. The funds could avoid paying corporate taxes on dividends
earned and capital gains realized during the year if they distributed the investment
income to shareholders prior to year-end. The distribution shifted the tax liability from
the investment company to individual shareholders.
Mutual funds generally distributed the year’s realized capital gains and dividend
income to shareholders in December. Dividends and capital gains had, of course, been
collected and realized throughout the year, and were reflected in the daily NAV as they
occurred. On the day of the distribution, the NAV was reduced by the amount of the
distribution. As an example, imagine a mutual fund with an NAV of $10 per share that
had realized capital gains of $1.12 per share during the year. In December, the mutual
fund would distribute $1.12 per share to its investors and the new NAV would be $8.88.
Thus an investor with 100 shares who chose to receive the distribution in cash would
have $112 in cash plus 100 shares worth $888 for a total investment value of $1,000.
An investor who held 100 shares with an NAV of $10 prior to a distribution that he
chose to reinvest would hold the original 100 shares with a new NAV of $8.88 plus
12.612 new shares ($1.12 × 100 shares ÷ $8.88 per share), for a total of 112.612 shares
worth $1,000 ($8.88 × 112.612 shares). When funds were held in taxable rather than tax-deferred
accounts, capital gains distributions triggered both unexpected and unwanted
tax liabilities for investors and reduced the net returns to investors.
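The distribution arithmetic above can be checked directly; the figures below are the case’s own ($10 NAV, a $1.12 per-share distribution, 100 shares):

```python
nav_before = 10.00
distribution = 1.12
shares = 100

nav_after = nav_before - distribution                    # $8.88 after distribution
# Cash option: receive the distribution in cash, keep shares at the reduced NAV.
cash_value = distribution * shares + nav_after * shares  # $112 + $888 = $1,000
# Reinvestment option: the $112 buys new shares at the post-distribution NAV.
new_shares = distribution * shares / nav_after           # about 12.61 new shares
total_shares = shares + new_shares                       # about 112.61 shares
reinvested_value = total_shares * nav_after              # still $1,000 either way

print(round(cash_value, 2), round(new_shares, 3), round(reinvested_value, 2))
```

Either way the investor’s total value is unchanged by the distribution itself; only the tax treatment differs.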
Most mutual-fund managers relied on some variation of the two classic schools of
securities analysis for choosing investments:
Technical analysis: This approach involved the identification of profitable
investment opportunities based on trends in stock prices, volume, market
sentiment, and the like.
Fundamental analysis: This approach relied on insights afforded by an analysis
of the economic fundamentals of a company and its industry: supply and demand,
costs, growth prospects, and the like.
While variations on those approaches produced above-average returns in certain years,
there was no guarantee that they would produce such returns consistently over time.
Performance of the Mutual-Fund Industry
The two most frequently used measures of mutual-fund performance were (1) the annual
growth rate of NAV assuming reinvestment of all dividend and capital-gain distributions
(the total return on investment) and (2) the absolute dollar value today of an investment
made at some time in the past. Those measures were then compared with the
performance of a benchmark portfolio such as the Russell 2000 Index or the S&P 500
Composite Index. Exhibit 3.6 provides performance data on a range of mutual-fund
categories. The Russell, S&P 500, Dow Jones, and Value Line indices offered
benchmarks for the investment performance of hypothetical stock portfolios.
Academicians criticized those performance measures for failing to adjust for the
relative risk of the mutual fund. Over long periods, as Exhibit 3.7 shows, different types
of securities yielded different levels of total return, and each type of security was
associated with differing degrees of risk (measured as the standard deviation of
returns). Thus the relationship between risk and return was reliable both on average and
over time. For instance, it would be expected that a conservatively managed mutual fund
would yield a lower return—precisely because it took fewer risks.

EXHIBIT 3.6 | Morningstar Performance Comparison of U.S. Mutual-Fund Categories
Note: Data through November 11, 2016. Returns are simple averages. For periods of one year or longer, the returns are
annualized. For periods of less than one year, the return figures are not annualized and represent total return for the period.
Data source: Morningstar.
After adjusting for the risk of the fund, academic research indicated that mutual
funds had the ability to perform up to the market on a gross-return basis, but when all
expenses were factored in, the funds underperformed the market benchmarks. In a paper
first published in 1968, Michael Jensen reported that gross risk-adjusted returns were
−0.4% and that net risk-adjusted returns (i.e., net of expenses) were −1.1%. Later
studies found that, in a sample of 70 mutual funds, net risk-adjusted returns were
essentially zero, and some analysts attributed this general result to the average 1.3%
expense ratio of mutual funds and their tendency to hold cash.
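To make the idea of a risk-adjusted return concrete, here is a minimal sketch of a Jensen’s-alpha calculation of the kind those studies perform: regress the fund’s excess returns on the market’s, take the slope as beta, and take the intercept as alpha. The return series below are made up for illustration; real studies use many periods of actual fund and benchmark returns:

```python
def jensens_alpha(fund_returns, market_returns, risk_free):
    """Alpha: the fund's average excess return not explained by market exposure."""
    fund_ex = [r - risk_free for r in fund_returns]
    mkt_ex = [r - risk_free for r in market_returns]
    mean_f = sum(fund_ex) / len(fund_ex)
    mean_m = sum(mkt_ex) / len(mkt_ex)
    # Beta = covariance(fund, market) / variance(market); alpha is the intercept.
    cov = sum((f - mean_f) * (m - mean_m) for f, m in zip(fund_ex, mkt_ex)) / len(fund_ex)
    var = sum((m - mean_m) ** 2 for m in mkt_ex) / len(mkt_ex)
    beta = cov / var
    alpha = mean_f - beta * mean_m
    return alpha, beta

# Hypothetical annual returns: the fund roughly tracks the market with extra volatility.
fund = [0.12, -0.05, 0.22, 0.08, 0.15]
market = [0.10, -0.03, 0.18, 0.07, 0.12]
alpha, beta = jensens_alpha(fund, market, risk_free=0.02)
print(round(alpha, 4), round(beta, 2))
```

In this made-up sample the fund has a beta above 1 and a slightly negative alpha: its raw returns beat the market, but not by enough to compensate for the extra risk taken.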
In his best-selling book, A Random Walk Down Wall Street, a classic investment
tome first published in 1973, Burton Malkiel, an academic researcher, concluded that a
passive buy-and-hold strategy (of a large, diversified portfolio) would do as well for
the investor as the average mutual fund. Malkiel wrote:
Even a dart-throwing chimpanzee can select a portfolio that performs as well as
one carefully selected by the experts. This, in essence, is the practical application
of the theory of efficient markets. . . . The theory holds that the market appears to
adjust so quickly to information about individual stocks and the economy as a
whole, that no technique of selecting a portfolio—neither technical nor
fundamental analysis—can consistently outperform a strategy of simply buying and
holding a diversified group of securities such as those that make up the popular
market averages. . . . [o]ne has to be impressed with the substantial volume of
evidence suggesting that stock prices display a remarkable degree of efficiency. . . .
If some degree of mispricing exists, it does not persist for long. “True value will
always out” in the stock market.

EXHIBIT 3.7 | Mean and Standard Deviation of Annual Returns by Major U.S. Asset Category
Data source: Stocks, Bonds, Bills, and Inflation 2014 Yearbook (Chicago: Ibbotson Associates, 2015): 34.
Many scholars accepted and espoused Malkiel’s view that the stock market followed a
“random walk,” where the price movements of the future were uncorrelated with the
price movements of the past or present. This view denied the possibility that there could
be momentum in the movements of common stock prices. According to this view,
technical analysis was the modern-day equivalent of alchemy. Academics also
dismissed the value and effectiveness of fundamental analysis. They argued that capital
markets’ information was efficient–that the data, information, and analytical conclusions
available to any one market participant were bound to be reflected quickly in share prices.

The belief that capital markets incorporated all the relevant information into existing
securities’ prices was known as the efficient market hypothesis (EMH), and was
widely, though not universally, accepted by financial economists. If EMH were correct
and all current prices reflected the true value of the underlying securities, then arguably
it would be impossible to beat the market with superior skill or intellect.
Economists defined three levels of market efficiency, which were distinguished by
the degree of information believed to be reflected in current securities’ prices. The
weak form of efficiency maintained that all past prices for a stock were incorporated
into today’s price; prices today simply followed a random walk with no correlation
with past patterns. Semistrong efficiency held that today’s prices reflected not only all
past prices, but also all publicly available information. Finally, the strong form of
market efficiency held that today’s stock price reflected all the information that could be
acquired through a close analysis of the company and the economy. “In such a market,”
as one economist said, “we would observe lucky and unlucky investors, but we
wouldn’t find any superior investment managers who can consistently beat the market.”
Proponents of EMH were both skeptical and highly critical of the services provided
by active mutual-fund managers. Paul Samuelson, the Nobel Prize–winning economist, wrote:
[E]xisting stock prices already have discounted in them an allowance for their
future prospects. Hence . . . one stock [is] about as good or bad a buy as another.
To [the] passive investor, chance alone would be as good a method of selection as
anything else.
Tests supported Samuelson’s view. For example, in June 1967, Forbes magazine
established an equally weighted portfolio of 28 stocks selected by throwing darts at a
dartboard. By 1984, when the magazine retired the feature article, the initial $28,000
portfolio with $1,000 invested in each stock was worth $131,698, a 9.5% compound
rate of return. This beat the broad market averages and almost all mutual funds. Forbes
concluded, “It would seem that a combination of luck and sloth beats brains.”
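The quoted figure is easy to verify: growing $28,000 into $131,698 over the roughly 17 years from mid-1967 to 1984 implies the 9.5% compound rate Forbes reported (the 17-year horizon is an approximation of the actual dates):

```python
def compound_rate(begin_value, end_value, years):
    """Annualized compound growth rate implied by beginning and ending values."""
    return (end_value / begin_value) ** (1 / years) - 1

rate = compound_rate(28_000, 131_698, 17)
print(f"{rate:.1%}")  # about 9.5% per year
```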
Despite the teachings of EMH and the results of such tests, some money managers—
such as Larry Puglia—had significantly outperformed the market over long periods. In
reply, Malkiel suggested that beating the market was much like participating in a coin-tossing
contest where those who consistently flip heads are the winners. In a coin-tossing
game with 1,000 contestants, half will be eliminated on the first flip. On the
second flip, half of those surviving contestants are eliminated. And so on, until, on the
seventh flip, only eight contestants remain. To the naïve observer, the ability to flip
heads consistently looks like extraordinary skill. By analogy, Malkiel suggested that the
success of a few superstar portfolio managers could be explained as luck.
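Malkiel’s analogy can be made concrete with a quick calculation and a simulation (the simulation setup below is purely illustrative):

```python
import random

# Expected survivors of seven successive 50/50 eliminations among 1,000 contestants.
contestants = 1000
expected = contestants * 0.5 ** 7  # 7.8125, i.e., about eight "skilled" flippers

# A seeded simulation: count contestants who flip heads seven times in a row.
random.seed(42)
survivors = sum(
    all(random.random() < 0.5 for _ in range(7))  # seven straight heads
    for _ in range(contestants)
)
print(expected, survivors)
```

Chance alone reliably produces a handful of apparent “superstars,” which is exactly Malkiel’s point.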
Not surprisingly, the community of professional asset managers viewed those
scholarly theories with disdain. Dissension also grew in the ranks of academicians as
research exposed anomalies inconsistent with the EMH. For example, evidence
suggested that stocks with low price-to-earnings (P/E) multiples tended to outperform
those with high P/E multiples. Other evidence indicated positive serial correlation (i.e.,
momentum) in stock returns from week to week or from month to month. The evidence
of these anomalies was inconsistent with a random walk of prices and returns.
The most vocal academic criticism came from the burgeoning field of “behavioral
finance,” which suggested that greed, fear, and panic could be much more significant
factors in determining stock prices than mainstream theories would suggest. Critics of
EMH argued that events such as the stock-market crash of October 1987 were
inconsistent with the view of markets as fundamentally rational and efficient. Lawrence
Summers, economist and past president of Harvard University, argued that the 1987
crash was a “clear gap with the theory. If anyone did seriously believe that price
movements are determined by changes in information about economic fundamentals,
they’ve got to be disabused of that notion by [the] 500-point drop” which erased more
than 22% of market value in a single day. Following the 1987 crash, Yale University
economist Robert Shiller concluded: “The efficient market hypothesis is the most
remarkable error in the history of economic theory. This is just another nail in its coffin.”

Market events such as the Internet bubble of the late 1990s and the global financial
crisis of 2007–2009 further added to the belief that market participants were not always
rational and the EMH was flawed. Yet, despite the mounting evidence of its
shortcomings, the EMH remained the dominant model in the academic community.
The Rise of Passive Investing
More than 20 years after graduating from Princeton University in 1951, where he wrote
his senior thesis on “The Economic Role of the Investment Company,” John C. Bogle
founded the Vanguard Group and established a fund whose investment goal was to match
—not beat—the performance of a market index. Bogle’s First Index Investment Trust
launched on December 31, 1975, and was quickly dismissed as folly by many.
Investors, critics proclaimed, would not be satisfied with receiving average returns.
Over time, Bogle’s fund, which tracked the S&P 500, and was eventually renamed
the Vanguard 500 Index Fund, proved critics wrong. Without expensive portfolio
managers or research analysts to compensate, the fund charged a low expense ratio of
0.16%. Without portfolio managers trading in and out of securities, the fund’s turnover
rate was 3%, meaning that year-end capital-gains distributions were negligible, making
the fund extremely tax efficient for taxable investors.
At least some investors decided that the benefits of being average outweighed the
costs of trying to be above average. From approximately $11 million in assets in 1975,
the fund grew to $262.80 billion in assets on September 30, 2016. Vanguard also
grew. By December 2015, the company employed 14,000 individuals and offered more
than 300 U.S. and non-U.S. funds, serving more than 20 million investors from
approximately 170 countries.
Vanguard’s success was noticed. In particular, other investment companies
developed and offered index funds, and by 2015, $2.2 trillion was invested in index-based
mutual funds. Exhibit 3.8 shows the growing percentage of assets invested in
equity index funds from 2000 to 2015, and Exhibit 3.9 shows how outflows from
actively managed funds matched the inflows to passively managed investment funds
from 2009 to 2015.44
EXHIBIT 3.8 | Percentage of Equity Mutual Funds’ Total Net Assets Invested in Index Funds 2000–2015
Data source: Created by author based on data from Investment Company
Institute, (accessed Dec. 16, 2016).
Note: Equity mutual fund flows include net new cash flow and reinvested dividends. Data exclude mutual funds
that invest primarily in other mutual funds.
Data source: Created by author based on data from Investment Company
Institute, (accessed Dec. 16, 2016).
Larry Puglia and the T. Rowe Price Blue Chip Growth Fund
At a time when many investors were eschewing actively managed funds such as Puglia’s
in favor of passive investments designed to track stock-market indices, Puglia’s
investment performance stood out. Morningstar, the well-known statistical service for
the investment community, gave the Blue Chip Growth Fund its second-highest rating,
four stars for overall performance, placing it in the top 32.5% of 1,482 funds in its
category. Morningstar rated funds with at least a three-year history based on risk-adjusted
return (including the effects of transaction fees such as sales loads and
redemption fees) with emphasis on downward variations and consistent performance.
According to Morningstar, a high rating could reflect above-average returns, below-average
risk, or both.
Puglia graduated summa cum laude from the University of Notre Dame and went
on to earn an MBA from the Darden School of Business, where he graduated with
highest honors. A Certified Public Accountant (CPA), Puglia also held the Chartered
Financial Analyst (CFA) designation. Puglia learned his first lessons about investing
from his father, a traditional buy-and-hold investor. “He would buy good companies and
literally hold them for 15 or 20 years.”
Puglia, 56, joined T. Rowe Price in 1990 as an analyst following the financial
services and pharmaceutical industries. He worked closely with portfolio manager Tom
Broadus (co-manager of the Blue Chip Growth Fund from mid-1993 until leaving the
fund on May 1, 1997), who provided additional lessons about investing. Broadus
warned the young analyst that his investment style would sometimes be out of sync with
the market. Part of the portfolio manager’s job, he told Puglia, was to recognize that and
lose as little as possible.

EXHIBIT 3.9 | Monthly Cumulative Flows to and Net-Share Issuance of U.S. Equity Mutual Funds and
Index Exchange-Traded Funds (ETFs) January 2007–December 2015 (in billions of dollars)
When the Blue Chip Growth Fund launched in 1993, its managers engaged in
considerable debate over “what constituted a ‘blue-chip growth company.’ Some people
felt we should own the old Dow Jones smokestack companies; others said we needed to
own the Ciscos and the Microsofts. After giving it a lot of thought, we decided that it
was durable, sustainable earnings-per-share growth that confers blue-chip status on a
company. That’s what allows it to garner an above-average price-earnings ratio, and
that’s what allows you to really hold such an investment for the long term and allows
your wealth to compound. So that’s basically what we’re trying to do—we’re trying to
find companies with durable, sustainable earnings-per-share growth, and we want to
hold those companies for the long term.”
The fund’s objective was long-term capital growth, with income only a secondary
consideration. Consequently, Puglia invested in well-established large and medium-sized
companies that he believed had potential for above-average earnings growth.
More specifically, Puglia looked for companies with leading market positions, seasoned
management that allocated capital effectively, and strong returns on invested capital.
To be included in his portfolio, a company needed several things:
1. Growing market share and market size. In Puglia’s view, a leading market position
conferred both cost advantages and pricing advantages. A company with superior
market share generally made its products more cheaply, and also enjoyed more pricing
flexibility. As important as growing market share was growing market size. Superior
market share in a declining marketplace was not a good indicator, so Puglia also
evaluated the market for a company’s products and how large the total addressable
market could grow over time.
2. Competitive advantage(s): Puglia used Harvard Business School professor Michael
Porter’s competitive analysis to identify companies with sustainable competitive
advantage, what legendary value investor Warren Buffett referred to as “economic
castles protected by unbreachable moats.”51
3. Strong fundamentals, including above-average earnings growth, stable-to-improving
margins, strong free cash flow, and return on equity.
4. Seasoned management with a demonstrated track record: Puglia looked for evidence
of management’s ability to allocate capital to the highest-return businesses, pare
away low-returning businesses, and manage expenses aggressively. He compared a
company that generates superior returns and has strong free cash flow but lacks strong
management to a fast ship without a rudder: sooner or later, it will run aground.

Puglia was assisted in this process by a highly regarded global research team that
included more than 250 industry analysts, as well as portfolio managers responsible for
other funds. Together, members of the research team covered more than 2,300 public
companies around the globe, almost two-thirds of global markets by market
capitalization. The firm’s recruiting and internal mentoring programs allowed it to
attract and develop talented investment analysts, who formed a pool of well-trained and
experienced candidates for portfolio manager positions.

T. Rowe Price’s culture and structure encouraged and facilitated close and frequent
collaboration between managers and analysts and between equity and fixed-income
professionals. Its performance evaluation and compensation practices rewarded
collaboration and focused on long-term, rather than short-term, results. Management
regularly promoted the strength and contributions of the research team to clients,
directly or through its website, and shared research supporting its approach to active
management.

Puglia, like most of the firm’s portfolio managers, had initially served as an analyst,
and considered analyst recommendations and insights from the research team to be
instrumental to the stock-selection process. With assistance from the research team and
robust firm resources, he focused on identifying companies with durable free-cash-flow
growth.

Although most investment candidates were identified through analyst
recommendations, Puglia also employed other identification methods, including
screening databases for various characteristics, such as steady earnings growth and
return on equity over one, three, and five years. Puglia explained, “We’ll look under
every stone,” searching news reports, economic data, and even rivals’ portfolios for
investment ideas. “There are plenty of other managers out there with excellent track
records,” he said, “and we’re willing to learn from others where possible.”
Identifying a potential investment through screening was only one quantitative aspect
of the investment research and decision-making process. For each company of interest,
Puglia calculated the company’s “self-sustaining growth rate,” multiplying return on
equity by 1 minus the payout ratio (percentage of earnings paid out in dividends). A
company with a 25% return on equity paying out 10% of earnings in dividends, would,
for example, have a self-sustaining growth rate of 22.5%. Recognizing the limitations
of return on equity or other measures based upon GAAP or book accounting, Puglia and
the research team also used free cash flow extensively in quantitative analysis and stock
selection. If a company met Puglia’s quantitative criteria, he and the research team
would do further qualitative research, including meeting with corporate management
and corroborating their assertions and other data with customers, suppliers, and
competitors.
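Puglia’s self-sustaining growth-rate arithmetic can be restated in a few lines. The sketch below simply encodes the formula and the 25%/10% example from the case; the function name is ours, not T. Rowe Price’s:

```python
def self_sustaining_growth_rate(return_on_equity: float, payout_ratio: float) -> float:
    """Growth rate fundable from retained earnings: ROE x (1 - payout ratio)."""
    return return_on_equity * (1.0 - payout_ratio)

# The case's example: 25% return on equity, 10% of earnings paid out as dividends.
rate = self_sustaining_growth_rate(0.25, 0.10)
print(f"{rate:.1%}")  # 22.5%
```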
According to Morningstar, $10,000 invested in the Blue Chip Growth Fund at its
inception in mid-1993 would have grown to $94,021 in assets on September 30, 2016.
Puglia’s fund significantly outperformed the average growth for the large-cap-growth
category of $56,185 and growth from investing in the S&P 500, which returned $76,100.
As news of Puglia’s performance record spread, more and more investors moved their
money to the Blue Chip Growth Fund, such that over the life of the fund, more than $15
billion of new money was added to the fund’s assets under management. Even so, Puglia
remained modest; he knew his investing style would not always be in sync with the
markets and that the fund’s returns could vary quite a bit at times from the S&P 500.
During those times, Puglia would recall the advice of his former co-manager to
recognize the shift and lose as little as possible.
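The dollar figures Morningstar reports imply annualized returns that can be backed out with a standard compound-growth calculation. The sketch below reads "mid-1993 to September 30, 2016" as roughly 23.25 years; that horizon, and the function name, are our approximations, not Morningstar’s:

```python
def implied_annual_return(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate implied by beginning and ending values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

YEARS = 23.25  # assumption: mid-1993 inception to September 30, 2016
for label, ending in [("Blue Chip Growth Fund", 94021),
                      ("S&P 500", 76100),
                      ("Large-cap-growth category average", 56185)]:
    print(f"{label}: {implied_annual_return(10000, ending, YEARS):.1%}")
```

On these assumptions, the fund’s implied annualized return comes out about a percentage point higher than the S&P 500’s, which is what compounds $10,000 to so much larger an ending value.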
Judged from an historical perspective, Puglia’s investment success seemed exceptional.
His long-run, market-beating performance defied conventional academic theories.
Investors, scholars, and market observers wondered about the sources of such superior
performance and about its sustainability. As of the end of 2016, was it rational for an
equity investor to buy shares in the Blue Chip Growth Fund, or for that matter any
actively managed fund? Investors and other observers wondered whether and for how
long Puglia could continue to outperform the market. In particular, they wondered
whether he would be able to sustain his performance under the weight of having
$30 billion in assets to invest.
Page 63
Genzyme and Relational Investors: Science and
Business Collide?
For Marblehead Neck, Massachusetts, it was an unusually warm morning in April 2009,
so Henri Termeer decided to take a leisurely walk on the beach. Termeer had some
serious issues to consider and often found that the fresh sea air and solitude did
wonders for his thought process. For more than 20 years, Termeer had been the
chairman and CEO of Genzyme Corporation, based in Cambridge, Massachusetts.
Under his watch, Genzyme had grown from an entrepreneurial venture to one of the
country’s top-five biotechnology firms (Exhibit 4.1 shows Genzyme’s financial
statements).

EXHIBIT 4.1 | Income Statements
Data source: Genzyme Corporation, 10-K filings, 2007 and 2008.

There were bumps along the way accompanying Termeer’s achievements, and a
recent event was one of them. The week before, Termeer had sat in a presentation by
Ralph Whitworth, cofounder and principal of a large activist investment fund,
Relational Investors (RI). Whitworth’s company now had a 2.6% stake in Genzyme
(Exhibit 4.2 shows Genzyme’s top 10 shareholders). Whitworth had a history of
engagements with the boards of directors of numerous companies, and in several
instances, the CEO had been forced to resign. In January, when RI had announced its
initial 1% investment in Termeer’s company, the two men had met at the JP
Morgan Healthcare Conference, and the discussion had been amicable. Whitworth and
his team then traveled in April to Genzyme’s headquarters and talked about Genzyme’s
core business, value creation, and the lack of transparency in some of the company’s
businesses.

EXHIBIT 4.2 | Top 10 Shareholders, March 31, 2009
Data source: Forms 13F filed by investors.

Termeer was proud of his company’s accomplishments, shown by the number of
people with rare diseases who had been successfully treated with Genzyme’s products.
He was also pleased with the long-term growth in the price of Genzyme’s stock, which
had easily outperformed the market over the last several years. In fact, the company had
just posted record revenues of $4.6 billion for 2008. Although the 2007–08
financial crisis had affected the stock market overall, Genzyme, along with the
biotechnology industry, was faring better than most (see Exhibit 4.3 for charts on
Genzyme’s stock performance).

EXHIBIT 4.3 | Genzyme (GENZ) vs. S&P 500 (S&P) and NASDAQ Biotechnology Index (NBI),
Weekly Close—Base = 1/1/2003
Data source: Bloomberg.

But a bigger blow came about a month after Termeer’s first introduction to
Whitworth. An operational problem surfaced in the company’s plant in Allston,
Massachusetts, followed by an official warning letter from the U.S. Food and Drug
Administration (FDA) on February 27, 2009. The company responded to the FDA by
publicly disclosing its manufacturing issues. Genzyme began conducting a quality
assessment of its system, and Whitworth had expressed his confidence in the company’s
actions to address the issues. Recent news on the impending health care reform bill also
hit companies in the health care sector hard. Genzyme’s stock price, which had declined
by 21% over five trading days, had yet to recover.
On top of handling Whitworth’s demands, Termeer had to prepare for the
shareholders’ annual meeting scheduled for May 21. As Termeer mulled over the
sequence of past events, the name of Whitworth’s RI fund suggested to him that
relationship building was its modus operandi and that perhaps Whitworth genuinely
wanted to help Genzyme increase its performance. Up to this time, Termeer had not
considered RI to be a threat, but if there were other corporate activists or hedge funds
monitoring his company and looking to set its corporate policy, then maybe he should
take note that Genzyme now had an “activist” investor. What should he do?
Cheeses, beer, and wine had at least one thing in common: the application of biological
science in the form of microbial fermentation. The use of living organisms to stimulate
chemical reactions had been taking place for thousands of years. But since the mid-20th
century, when revolutionary research in genetics led to the description of the structure of
DNA, molecular biology had been transformed into a thriving industry. Products among
the 1,200 plus biotechnology companies in 2008 included innovations in the treatment of
multiple sclerosis, rheumatoid arthritis, cancer, autoimmune disorders, and diabetes.
Biotechnology drugs were normally far more complex to produce than the chemical-based
blockbuster drugs developed by Big Pharma companies. The U.S. Supreme Court
recognized patent rights on genetically altered life forms in the early 1980s, and the U.S.
Congress passed the Orphan Drug Act in 1983. Intended to attract investment for
research and development (R&D) in the treatment of rare diseases (those affecting fewer
than 200,000 people), the act gave companies that brought successful drugs to market a
seven-year monopoly on sales.
This exclusive sales incentive was not a free lunch, however; its purpose was to
offset the numerous uncertainties in biotechnology development. Many of these
uncertainties pertained to the U.S. drug approval process itself, one of the most rigorous
in the world. In addition to the extremely high cost of R&D, a lengthy process was
required to get new products to market. After a particular disease was targeted, its
treatment went through a series of chemical tests to determine therapeutic
effectiveness and to uncover potential side effects. Preclinical studies were then
done by testing animals over a period of years. Only then could the company submit an
investigational new drug application to the FDA to begin clinical testing on humans.
Clinical trials on humans consisted of three phases: (1) testing the drug’s safety by
giving small doses to relatively healthy people; (2) administering the drug to patients
suffering from the targeted disease or condition; and (3) employing random double-blind
tests to eliminate bias in the process. Typically, one group of patients was given
the potential drug, and the other group was given an inert substance or placebo. Due to
the rigorous nature of the clinical trials, only about 5% to 10% of drugs that reached the
testing stage ultimately received approval for marketing. Not surprisingly, the
biotechnology industry’s R&D spending as a percentage of revenues was among the
highest of any U.S. industry group.
The level of R&D expenditures made it crucial to get new drugs to market quickly.
The FDA’s Center for Drug Evaluation and Research was responsible for reviewing
therapeutic biological products and chemical-based drugs. Unfortunately, inadequate
funding and staffing of the FDA resulted in missed deadlines and a low level of final
approvals. In 2008, the regulator approved 24 new drugs, out of which only 6 were
biologic. By 2009, it was estimated that, on average, new products took more than
eight years to get through the clinical development and regulatory process.
The industry weathered the financial storms in 2007–08 relatively well, as demand
for biotechnology products depended more on the population’s health than the economy
(see Exhibit 4.4 for financial metrics for Genzyme and its major competitors). This was
particularly true for large-cap companies with strong cash flows that did not need to
access capital markets. Of more importance to some industry observers was that strong
biotechnology companies might come under increased merger and acquisition (M&A)
pressure from Big Pharma because these companies faced patent expirations on key
blockbuster drugs in the coming years.
EXHIBIT 4.4 | Biotechnology Financial Metrics as of December 2008
Notes: (a) Share buybacks for Genzyme and Cephalon represent purchases to satisfy option exercises.
Data sources: Company 10-K filings, 2008; Silver, “Biotechnology” exhibits.

Genzyme Corporation
Henry Blair, a Tufts University scientist, and Sheridan Snyder founded Genzyme in 1981
to develop products based on enzyme technologies. Using venture capital funding, they
purchased a small company, Whatman Biochemicals Ltd., which was absorbed into
Genzyme. In 1983 (the same year that the Orphan Drug Act was passed), they recruited
Henri Termeer to be president, joining the other 10 employees. Termeer had spent the
previous 10 years with Baxter Travenol (later Baxter International), including several
years running its German subsidiary. He left his lucrative position at Baxter to join the
start-up. Shortly after Termeer became CEO, Genzyme raised $28.5 million in its 1986
IPO and began trading on the NASDAQ (ticker: GENZ).
An accidental meeting between Termeer and a former Baxter colleague
turned into a masterful acquisition for Genzyme. On a return flight from Chicago
to Boston in 1989, Termeer and Robert Carpenter, chairman and CEO of Integrated
Genetics (IG), based in Framingham, Massachusetts, discussed the businesses and
finances of the two companies. Several months later, Genzyme purchased IG with its
own stock for the equivalent of $31.5 million or less than $3 per share. Overnight
Genzyme’s expertise received a considerable boost in several areas of biotechnology:
molecular biology, protein and nucleic acid chemistry, and enzymology. Carpenter
served as executive vice president of Genzyme for the next two years and was elected
to the board of directors in 1994 (Exhibit 4.5 lists Genzyme board members).
EXHIBIT 4.5 | Board of Directors, March 31, 2009
Note: Date in parentheses is the first year elected to the board.
Data source: Genzyme Corporation, 14A filing, April 13, 2009.

Avoiding the glamorous blockbuster drug industry, Termeer established Genzyme’s
footprint in the treatment of genetic disorders. His goal was to create targeted drugs
to completely cure these diseases, despite the statistically small populations that were
afflicted. In the company’s formative years, Termeer focused R&D on lysosomal storage
disorders (LSDs). Commonalities among LSD patients were inherited, life-threatening
enzyme deficiencies that allowed the buildup of harmful substances. Cures were aimed
at creating the genetic material to generate the deficient enzymes naturally in these
patients.

Genzyme’s most rewarding product was the first effective long-term enzyme
replacement therapy for patients with a confirmed diagnosis of Type I Gaucher’s
disease. This inherited disease was caused by deficiency of an enzyme necessary for the
body to metabolize certain fatty substances. The deficiency produced several crippling
conditions such as bone disease, enlarged liver or spleen, anemia, or thrombocytopenia
(low blood platelet count).
Initially, the product was known as Ceredase and received a great deal of attention
for its life-saving treatment. It was approved by the FDA in 1991 and protected by the
Orphan Drug Act, but its success was not without controversy. The price for Ceredase
was $150,000 per patient, per year, making it one of the most expensive drugs sold at
the time. Genzyme argued that the price reflected the extraordinary expense of
production; a year’s supply for a single patient required enzyme extraction from
approximately 20,000 protein-rich placentas drawn from a multitude of hospitals around
the world. By 1994, however, Genzyme’s laboratories had developed Cerezyme, a
genetically engineered replacement for Ceredase that was administered via intravenous
infusion. Cerezyme was approved by the FDA in 1995 and also qualified for protection
under the Orphan Drug Act.
Further successes against LSDs included Fabrazyme (to treat Fabry disease) and
Myozyme (to treat Pompe disease). Fabry disease was caused by GL-3, a substance in
cells lining the blood vessels of the kidney. Pompe disease shrank a patient’s muscles,
eventually affecting the lungs and heart. These two drugs, along with Cerezyme, formed
the core business of the company and were developed and sold by its genetic disease
segment (GD).
Termeer was particularly proud of Genzyme’s scientific team for developing
Myozyme. Pompe disease was a debilitating illness that affected both infants
and adults. The symptoms for adults included a gradual loss of muscle strength and
ability to breathe. Depending on the individual, the rate of decline varied, but patients
eventually needed a wheelchair and ultimately died prematurely, most often because of
respiratory failure. The symptoms were similar for infants but progressed at a faster
rate, so death from cardiac or respiratory failure occurred within the first year of life.
The first human trials for Myozyme were conducted on a small sample of newborns and
resulted in 100% of the infants surviving their first year. This success was so dramatic
that the European regulators approved the drug for infants and for adults.
Concurrent with its focus on genetic disorders, the company also invested in the
development of hyaluronic acid-based drugs to reduce the formation of postoperative
adhesions. Initially, it raised funds in 1989 through a secondary stock offering and an
R&D limited partnership. The research the company conducted was significantly
advanced by the acquisition of Biomatrix, Inc., in 2000, forming the biosurgery segment
(BI).
Termeer also searched for nascent biotechnology research companies that had good
products but limited capital or marketing capabilities. As a result, he created numerous
alliances and joint ventures, providing funding in exchange for a share of future revenue
streams. As one example, Genzyme formed a joint venture in 1997 with GelTex
Pharmaceuticals, which specialized in the treatment of conditions in the gastrointestinal
tract. GelTex’s first drug, RenaGel, bound dietary phosphates in patients with chronic
kidney dysfunction.
After 1997, Termeer completed a host of acquisitions. To some extent, the
opportunity for these acquisitions resulted from the economic woes of other
biotechnology firms whose clinical failures affected their funding abilities, resulting in
research cuts and layoffs. Smaller start-up firms were vulnerable to economic stress if
their flagship drug failed to succeed in time. These conditions suited Termeer, who had
begun a broad strategy to diversify. But his strategy was not without risks because even
drugs acquired in late-stage development had not yet been approved by the FDA.
Many of Genzyme’s acquisitions were new drugs in various stages of development
(Exhibit 4.6 shows Genzyme’s major acquisitions). They were generally considered to
be incomplete biotechnologies that required additional research, development, and
testing before reaching technological feasibility. Given the risk that eventual regulatory
approval might not be obtained, the technology was often considered to have no
alternative future use. In those cases, Genzyme calculated the fair value of the
technology and expensed it on the acquisition date as in-process research and
development (IPR&D).
Over time, Genzyme reorganized or added business segments based on its own
R&D results and the addition of acquired firms. By December 2008, the company was
organized into four major segments: GD, cardiometabolic and renal (CR), BI, and
hematologic oncology (HO). (Exhibit 4.7 displays segment product offerings and the
fraction of 2008 revenues generated by each product.)

EXHIBIT 4.6 | Acquisitions: 1997–2007 (in millions of dollars)
Data sources: LexisNexis, “Genzyme Corporation” Mergers and Acquisitions; Genzyme Corporation 10-K filings, 2000–07; Montgomery, 165.
EXHIBIT 4.7 | Main Products by Segment
Data source: Genzyme Corporation, 10-K filings, 2008 and 2009.

In its presentation, RI had analyzed the performance of Genzyme’s business
segments using a metric called cash flow return on investment, or CFROI. The idea was
to quantify the profit generated with respect to the capital that was invested in
each business line (Exhibit 4.8 shows the CFROI estimates by RI for 2008). Termeer
asked Genzyme’s CFO to review the analysis. He believed the performance of the GD
division was correct, but he was not sure about the low performance of the other
segments.

EXHIBIT 4.8 | Genzyme—Estimates of CFROI by Segment (2008)
Note: Cash ROIC = Adjusted Cash Profits/Average Invested Capital.
Source: Relational Investors.

The goal of Termeer’s diversification strategy was to create solutions for curing
more common diseases and to broaden the groups of patients who benefited. Termeer
was also a member of the board of directors of Project HOPE, an international
nonprofit health education and humanitarian assistance organization. Through a
partnership with Project HOPE, Genzyme provided life-saving treatment at no cost to
patients in developing countries, particularly those with inadequate health care services
or medical plans.

Like most biotechnology firms, Genzyme did not pay dividends to its shareholders.
As it stated, “We have never paid a cash dividend on our shares of stock. We currently
intend to retain our earnings to finance future growth and do not anticipate paying any
cash dividends on our stock in the foreseeable future.” The company had repurchased
shares of its common stock amounting to $231.5 million in 2006 and $143 million in
2007, but these were offset by issuances of shares to honor option exercises. There was
no open market share repurchase program.
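The Cash ROIC definition in the Exhibit 4.8 note (adjusted cash profits divided by average invested capital) can be sketched in a few lines. The segment figures below are hypothetical placeholders, not RI’s estimates:

```python
def cash_roic(adjusted_cash_profits: float,
              invested_capital_begin: float,
              invested_capital_end: float) -> float:
    """Cash ROIC per the Exhibit 4.8 note: adjusted cash profits divided by
    average invested capital over the period."""
    average_invested_capital = (invested_capital_begin + invested_capital_end) / 2.0
    return adjusted_cash_profits / average_invested_capital

# Hypothetical segment figures in $ millions (not from the case exhibits):
print(f"{cash_roic(450.0, 2800.0, 3200.0):.1%}")  # 450 / 3,000 = 15.0%
```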
In terms of operations, the $200 million manufacturing facility Genzyme had built in
Allston produced the company’s primary genetic drugs, Cerezyme, Fabrazyme, and
Myozyme. A new facility was being constructed in Framingham, and major international
facilities were located in England, Ireland, and Belgium. Administrative activities,
sales, and marketing were all centered in Cambridge and Framingham. All was well
until the first quarter of 2009, when Termeer received the FDA warning letter in
February outlining deficiencies in the Allston plant. The “significant objectionable
conditions” fell into four categories: maintenance of equipment, computerized systems,
production controls, and failure to follow procedures regarding the prevention of
microbiological contamination. The problems in the Allston plant could be traced
back to Termeer’s decision to stretch the production capacity of the plant to meet an
unanticipated demand for Myozyme. Production had increased, but the strain placed on
the complex processes eventually led to the problems cited by the FDA. Anything that
disrupted the production of the plant concerned Termeer because it produced Genzyme’s
best-selling products, and those medications were critical to the well-being of the
patients who used them.
Relational Investors
If only one word were used to describe 52-year-old Ralph Whitworth, cofounder of
Relational Investors, it would be “performance.” While attending high school in
Nevada, he raced his red 1965 Pontiac GTO against friends on the desert roads near
his home town of Winnemucca, outperforming them all. After obtaining a JD from
Georgetown University Law Center, Whitworth accepted a job with T. Boone
Pickens, the famous “corporate raider” of the 1980s, and gained what he called
“a PhD in capitalism” in the process. He left Pickens in 1996 to found RI with David
Batchelder, whom he had met while working for Pickens. The largest initial investment
was the $200 million that came from the California Public Employees’ Retirement
System (CalPERS). In recognition of RI’s performance, CalPERS had invested a total of
$1.3 billion in RI by 2008. (Exhibit 4.9 illustrates RI’s annual performance.)
RI was commonly classified by observers as an “activist” investment fund. The
typical target firm was one whose discounted cash flow valuation was higher
than the company’s market price. Whitworth trained his executives to
view the gap between a company’s intrinsic value and its market price as the result of
an entrenched management taking care of itself at the expense of its shareholders.
Specifically, Whitworth felt the value gap came primarily from two sources: (1) money
not being spent efficiently enough to earn adequate returns, and/or (2) major corporate
governance problems. Common causes of underperformance were firm diversification
strategies that were not providing an adequate return to shareholders, poor integration
with a merger partner or acquisition, or the misalignment of management incentives.

EXHIBIT 4.9 | Relational Investors—Calendar Year Performance (%)
Note: RI was not required to disclose publicly its performance results. CalPERS disclosed its investment returns in RI’s
Corporate Governance Fund, and this serves as a good proxy for RI’s performance.
Data source: “Performance Analysis for the California Public Employers’ Retirement System,” Wilshire Consulting (Santa
Monica, CA), September 30, 2010.
Once a firm was targeted, RI typically took a 1% to 10% stake in it and then
engaged management with questions backed up by detailed RI analysis. Depending
upon the particular responses from executives and directors, Whitworth would follow
one of several paths. For example, he might request certain changes or consider making
criticisms public. Resistance might result in isolated pressure on one or more
executives or board members. In other instances, Whitworth might request a seat on the
board, suggest a change in executive management or board composition, or initiate a
proxy fight. Management and board compensation was a favorite target of RI criticism
—one that was never well received by the target firm. Just as fans accept high pay for
star athletes, Whitworth had no objection to high compensation for executives, so
long as they performed. (Exhibit 4.10 illustrates some of RI’s major corporate
governance engagements in the past.)
EXHIBIT 4.10 | Relational Investors—High-Profile Corporate Governance Engagements
As one example, in late 2006, Whitworth and Batchelder contacted the board of
Home Depot requesting changes in the company’s strategy. By then, RI had purchased $1
billion of Home Depot stock. Specifically, they criticized CEO Robert Nardelli’s
decision to shift the company’s focus to a lower-margin commercial supply business,
which Nardelli considered a growth opportunity. The shift proved commercially
unsuccessful: Nardelli increased revenues, in keeping with his board-approved
incentive contract, but earnings suffered. After RI’s engagement, Batchelder joined
the board, and Nardelli was ousted.
In another instance, this time with Sovereign Bancorp, corporate governance
was the key issue. One director was found to have executed private transactions in
branch offices. Another had an undisclosed ownership in a landscaping company that
the bank hired. Instead of the more normal compensation of $80,000 paid to board
members of similarly sized banks, Sovereign Bancorp’s board members received
$320,000 a year. After uncovering these events and fighting with the board, Whitworth
succeeded in being elected to it, and CEO Jay Sidhu was ousted.
At its peak, RI’s engagements comprised a total portfolio of $8.4 billion at the end
of third quarter 2007. Given the drop in share prices following the financial crisis and
the impact of several redemptions from investors, RI’s portfolio value had been reduced
to $4.3 billion by the end of March 2009. (Exhibit 4.11 lists the amount of RI’s
engagements as of September 30 for each year since 2001 as well as the active
engagements that RI had as of March 31, 2009).
Notes: (a) Represents end-of-quarter periods until the time of the case (3/2009);
(b) Represents the MV when RI held its maximum % in the company. RI’s position in $ may have been higher at another
time.
Sources: Relational Investors, 13F filings to March 31, 2009.
Jonathan R. Laing, “Insider’s Look Out,” Barron’s, February 19, 2007.
Aaron Bernstein and Jeffrey M. Cunningham, “Whitworth: The Alchemist in the Boardroom,” Directorship, June/July 2007.
Which Path to Follow?
When Termeer finished his walk on the beach, he returned to the office, where he
reviewed Whitworth’s presentation slides. The main slide illustrated RI’s calculation of
the present value of each of Genzyme’s divisions plus its R&D pipeline. The sum of
these, representing RI’s valuation of Genzyme, was compared to the company’s current
stock price (Exhibit 4.12 shows RI’s valuation analysis of Genzyme). It showed that
Genzyme’s share price was trading at $34 below its fundamental value—a significant
discount. RI then offered recommendations as to how Genzyme could address this:
1. Improve capital allocation decision making to ensure that spending would be focused
on the investment with the highest expected return.
2. Implement a share-buyback or dividend program.
3. Improve board composition by adding more members with financial expertise.
4. Focus executive compensation on the achievement of performance metrics.

EXHIBIT 4.11 | Relational Investors—Portfolio Investments, March 31, 2009
Data source: Relational Investors, Form 13F.

EXHIBIT 4.12 | Relational Investors’ Fundamental Valuation of Genzyme
Source: Relational Investors.

Termeer reflected on the first two items on the RI list. During his presentation,
Whitworth stated how impressed he was with Genzyme’s growth and complimented
Termeer on how well he had been able to create significant shareholder value. But
Whitworth anticipated that the years of successful growth were about to lead to high
positive cash flow for several years. (Exhibit 4.13 shows how RI expected Genzyme to
generate significant cash flow in the coming years.) That positive cash flow would
create new challenges for Termeer. Whitworth explained that CEOs often failed to
realize that value-adding investment opportunities were not available at the level of the
cash flows being produced. As the CEOs continued to invest the large cash flows into
lower-return investments, the market would eventually react negatively to the
overinvestment problem and cause the share price to decline. Whitworth argued that it
was better for management to distribute the newfound cash flow as part of a
share repurchase program. Moreover, he thought Genzyme could leverage its
share repurchases by obtaining external funding because Genzyme’s balance sheet could
support a significant increase in debt.
EXHIBIT 4.13 | Relational Investors’ Estimates of Genzyme’s Free Cash Flow
Source: Relational Investors.
Termeer realized it would be difficult for him to change his conservative views
about leverage, particularly in light of the fact that he had been so successful in building
the company without relying on debt. The thought of using debt to enhance a share
repurchase program was doubly difficult for him to accept. But even more important
was his opinion that one had to take a long-term view to succeed in biotechnology.
Whitworth seemed to see investments as simply a use of cash, whereas Termeer saw
investments as being critical to the business model and survival of Genzyme. In fact, the
higher cash flow level would make it easier to fund the investments because it would
reduce or eliminate the need to access capital markets. Termeer had always envisioned
a future where diagnostics and therapeutics would be closer together, and now he
recognized that this future would require Genzyme to pursue a variety of technologies on
an ongoing basis.
Then Termeer’s eyes caught the third item on the list about adding board members
with financial expertise. This brought to mind the earlier demands by another activist
investor, Carl Icahn, who had purchased 1.5 million shares of Genzyme during third
quarter 2007. Termeer had strongly protested Icahn’s involvement, and with the
support of the board made a public plea to shareholders that ultimately led Icahn to sell
his Genzyme shares and turn his attention to Biogen Idec, another major biotechnology company.
In Termeer’s mind, Icahn was more than just an activist investor. During his long
career, Icahn had earned the title of “corporate raider” by taking large stakes in
companies that often culminated in a takeover or, at a minimum, in a contentious proxy
fight. Earlier in the year, Icahn had taken a large position in MedImmune, Inc., and
helped arrange the sale of the company to AstraZeneca PLC. Were the current
circumstances such that Icahn would see another opportunity to target Genzyme again?
Where would Whitworth stand on this? “After all, at the end of the day, both Icahn and
Whitworth are just after the cash flow,” said Termeer.
Other recent events were on Termeer’s mind as well. Genentech, the second-largest
U.S. biotechnology firm and one of Genzyme's competitors, had just lost a bitterly
contested hostile takeover battle with Roche Holding AG at the start of 2009. This takeover
reminded Termeer of the possibility that some Big Pharma companies were looking to
expand their operations into biotechnology.
As Termeer reflected on the last 26 years spent creating and building Genzyme, he
realized that Whitworth’s RI fund had been a shareholder for less than a year and held
only 2.6% of the shares. It was no surprise these two men held such different
viewpoints of what Genzyme had to offer to its owners and to society. Termeer, aware
that he needed a strategy for dealing with Whitworth, had identified three
different approaches he could take:
1. Fight Whitworth as he had fought Icahn. To do this, he would need to enlist the board
to join him in what would be a public relations battle for shareholder support.
2. Welcome Whitworth onto the board to reap the benefits of his experience in how to
create shareholder value. In this regard, he could think of Whitworth as a free consultant.
3. Manage Whitworth by giving him some items on his list of demands but nothing that
would compromise the core mission of Genzyme.

He had arranged for a phone call with Whitworth in the following week. Regardless of
his approach, Termeer expected that Whitworth would probably request a hearing at the
board meeting, which was scheduled two days before the annual shareholders' meeting
on May 21. The prospect of such a meeting with the board only served to emphasize the
importance of Termeer's having a strategy for the upcoming call with Whitworth and
making decisions that would be in the best interest of his company.

Page 87
PART 2 Financial Analysis and Forecasting
Page 89
Business Performance Evaluation: Approaches for
Thoughtful Forecasting
Every day, fortunes are won and lost on the backs of business performance assessments
and forecasts. Because of the uncertainty surrounding business performance, the
manager should appreciate that forecasting is not the same as fortune-telling;
unanticipated events have a way of making certain that specific forecasts are never
exactly correct. This note maintains, however, that thoughtful forecasts greatly aid
managers in understanding the implications of various outcomes (including the most
probable outcome) and in identifying the key bets associated with a forecast. Such forecasts
provide the manager with an appreciation of the odds of business success.
This note examines principles in the art and science of thoughtful financial
forecasting for the business manager. In particular, it reviews the importance of (1)
understanding the financial relationships of a business enterprise, (2) grounding
business forecasts in the reality of the industry and macroenvironment, (3) modeling a
forecast that embeds the implications of business strategy, and (4) recognizing the
potential for cognitive bias in the forecasting process. The note closes with a detailed
example of financial forecasting based on the example of the Swiss food and nutrition
company Nestle.
Understanding the Financial Relationships of the
Business Enterprise
Financial statements provide information on the financial activities of an enterprise.
Much like the performance statistics from an athletic contest, financial statements
provide an array of data identifying various historical strengths and weaknesses
across a broad spectrum of business activities. The income statement (also
known as the profit-and-loss statement) measures flows of costs, revenue, and
profits over a defined period of time, such as a year. The balance sheet provides a
snapshot of business investment and financing at a particular point in time, such as the
end of a year. Both statements combine to provide a rich picture of a business’s
financial performance. The analysis of financial statements is one important way of
understanding the mechanics of the systems that make up business operations.
Interpreting Financial Ratios
Financial ratios provide a useful way to identify and compare relationships across
financial statement line items. Trends in the relationships captured by financial ratios
are particularly helpful in modeling a financial forecast. The comparison of ratios
across time or with similar firms provides diagnostic tools for assessing the health of
the various systems in the enterprise. These tools and the assessments obtained with
them provide the foundation for financial forecasting.
We review common financial ratios for examining business operating performance.
It is worth noting that there is wide variation in the definitions of financial ratios. A
measure such as return on assets is computed in many different ways in the business world.
Although the precise definitions may vary, there is greater consensus on the
interpretation and implication of each ratio. This note presents one such definition and
reviews the interpretation.
Growth rates: Growth rates capture the year-on-year percentage change in a
particular line item. For example, if total revenue for a business increases from $1.8
million to $2.0 million, the total revenue growth for the business is said to be 11.1%
[(2.0 − 1.8)/1.8]. Total revenue growth can be further decomposed into two other
growth measures: unit growth (the growth in revenue due to an increase in units sold)
and price growth (the growth in revenue due to an increase in the price of each unit). In
the above example, if unit growth for the business is 5.0%, the remaining 6.1% of total
growth can be attributed to increases in prices or price growth.
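To make the arithmetic concrete, the growth calculation above can be sketched in a few lines of Python. The figures are the note's example numbers, and the additive split of total growth into unit and price growth follows the note's approximation:

```python
def growth_rate(prior, current):
    """Year-on-year percentage change in a line item."""
    return (current - prior) / prior

# The note's example: total revenue grows from $1.8 million to $2.0 million.
total_growth = growth_rate(1.8, 2.0)    # ~11.1%

# If unit growth is 5.0%, the note attributes the remainder to price growth.
unit_growth = 0.05
price_growth = total_growth - unit_growth   # ~6.1%
```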
Margins: Margin ratios capture the percentage of revenue that flows into profit or,
alternatively, the percentage of revenue not consumed by business costs. Business
profits can be defined in many ways. Gross profit reports the gains to revenue after
subtracting the direct expenses. Operating profit reports the gains to revenue after
subtracting all associated operating expenses. Operating profit is also commonly
referred to as earnings before interest and taxes (EBIT). Net profit reports the gains to
revenue after subtracting all associated expenses, including financing expenses and
taxes. Each of these measures of profit has an associated margin. For example,
if operating profit is $0.2 million and total revenue is $2.0 million, the operating margin
is 10% (0.2/2.0). Thus, for each revenue dollar, an operating profit of $0.10 is
generated and $0.90 is consumed by operating expenses. The margin provides
the analyst with a sense of the cost structure of the business. Common definitions of
margin include the following:
Gross margin = Gross profit/Total revenue
where gross profit equals total revenue less the cost of goods sold.
Operating margin = Operating profit/Total revenue
where operating profit equals total revenue less all operating expenses (EBIT).
NOPAT margin = Net operating profit after tax (NOPAT)/Total revenue
where NOPAT equals EBIT multiplied by (1 − t), where t is the prevailing marginal
income tax rate. NOPAT measures the operating profits on an after-tax basis without
accounting for tax effects associated with business financing.
Net profit margin = Net income/Total revenue
where net income or net profit equals total revenue less all expenses for the period. A
business that has a high gross margin and low operating margin has a cost structure that
maintains high indirect operating expenses, such as the costs associated with advertising
or with property, plant, and equipment (PPE).
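The margin definitions above might be computed as follows. This is a minimal sketch: only the $2.0 million revenue and $0.2 million operating profit come from the note's example; the gross profit and tax rate are assumed for illustration.

```python
def margin(profit, revenue):
    """Share of total revenue flowing into a given measure of profit."""
    return profit / revenue

# Illustrative figures (in $ millions).
total_revenue = 2.0
gross_profit = 0.6                   # revenue less cost of goods sold (assumed)
operating_profit = 0.2               # EBIT, from the note's example
tax_rate = 0.30                      # assumed marginal income tax rate
nopat = operating_profit * (1 - tax_rate)

gross_margin = margin(gross_profit, total_revenue)          # 30%
operating_margin = margin(operating_profit, total_revenue)  # 10%
nopat_margin = margin(nopat, total_revenue)                 # 7%
```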
Turnover: Turnover ratios measure the productivity, or efficiency, of business
assets. The turnover ratio is constructed by dividing a measure of volume from the
income statement (i.e., total revenue) by a related measure of investment from the
balance sheet (i.e., total assets). Turnover provides a measure of how much business
flow is generated per unit of investment. Productive or efficient assets produce high
levels of asset turnover. For example, if total revenue is $2.0 million and total assets
are $2.5 million, the asset-turnover measure is 0.8 times (2.0/2.5). Thus, each dollar of
total asset investment is producing $0.80 in revenue or, alternatively, total assets are
turning over 0.8 times a year through the operations of the business. Common measures
of turnover include the following:
Accounts receivable turnover = Total revenue/Accounts receivable
Accounts receivable turnover measures how quickly sales on credit are collected.
Businesses that take a long time to collect their bills have low receivable turnover
because of their large receivable levels.
Inventory turnover = Cost of goods sold/Inventory
Inventory turnover measures how inventory is working in the business, and whether the
business is generating its revenue on large levels or small levels of inventory. For
inventory turnover (as well as payable turnover) it is customary to use cost of sales as
the volume measure because inventory and purchases are on the books at cost rather
than at the expected selling price.
PPE turnover = Total revenue/Net PPE
PPE turnover measures the operating efficiency of the fixed assets of the
business. Businesses with high PPE turnover are able to generate large amounts of
revenue on relatively small amounts of PPE, suggesting high productivity or asset efficiency.
Asset turnover = Total revenue/Total assets
Total capital turnover = Total revenue/Total capital
Total capital is the amount of capital that investors have put into the business and is
defined as total debt plus total equity. Since investors require a return on the total
capital they have invested, total capital turnover provides a good measure of the
productivity of that investment.
Accounts payable turnover = Cost of goods sold/Accounts payable
Accounts payable turnover measures how quickly purchases on credit are paid.
Businesses that are able to take a long time to pay their bills have low payable turnover
because of their large payables levels.
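A sketch of the turnover calculations follows. Revenue and total assets match the note's example; the remaining balances are assumed for illustration:

```python
# Illustrative balances (in $ millions).
total_revenue = 2.0
cost_of_goods_sold = 1.4       # assumed
total_assets = 2.5
accounts_receivable = 0.22
inventory = 0.35               # assumed
accounts_payable = 0.20        # assumed

asset_turnover = total_revenue / total_assets               # 0.8x, as in the note
receivable_turnover = total_revenue / accounts_receivable   # ~9.1x
inventory_turnover = cost_of_goods_sold / inventory         # 4.0x; uses cost, not revenue
payable_turnover = cost_of_goods_sold / accounts_payable    # 7.0x
```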
An alternative and equally informative measure of asset productivity is a “days”
measure, which is computed as the investment amount divided by the volume amount
multiplied by 365 days. This measure captures the average number of days in a year that
an investment item is held by the business. For example, if total revenue is $2.0 million
and accounts receivable is $0.22 million, the accounts receivable days measure is
calculated as 40.2 days (0.22/2.0 × 365). The days measure can be interpreted as
meaning that the average receivable is held by the business for 40.2 days before being
collected. The lower the days measure, the more efficient is the investment item. If the
accounts receivable balance equals the total revenue for the year, the accounts
receivable days measure is equal to 365 days, as the business has 365 days of
receivables on its books. This means it takes the business 365 days, on average, to
collect its accounts receivable. While the days measure does not actually provide any information that is not
already contained in the respective turnover ratio (as it is simply the inverse of the
turnover measure multiplied by 365 days), many managers find the days measure to be
more intuitive than the turnover measure. Common days measures include the following:
Accounts receivable days = Accounts receivable/Total revenue × 365 days
Inventory days = Inventory/Cost of goods sold × 365 days
Accounts payable days = Accounts payable/Cost of goods sold × 365 days
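The days measure, and its equivalence to the inverted turnover ratio, can be checked with a short sketch using the note's receivables figures:

```python
def days_measure(investment, volume):
    """Average days an investment item is held: (investment / volume) x 365."""
    return investment / volume * 365

# The note's example: $0.22 million of receivables on $2.0 million of revenue.
receivable_days = days_measure(0.22, 2.0)     # ~40.2 days

# Equivalently, 365 divided by the turnover ratio gives the same figure.
receivable_turnover = 2.0 / 0.22
check = 365 / receivable_turnover
```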
Return on investment: Return on investment captures the profit generated per dollar
of investment. For example, if operating profit is $0.2 million and total assets are $2.5
million, pretax return on assets is calculated as operating profit divided by total assets
(0.2/2.5), or 8%. Thus, the total dollars invested in business assets are generating pretax
operating-profit returns of 8%. Common measures of return on investment include the following:
Return on equity (ROE) = Net income/Shareholders’ equity
where shareholders’ equity is the amount of money that shareholders have put into the
business. Since net income is the money that is available to be distributed back to equity
investors, ROE provides a measure of the return the business is generating for
the equity investors.
Return on assets (ROA) = NOPAT/Total assets
where NOPAT equals EBIT × (1 − t), EBIT is the earnings before interest and taxes, and
t is the prevailing marginal income tax rate. As with many of these ratios, there are
other common definitions in use. One common alternative definition of ROA is the following:
Return on assets (ROA) = Net income/Total assets
and, lastly,
Return on capital (ROC) = NOPAT/Total capital
Since NOPAT is the money that can be distributed back to both debt and equity investors
and total capital measures the amount of capital invested by both debt and equity
investors, ROC provides a measure of the return the business is generating for all
investors (both debt and equity). It is important to observe that return on investment can
be decomposed into a margin effect and a turnover effect. That relationship means that
the same level of business profitability can be attained by a business with high margins
and low turnover, such as Nordstrom, as by a business with low margins and high
turnover, such as Wal-Mart. This decomposition can be shown algebraically for the
return on capital:
ROC = NOPAT/Total capital = NOPAT/Total revenue × Total revenue/Total capital
Notice that the equality holds because the quantity for total revenue cancels out across
the two right-hand ratios. ROE can be decomposed into three components:
ROE = Net income/Total revenue × Total revenue/Total capital × Total capital/Shareholders' equity
This decomposition shows that changes in ROE can be achieved in three ways: changes
in net profit margin, changes in total capital productivity, and changes in total capital
leverage. This last measure is not an operating mechanism but rather a financing
mechanism. Businesses financed with less equity and more debt generate higher ROE
but also have higher financial risk.
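A quick numerical check of the ROE decomposition, using assumed figures, shows that the revenue and capital terms cancel so both routes agree:

```python
# Assumed figures (in $ millions), for illustration only.
net_income = 0.12
total_revenue = 2.0
total_capital = 1.6        # total debt plus total equity
shareholders_equity = 1.0

roe_direct = net_income / shareholders_equity

net_profit_margin = net_income / total_revenue           # margin effect
capital_turnover = total_revenue / total_capital         # productivity effect
capital_leverage = total_capital / shareholders_equity   # financing effect
roe_decomposed = net_profit_margin * capital_turnover * capital_leverage
```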
Using Financial Ratios in Financial Models
Financial ratios provide the foundation for forecasting financial statements because
financial ratios capture relationships across financial statement line items that tend to be
preserved over time. For example, one could forecast the dollar amount of gross profit
for next year through an explicit independent forecast. However, a better approach is to
forecast two ratios: a revenue growth rate and a gross margin. Using these two ratios in
combination, one can apply the growth rate to the current year's revenue and
then apply the gross margin to the resulting revenue to yield an implicit dollar forecast for gross profit. As an
example, if we estimate revenue growth at 5% and gross margin at 24%, we can
apply those ratios to last year's total revenue of $2.0 million to derive an implicit gross
profit forecast of $0.5 million [2.0 × (1 + 0.05) × 0.24]. Given some familiarity with the
financial ratios of a business, the ratios are generally easier to forecast with accuracy
than are the expected dollar values. The approach to forecasting is thus to model future
financial statements based on assumptions about future financial ratios.
Financial models based on financial ratios can be helpful in identifying the impact
of particular assumptions on the forecast. For example, models can easily allow one to
see the financial impact on dollar profits of a difference of one percentage point in
operating margin. To facilitate such a scenario analysis, financial models are commonly
built in electronic spreadsheet packages such as Microsoft Excel. Good financial
forecast models make the forecast assumptions highly transparent. To achieve
transparency, assumption cells for the forecast should be prominently displayed in the
spreadsheet (e.g., total revenue growth rate assumption cell, operating margin
assumption cell), and then those cells should be referenced in the generation of the
forecast. In this way, it becomes easy not only to vary the assumptions for different
forecast scenarios, but also to scrutinize the forecast assumptions.
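The ratio-driven approach might be sketched as follows, with the assumptions kept separate and visible like assumption cells in a spreadsheet model (figures from the note's example; variable names are illustrative):

```python
# Forecast assumptions -- prominently displayed, easy to vary and scrutinize.
revenue_growth_assumption = 0.05
gross_margin_assumption = 0.24

prior_revenue = 2.0    # last year's total revenue, $ millions

# The dollar forecasts are derived from the assumptions, not entered directly.
forecast_revenue = prior_revenue * (1 + revenue_growth_assumption)
forecast_gross_profit = forecast_revenue * gross_margin_assumption   # ~$0.5 million
```

Because the dollar figures reference the assumption variables, rerunning the model under a different growth or margin assumption immediately propagates through the forecast.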
Grounding Business Forecasts in the Reality of the
Industry and Macroenvironment
Good financial forecasts recognize the impact of the business environment on the
performance of the business. Financial forecasting should be grounded in an
appreciation for industry- and economy-wide pressures. Because business performance
tends to be correlated across the economy, information regarding macroeconomic
business trends should be incorporated into a business’s financial forecast. If, for
example, price increases for a business are highly correlated with economy-wide
inflation trends, the financial forecast should incorporate price growth assumptions that
capture the available information on expected inflation. If the economy is in a recession,
then the forecast should be consistent with that economic reality.
Thoughtful forecasts should also recognize the industry reality. Business prospects
are dependent on the structure of the industry in which the business operates. Some
industries tend to be more profitable than others. Microeconomic theory provides some
explanations for the variation in industry profitability. Profitability within an industry is
likely to be greater if (1) barriers to entry discourage industry entrants, (2) ease of
industry exit facilitates redeployment of assets for unprofitable players, (3) industry
participants exert bargaining power over buyers and suppliers, or (4) industry
consolidation reduces price competition. Table 5.1 shows the five most and the five
least profitable industries in the United States based on median pretax ROAs for all
public firms from 2005 to 2014. Based on this evidence, financial forecasts for firms
operating in the apparel and accessory retail industry should have reflected
systematically higher profitability over that period than forecasts for firms in the metal-mining
industry. One explanation for the differences in industry profitability is the ease of
industry exit. In the retail industry, unprofitable businesses are able to sell their assets
easily for redeployment elsewhere. In the mining industries, where asset redeployment
is much more costly, excess industry capacity may have dragged down industry profitability.
TABLE 5.1 | Most profitable and least profitable U.S. industries: 2005–2014.
Page 96
Being within a profitable industry, however, does not ensure superior business
performance. Business performance also depends on the competitive position of the
firm within the industry. Table 5.2 shows the variation of profitability for firms within
the U.S. apparel and accessory stores industry from 2005 to 2014. Although Table 5.1
shows it to be one of the most profitable industries, there is large variation in
profitability within it. All five firms at the bottom of the profitability list
generated median ROAs that were actually negative (Delia’s, Frederick’s, Bakers
Footwear, Pacific Sunwear, and Coldwater Creek). Good forecasting considers the
ability of a business to sustain performance given the structure of its industry and its
competitive position within that industry.
Abnormal profitability is difficult to sustain over time. Competitive pressure tends
to bring abnormal performance toward the mean. To show that effect, we can sort all
U.S. public companies for each year from 2005 to 2015 into five groups (group 1 with
low profits through group 5 with high profits) based on their annual ROAs and sales
growth. We then follow what happened to the composition of those groups over the next
three years. The results of this exercise are captured in Figure 5.1. The ROA
graph shows the mean group rankings for firms in subsequent years. For
example, firms that ranked in group 5 with the top ROA at year 0 tend to have a mean
group ranking of 4.5 in year 1, 4.3 in year 2, and 3.7 in year 3. Firms that ranked in
group 1 with the lowest ROA at year 0 tend to have a mean group ranking of 1.5 in year
1, 1.7 in year 2, and 2.2 in year 3.
TABLE 5.2 | Most and least profitable firms within the apparel and accessory stores retail industry:
2005–2014. Rankings in Tables 5.1 and 5.2 are based on all firms from Compustat organized into
industries by 2-digit SIC codes.
There is a systematic drift toward average
performance (3.0) over time. The effect is even stronger vis-à-vis sales growth.
Figure 5.1 provides the transition matrix for average groups sorted by sales growth.
Here we see that, by year 2, the average sales growth ranking for the high-growth group
is virtually indistinguishable from that of the low-growth group.
Figure 5.1 illustrates that business is fiercely competitive. It is naïve to assume that
superior business profitability or growth can continue unabated for an extended period.
Abnormally high profits attract competitive responses that eventually return profits to
their normal levels.
FIGURE 5.1 | Firm-ranking annual transitions by profitability and sales growth. Firms are sorted for
each year into five groups by either annual pretax ROA or sales growth. For example, in the ROA
panel, group 1 comprises the firms with the lowest 20% of ROA for the year; group 5 comprises the
firms with the highest 20% of ROA for the year. The figure plots the mean ranking number for all U.S.
public firms in the Compustat database from 2005 to 2015.
Modeling a Base-Case Forecast that Incorporates
Expectations for Business Strategy
Page 97
With a solid understanding of the business’s historical financial mechanics and of the
environment in which the business operates, the forecaster can incorporate the firm’s
operating strategy into the forecast in a meaningful way. All initiatives to improve
revenue growth, profit margin, and asset efficiency should be explicitly reflected in the
financial forecast. The forecast should recognize, however, that business
strategy does not play out in isolation. Competitors do not stand still. A good
forecast recognizes that business strategy also begets competitive response. All
modeling of the effects of business strategy should be tempered with an appreciation for
the effects of aggressive competition.
One helpful way of tempering the modeling of business strategy’s effects is to
complement the traditional bottom-up approach to financial forecasting with a top-down
approach. The top-down approach starts with a forecast of industry sales and then
works back to the particular business of interest. The forecaster models firm sales by
modeling market share within the industry. Such a forecast makes more explicit the
challenge that sales growth must come from either overall industry growth or market
share gain. A forecast that explicitly demands a market share gain of, say, 20% to 24%,
is easier to scrutinize from a competitive perspective than a forecast that simply
projects sales growth without any context (e.g., at an 8% rate).
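A top-down sketch along these lines, with hypothetical industry figures, makes the market-share bet explicit:

```python
# Top-down sales forecast: start with industry sales, then model market share.
# All figures are hypothetical, for illustration only.
industry_sales_now = 50.0     # $ millions
industry_growth = 0.03        # expected industry growth
share_now = 0.20              # current market share
share_target = 0.24           # assumed share-gain scenario

firm_sales_now = industry_sales_now * share_now
industry_sales_next = industry_sales_now * (1 + industry_growth)
firm_sales_next = industry_sales_next * share_target

implied_firm_growth = firm_sales_next / firm_sales_now - 1
# A share gain from 20% to 24% plus 3% industry growth implies roughly 24%
# firm sales growth -- a figure far easier to challenge once it is framed
# as a market-share bet rather than a bare growth rate.
```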
Another helpful forecasting technique is to articulate business perspectives into a
coherent qualitative view on business performance. This performance view encourages
the forecaster to ground the forecast in a qualitative vision of how the future will play
out. In blending qualitative and quantitative analyses into a coherent story, the forecaster
develops a richer understanding of the relationships between the financial forecast and
the qualitative trends and developments in the enterprise and its industry.
Forecasters can better understand their models by identifying the forecast’s value
drivers, which are those assumptions that strongly affect the overall outcome. For
example, in some businesses the operating margin assumption may have a dramatic
impact on overall business profitability, whereas the assumption for inventory turnover
may make little difference. For other businesses, the inventory turnover may have a
tremendous impact and thus becomes a value driver. In varying the assumptions, the
forecaster can better appreciate which assumptions matter and thus channel resources to
improve the forecast’s precision by shoring up a particular assumption or altering the
business strategy to improve the performance of a particular line item.
Lastly, good forecasters understand that it is more useful to think of forecasts as
ranges of possible outcomes rather than as precise predictions. A common term in
forecasting is the “base-case forecast.” A base-case forecast represents the best guess
outcome or the expected value of the forecast’s line items. In generating forecasts, it is
also important to have an unbiased appreciation for the range of possible outcomes,
which is commonly done by estimating a high-side and a low-side scenario. In this way,
the forecaster can bound the forecast with a relevant range of outcomes and can best
appreciate the key bets of the financial forecast.
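A minimal sketch of bounding a base case with high- and low-side scenarios, using hypothetical growth rates:

```python
# Base-case forecast bounded by high- and low-side scenarios, expressed as
# growth-rate assumptions (all figures hypothetical).
prior_revenue = 2.0    # $ millions
growth_scenarios = {"low": 0.00, "base": 0.05, "high": 0.10}

revenue_forecasts = {name: prior_revenue * (1 + g)
                     for name, g in growth_scenarios.items()}
# The low and high cases bracket the base case with a relevant range of outcomes.
```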
Recognizing the Potential for Cognitive Bias in the
Forecasting Process
A substantial amount of research suggests that human decision making can be
systematically biased. Bias in financial forecasts creates systematic problems in
managing and investing in the business. Two elements of cognitive bias that play a role
in financial forecasting are optimism bias and overconfidence bias. This note defines
optimism bias as a systematic positive error in the expected value of an
unknown quantity, and defines the overconfidence bias as a systematic negative
error in the expected variance of an unknown quantity. The definitions of those two
terms are shown graphically in Figure 5.2. The dark curve shows the true distribution of
the sales growth rate. The realization of the growth rate is uncertain, with a higher
probability of its being in the central part of the distribution. The expected value for the
sales growth rate is g*; thus, the proper base-case forecast for the sales growth rate is
precisely g*. The light curve shows the distribution expected by the average forecaster.
This distribution is biased for two reasons. First, the expected value is too high. The
forecaster expects the base-case sales growth rate to be g’, rather than g*. Such positive
bias for expected value is termed optimistic. Second, the dispersion of the distribution
is too tight. This dispersion is captured by the variance (or standard deviation) statistic.
Because the forecast dispersion is tighter than the true dispersion, the forecaster exhibits
negative variance bias, or overconfidence—the forecaster believes that the forecast is
more precise than it really is.
A description and the implications of an experiment on forecasting bias among MBA
students are provided in an Appendix to this note.
FIGURE 5.2 | Optimism and overconfidence biases in forecasting the sales growth rate.
Nestle: An Example
In 2013, Nestle was one of the world’s largest food and health companies.
Headquartered in Switzerland, the company was truly a multinational organization with
factories in 86 countries around the world. Suppose that in early 2014, we needed to
forecast the financial performance of Nestle for the end of 2014. We suspected that one
sensible place to start was to look at the company’s performance over the past few
years. Exhibit 5.1 provides Nestle's income statement and balance sheet for 2012 and 2013.
EXHIBIT 5.1 | Financial Statements for Nestle SA (in billions of Swiss francs)
One approach to forecasting the financial statements for 2014 is to forecast
each line item from the income statement and balance sheet independently. Such
an approach, however, ignores the important relationships among the different line items
(e.g., costs and revenues tend to grow together). To gain an appreciation for those
relationships, we calculate a variety of ratios (Exhibit 5.1). (Note: Although including
both turnover and days ratios is redundant, doing so illustrates the two
perspectives.) In calculating the ratios, we
notice some interesting patterns. First, sales growth declined sharply in 2013, from
7.4% to 2.7%. The decline in sales growth was accompanied by a much smaller decline in
profitability margins; operating margin declined from 14.9% to 14.1%. Meanwhile, the
asset ratios showed modest improvement; total asset turnover improved only slightly,
from 0.7× to 0.8×. Asset efficiency improved across the various classes of assets (e.g.,
accounts receivable days improved in 2013, from 53.0 days to 48.2 days; PPE turnover
also improved, from 2.8× to 3.0×). Overall, in 2013 Nestle's declines in sales growth
and margins were counteracted by improvements in asset efficiency, such that return on
assets improved from 6.9% to 7.1%. Because return on assets comprises both a margin
effect and an asset-productivity effect, we can attribute the 2013 improvement in return
on assets to a denominator effect—Nestle’s asset efficiency improvement. The
historical ratio analysis gives us some sense of the trends in business performance.
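The margin-times-turnover logic behind that attribution can be sketched in a few lines of Python. The after-tax margins below are backed out from the reported ROA and turnover figures, so they are rounded approximations rather than Exhibit 5.1 values:

```python
# ROA decomposition: return on assets is the product of a margin effect
# (profit / sales) and an asset-productivity effect (sales / assets).

def roa(after_tax_margin, asset_turnover):
    """Return on assets = (profit / sales) * (sales / assets)."""
    return after_tax_margin * asset_turnover

# Implied after-tax margins, backed out from the reported ROA and total
# asset turnover (approximate; the actual statements are in Exhibit 5.1).
margin_2012 = 0.069 / 0.7   # roughly 9.9%
margin_2013 = 0.071 / 0.8   # roughly 8.9%

# The margin effect worsened, but the asset-productivity (denominator)
# effect improved enough that ROA still rose from 6.9% to 7.1%.
assert roa(margin_2013, 0.8) > roa(margin_2012, 0.7)
```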
A common way to begin a financial forecast is to extrapolate current ratios into the
future. For example, a simple starting point would be to assume that the 2013 financial
ratios hold in 2014. If we make that simplifying assumption, we generate the financial
forecast presented in Exhibit 5.2 . We recognize this forecast as naïve, but it provides a
straw-man forecast through which the relationships captured in the financial ratios can
be scrutinized. In the forecast, each line-item figure is built from the ratios; the computation of each line item is referenced to the right of the figure, based on the ratios shown below the statements. Such a forecast is known as a financial model. The design of the model is thoughtful. By linking the franc figures with the
financial ratios, the model preserves the existing relationships across line items and can
be easily adjusted to accommodate different ratio assumptions.
EXHIBIT 5.2 | Naïve Financial Forecast for Nestle SA (in billions of Swiss francs)
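The mechanics of such a ratio-linked model can be sketched as follows. The ratio values are the 2013 figures quoted above, while the prior-year sales level of 100 is a hypothetical placeholder rather than a figure from the exhibits:

```python
# A minimal ratio-driven financial model: every forecast line item is
# computed from forecast sales and an assumed ratio, so changing one
# ratio assumption flows through the whole forecast consistently.

def forecast(prior_sales, a):
    sales = prior_sales * (1 + a["sales_growth"])
    return {
        "sales": sales,
        "operating_profit": sales * a["operating_margin"],
        "receivables": sales * a["receivable_days"] / 365,
        "ppe": sales / a["ppe_turnover"],
        "total_assets": sales / a["total_asset_turnover"],
    }

# Naive forecast: simply hold the 2013 ratios for 2014.
naive = forecast(100.0, {
    "sales_growth": 0.027,        # 2013 sales growth
    "operating_margin": 0.141,    # 2013 operating margin
    "receivable_days": 48.2,      # 2013 receivable days
    "ppe_turnover": 3.0,          # 2013 PPE turnover
    "total_asset_turnover": 0.8,  # 2013 total asset turnover
})
```

Because each line item is a function of the ratio assumptions, revising a single assumption automatically updates every dependent figure.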
We can now augment the naïve model with qualitative and
quantitative research on the company, its industry, and the overall economy. In early
2014, Nestle was engaged in important efforts to expand its product line in foods with all-natural ingredients as well as its presence in the Asia-Pacific region. These initiatives required investment in new facilities. It was hoped that the
initiatives would make up for ongoing declines in some of Nestle’s important product
offerings, particularly prepared dishes. Nestle was made up of seven major business
units: powdered and liquid beverages (22% of total sales), water (8%), milk products
and ice cream (18%), nutrition and health science (14%), prepared dishes and cooking
aids (15%), confectionery (11%), and pet care (12%). The food-processing industry had recently seen a substantial decline in demand for its products in the developing world, driven by important macroeconomic factors. The softening of growth had led to increased competitive pressures within an industry that included such food giants as Mondelez, Tyson, and Unilever.
Based on this simple business and industry assessment, we take the view
that Nestle will maintain its position in a deteriorating industry. We can adjust
the naïve 2014 forecast based on that assessment (Exhibit 5.3). We suspect that the
softening of demand in developing markets and the prepared dishes line will lead to
zero sales growth for Nestle in 2014. We also expect that the increased competition within the industry will raise spending to an operating expense-to-sales ratio of 35%. Those assumptions give us an operating margin estimate of 12.9%. We expect the increased competition to reduce Nestle’s ability to turn its inventory, such that inventory turnover returns to the 2012–2013 average of 5.53×. We project PPE turnover to decline to 2.8× with the increased investment in new
facilities that are not yet operational. Those assumptions lead to an implied financial
forecast. The resulting projected after-tax ROA is 6.3%. The forecast is thoughtful. It
captures a coherent view of Nestle based on the company’s historical financial
relationships, a grounding in the macroeconomic and industry reality, and the
incorporation of Nestle’s specific business strategy.
EXHIBIT 5.3 | Revised Financial Forecast for Nestle SA (in billions of Swiss francs)
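Under the same ratio-model logic, the revised assumptions can be plugged in directly. The 2013 sales level of 100 is again a hypothetical placeholder, and inventory turnover is applied to sales here for simplicity even though the exhibit may compute it on cost of goods sold:

```python
# Revised 2014 scenario: swap in the adjusted ratio assumptions.
sales_2013 = 100.0                     # hypothetical base, for illustration

sales_2014 = sales_2013 * (1 + 0.00)   # zero sales growth assumption
operating_profit = sales_2014 * 0.129  # 12.9% operating margin estimate
inventory = sales_2014 / 5.53          # turnover back to 2012-2013 average
ppe = sales_2014 / 2.8                 # PPE turnover declines to 2.8x
```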
We recognize that we cannot anticipate all the events of 2014. Our forecast will
inevitably be wrong. Nevertheless, we suspect that, by being thoughtful in our analysis,
our forecast will provide a reasonable, unbiased expectation of future performance.
Exhibit 5.4 gives the actual 2014 results for Nestle. The big surprise was that the effect
of competition was worse than anticipated. Nestle’s realized sales growth was actually
negative, and its operating margin dropped from 14.9% and 14.1% in 2012 and 2013,
respectively, to 11.9% in 2014. Our asset assumptions were fairly close to the outcome,
although the inventory turnover and PPE turnover were a little worse than we had
expected. Overall, the ROA for Nestle dropped from 7.1% in 2013 to 5.3% in 2014.
Although we did not complete high-side and low-side scenarios in this simple
example, had we done so, we could have better assessed the sources and level of
uncertainty in our forecast.
EXHIBIT 5.4 | Actual Financial Performance for Nestle SA (in billions of Swiss francs)
To test for forecasting bias among business school forecasters, an experiment was
performed in 2005 with the 300 first-year MBA students at the Darden School of
Business at the University of Virginia. Each student was randomly assigned to both a
U.S. public company and a year between 1980 and 2000. Some students were assigned
the same company, but no students were assigned the same company and the same year.
The students were asked to forecast sales growth and operating margin for their
assigned company for the subsequent three years. The students based their forecasts on
the following information: industry name, firm sales growth and operating
margin for the previous three years, historical and three-year prospective
industry average growth and margins, and certain macroeconomic historical and three-year forecast data (real gross national product [GNP] growth, inflation rates, and the prevailing Treasury bill yield). To avoid biasing the forecasts based on subsequent
known outcomes, students were given the name of their firm’s industry but not the firm’s
name. For the same reason, the students were not given the identity of the current year.
The responses were submitted electronically and anonymously. Forecast data from
students who agreed to allow their responses to be used for research purposes were
aggregated and analyzed. Summary statistics from the responses are presented in
Figure 5.3.
FIGURE 5.3 | Median expected and actual financial forecast values for a random sample of U.S. companies. This figure plots the median forecast and actual company realization for sales growth and operating margin over the three-year historical period and the three-year forecast period based on the responses from MBA students in an experiment.
The median values for the base-case forecast of expected sales growth and operating margin are plotted in Figure 5.3. The sales growth panel suggests that students
tended to expect growth to continue to improve over the forecast horizon (years 1
through 3). The operating margin panel suggests that students expected near-term
performance to be constant, followed by later-term improvement. To benchmark the
forecast, we can compare the students’ forecasts with the actual growth rates and
operating margins realized by the companies. We expect that if students were unbiased
in their forecasting, the distribution of the forecasts should be similar to the distribution
of the actual results. Figure 5.3 also plots the median value for the actual realizations.
We observe that sales growth for these randomly selected firms did not improve but
stayed fairly constant, whereas operating margins tended to decline over the extended
term. The gap between the two lines represents the systematic bias in the students’
forecasts. Because the bias in both cases is positive, the results are consistent with
systematic optimism in the students’ forecasts. By the third year, the optimism bias is a
large 4 percentage points for the sales growth forecast and almost 2 percentage points
for the margin forecast.
Although the average student tended to exhibit an optimistic bias, there was
variation in the bias across different groups of students. The forecast bias was
further examined across two characteristics: gender and professional training. For both
sales growth and operating margin, the test results revealed that males and those whose
professional backgrounds were outside finance exhibited the most optimistic bias. For
example, the bias in the third-year margin forecast was 0.7% for those with professional
finance backgrounds and 1.9% for those outside finance; and 2.6% for the male students
and just 0.8% for the female students.
In generating forecasts, it is also important to have an unbiased appreciation for the precision of the forecast, which is commonly done by estimating a high-side and a low-side scenario. To determine whether students were unbiased in appreciating the risk in
forecast outcomes, they were asked to provide a high-side and a low-side scenario. The
high-side scenario was defined explicitly as the 80th percentile level. The low-side
scenario was defined as the 20th percentile level. Figure 5.4 plots the median high-side
and low-side scenarios, as well as the expected base-case forecast presented in
Figure 5.3. For the three-year horizon, the median high-side forecast was 4 percentage
points above the base case and the low-side forecast was 4 percentage points below the
base case. The actual 80th percentile performance was 8 percentage points above the
base case and the actual 20th percentile was 12 percentage points below the base case.
The results suggest that the true variance in sales growth is substantially greater than that
estimated by the students. The same is also true of the operating margin. The estimates
provided by the students are consistent with strong overconfidence (negative variance
bias) in the forecast.
FIGURE 5.4 | Median base-case, high-side, and low-side forecasts versus the actual 20th and 80th performance percentiles for sales growth and operating margin. This figure plots the median base-case, high-side, and low-side forecasts for sales growth and operating margin over the three-year forecast period based on the responses from MBA students in an experiment. The low-side and high-side performance levels were defined as the students’ estimates of the 20th and 80th percentile levels. The actual company 20th and 80th performance percentiles for sales growth and operating margin are also plotted.
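The calibration check behind these results can be sketched as follows, using the approximate sales-growth figures (in percentage points relative to the base case) quoted above:

```python
# Interval-forecast calibration: a forecaster is well calibrated if the
# forecast 20th-80th percentile band is about as wide as the band of
# realized outcomes; a much narrower forecast band signals overconfidence.

forecast_band = (-4.0, 4.0)   # students' median low-side/high-side
actual_band = (-12.0, 8.0)    # realized 20th/80th percentiles

def width(band):
    low, high = band
    return high - low

# 8 points of forecast spread versus 20 points of realized spread:
# consistent with the strong overconfidence described in the text.
overconfident = width(forecast_band) < width(actual_band)
```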

CASE 6 The Financial Detective, 2016
Financial characteristics of companies vary for many reasons. The two most prominent
drivers are industry economics and firm strategy.
Each industry has a financial norm around which companies within the industry tend
to operate. An airline, for example, would naturally be expected to have a high
proportion of fixed assets (airplanes), while a consulting firm would not. A steel
manufacturer would be expected to have a lower gross margin than a pharmaceutical
manufacturer because commodities such as steel are subject to strong price competition,
while highly differentiated products like patented drugs enjoy much more pricing
freedom. Because of each industry’s unique economic features, average financial
statements will vary from one industry to the next.
Similarly, companies within industries have different financial characteristics, in
part because of the diverse strategies that can be employed. Executives choose
strategies that will position their company favorably in the competitive jockeying within
an industry. Strategies typically entail important choices about how a product is
made (e.g., capital intensive versus labor intensive), how it is marketed (e.g., direct
sales versus the use of distributors), and how the company is financed (e.g., the use of
debt or equity). Strategies among companies in the same industry can differ
dramatically. Different strategies can produce striking differences in financial results for
firms in the same industry.
The following paragraphs describe pairs of participants in a number of different
industries. Their strategies and market niches provide clues as to the financial condition
and performance that one would expect of them. The companies’ common-sized
financial statements and operating data, as of early 2016, are presented in a
standardized format in Exhibit 6.1. It is up to you to match the financial data with the
company descriptions. Also, try to explain the differences in financial results across
the firms in each pair.

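Common-sizing, the transformation applied in Exhibit 6.1, divides income statement items by sales and balance sheet items by total assets so that firms of very different sizes can be compared directly. A minimal sketch, using hypothetical figures rather than Exhibit 6.1 data:

```python
def common_size(items, base):
    """Express each line item as a fraction of the base figure."""
    return {name: value / base for name, value in items.items()}

# Hypothetical statements for illustration (not Exhibit 6.1 data).
income = {"sales": 500.0, "cost_of_goods_sold": 350.0, "operating_profit": 60.0}
balance = {"cash": 40.0, "ppe": 160.0, "total_assets": 400.0}

cs_income = common_size(income, income["sales"])            # COGS -> 70% of sales
cs_balance = common_size(balance, balance["total_assets"])  # PPE -> 40% of assets
```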
Companies A and B are airline companies. One firm is a major airline that flies both
domestically and internationally and offers additional services including travel
packages and airplane repair. The company owns a refinery to supply its own jet fuel as
a hedge to fuel-price volatility. In 2008, this company merged with one of the largest
airline carriers in the United States.
The other company operates primarily in the United States, with some routes to the
Caribbean and Latin America. It is the leading low-cost carrier in the United States.
One source of operating efficiency is that the company operates only three
different aircraft models in its fleet, making maintenance much simpler than for legacy airlines
that might need to service 20 or 30 different aircraft models. This company’s growth has
been mostly organic—it expands its routes by purchasing new aircraft and the rights to
fly into new airports.
Of the beer companies, C and D, one is a national brewer of mass-market consumer
beers sold under a variety of brand names. This company operates an extensive network
of breweries and distribution systems. The firm also owns a number of beer-related
businesses, such as snack-food and aluminum-container manufacturing companies, as
well as several major theme parks. Over the past 12 years, it has acquired several large
brewers from around the globe.
The other company is the largest craft brewer in the United States. Like most craft
brewers, this company produces higher-quality beers than the mass-market brands, but production is at a lower volume and the beers carry premium prices. The firm is financially conservative.
EXHIBIT 6.1 | Common-Sized Financial Data and Ratios
nmf = not a meaningful figure
Data sources: S&P Research Insight, Capital IQ, and Value Line Investment Survey.
Companies E and F sell computers and related equipment. One company sells high-performance computing systems (“supercomputers”) to government agencies,
universities, and commercial businesses. It has experienced considerable growth due to
an increasing customer base. The company is financially conservative.
The other company sells personal computers as well as handheld devices and
software. The firm has been able to differentiate itself by using its own operating system
for its computers and by creating new and innovative designs for all its products. These
products carry premium prices domestically and globally. The company follows a
vertical integration strategy starting with owning chip manufacturers and ending with
owning its own retail stores.
Companies G and H are both in the hospitality business. One company operates hotels
and residential complexes. Rather than owning the hotels, this firm chooses to manage
or franchise its hotels. The company receives its revenues each month based on long-term contracts with the hotel owners, who pay a percentage of the hotel revenues as a management fee or franchise fee. Much of this company’s growth
is inorganic—the company buys the rights to manage existing hotel chains and also the
rights to use the hotel’s brand name. This company has also pursued a strategy of
repurchasing a significant percentage of the shares of its own common stock.
The other company owns and operates several chains of upscale, full-service hotels
and resorts. The firm’s strategy is to maintain market presence by owning all of its
properties, which contributes to the high recognition of its industry-leading brands.
Companies I and J are newspaper companies. One company owns and operates two
newspapers in the southwestern United States. Due to the transition of customer
preference from print to digital, the company has begun offering marketing and digital-advertising services and acquiring firms in more profitable industries. The company has
introduced cost controls to address cost-structure issues such as personnel expenses.
Founded in 1851, the other company is renowned for its highly circulated
newspaper offered both in print and online formats. This paper is sold and distributed
domestically as well as around the world. Because the company is focused largely on
one product, it has strong central controls that have allowed it to remain profitable
despite the fierce competition for subscribers and advertising revenues.
Companies K and L manufacture and market pharmaceuticals. One firm is a diversified
company that sells both human pharmaceuticals as well as health products for animals.
This company’s strategy is to stay ahead of the competition by investing in the discovery
and development of new and innovative drugs.
The other company focuses on generic pharmaceuticals and medical devices. Most
of this company’s growth has been inorganic—the growth strategy has been to engage in
highly leveraged acquisitions, and it has participated in more than 100 during the past
eight years. The goal of acquiring new businesses is to enhance the value of the proven drugs in the company’s portfolio rather than gamble on the discovery of new drugs.
Companies M and N are in the power-generation industry. One company focuses on
solar power. This includes the manufacturing and selling of power systems as well as
maintenance services for those systems.
The other company owns large, mostly coal-powered electric-power-generation
plants in countries around the world. Most of its revenues result from power-purchase
agreements under which a country’s government buys the power generated. Some of its U.S.
assets include regulated public utilities.
Companies O and P are retailers. One is a leading e-commerce company that sells a
broad range of products, including media (books, music, and videos) and electronics,
which together account for 92% of revenues. One-third of revenues are international and
20% of sales come from third-party sellers (i.e., sellers who transact through the
company’s website to sell their own products rather than those owned by the company).
A growing portion of operating profit comes from the company’s cloud-computing
business. With its desire to focus on customer satisfaction, this company has invested
considerably in improving its online technologies.
The other company is a leading retailer in apparel and fashion accessories for men,
women, and children. The company sells mostly through its upscale brick-and-mortar
department stores.
CASE 7 Whole Foods Market: The Deutsche Bank Report
The latest numbers coming out of Whole Foods Market, Inc. (Whole Foods) in May
2014 took Deutsche Bank research analyst Karen Short and her team by surprise. On
May 6, Whole Foods reported just $0.38 per share in its quarterly earnings report,
missing Wall Street’s consensus of $0.41 and cutting earnings guidance for the
remainder of the year. The company’s share price fell 19% to $38.93 the next day as
Whole Foods’ management acknowledged that it faced an increasingly competitive
environment that could compress margins and slow expansion. The only upbeat news
was the 20% increase in the company’s quarterly dividend, up from $0.10 to $0.12 per
share. Short and her team knew this was not the first time the market believed Whole
Foods had gone stale. In 2006, Whole Foods’ stock had also declined 20% over fears of
slowing growth and increasing competition, but had since bounced back and
outperformed both its competition and the broader market (see Exhibit 7.1 for stock
price performance). Nevertheless, it was time for Short and her team to discuss how the
news altered their outlook for the company in a revised analyst report. The main point
of discussion would certainly be whether Whole Foods still had a recipe for success.
EXHIBIT 7.1 | Share Price Performance of Whole Foods Market Indexed to S&P 500 Index (January 2005 to April 2014)
Data source: Yahoo! Finance, author analysis.
The Grocery Industry
The U.S. grocery industry as a whole had historically been a low-growth industry, and, as a result of fierce competition, had typically maintained low margins. In 2012, the industry recorded over $600 billion in sales, a 3% increase from the previous year. Real demand growth was strongly tied to population growth, and consensus estimates for the nominal long-term growth rate were between 2% and 3%. Key segments included conventional grocers such as Kroger, Publix, Safeway, and Albertsons; supercenters such as Wal-Mart and Target; natural grocers such as Whole Foods, Sprouts Farmers Market (Sprouts), and The Fresh Market (Fresh Market); and wholesalers such as Costco and Sam’s Club. Conventional grocers remained the
primary destination for shoppers, but competition from Wal-Mart, wholesalers, and
other low-price vendors had driven down conventional grocers’ share of food dollars
for over a decade; for example, Wal-Mart was the largest food retailer in the United
States in 2014, with 25% market share. Exhibit 7.2 provides market share information
for the U.S. grocery market. The narrow margins and limited growth opportunities
favored large competitors that could leverage efficiencies in purchasing and distribution
to pass savings on to the consumer. As a result, many small competitors had been
acquired or forced to close. Consumers were extremely price conscious and came to
expect promotions (which were largely funded by manufacturers), and most shoppers
did not have strong attachments to particular retail outlets.
EXHIBIT 7.2 | Select Market Share Data
Source: Market Share Reporter (Farmington Hills, MI: Gale, 2014) and author analysis.
Given this environment, companies relentlessly searched for opportunities to achieve growth and improve margins. Many grocers had implemented loyalty programs to reward repeat shoppers, and most were trying to improve the in-store customer experience, for instance by using self-checkout lines and other operational adjustments to reduce checkout times, a source of frequent complaints. Given the high percentage of perishable goods in the industry, supply chain management was essential, and
companies were using improved technology to more efficiently plan their inventories.
Grocers also began promoting prepared foods, which could command higher margins
and reach consumers who did not regularly cook their own meals. Finally, most major
grocers offered private-label products, which allowed them to offer low prices while
still capturing sufficient margins.
Despite operating in a competitive and low-growth industry, natural grocers had
grown rapidly over the past two decades. Increasingly health-conscious consumers
were concerned about the source and content of their food, which fueled natural grocers’
sustained growth (over 20% per year since 1990) despite their comparatively higher
prices. In 2012, natural and organic products accounted for $81 billion in total sales in
the United States, a 10% increase from the previous year. Organic products, which
were more narrowly defined than natural products, accounted for about $28 billion of
these sales and were expected to top $35 billion by the end of 2014. Exhibit 7.3
provides growth forecast and share data on the natural and organic segments. As of
2014, 45% of Americans explicitly sought to include organic food in their meals, and
more than half of the country’s 18–29-year-old population sought it out. By specializing
in such products, natural grocers were able to carve out a profitable niche: the three
leading natural grocers (Whole Foods, Sprouts, and Fresh Market) had EBITDA
margins of 9.5%, 7.7%, and 9.1% respectively, whereas Kroger, the leading
conventional supermarket, had an EBITDA margin of only 4.5%. Exhibits 7.4
and 7.5 contain operating and financial information for selected companies in the U.S.
grocery industry.
EXHIBIT 7.3 | U.S. Store Count Forecast—Natural and Organic Share versus Total Industry
Data source: Deutsche Bank Research; Food Marketing Institute.
EXHIBIT 7.4 | Selected Operating Data for Comparable Companies
Note: “Other natural & organic” is composed of Sprouts and Fresh Market. “Conventional grocer” is composed of Kroger, Safeway, and SuperValu. “Supercenters and wholesalers” is composed of Wal-Mart and Costco.
Source: Company SEC filings, 2003–2013.
As expected, the segment’s attractiveness sparked increasing competition from both
new entrants and established players from the other competing segments. Wal-Mart,
Kroger, and others launched organic offerings targeted at health-conscious consumers,
often at a much lower price point than similar products at natural grocers. While Whole
Foods, other natural grocers, independent retailers, and food cooperatives were the
primary source of organic products in the 1990s, by 2006, half of the country’s organic
food was sold through conventional supermarkets. By 2014, organic products were
available in over 20,000 natural food stores and nearly three out of four conventional grocery stores.
Even in the face of this competition, Whole Foods maintained a position as the
market leader for the natural and organic industry. As many grocers joined the natural
and organic bandwagon, Whole Foods defended against misrepresentative claims.
Whole Foods had recently introduced a system to rate fresh produce on a number of
criteria, including sustainability and other characteristics important to natural and organic customers. The company’s website listed over 75 substances that were prohibited in all of its products and published additional measures for meat, seafood, and produce selection to ensure consumers had insight into the quality of their food.
EXHIBIT 7.5 | Selected Financial Data for Comparable Companies (in millions of USD, except percentages, ratios, and per share data; financial statement data as of fiscal year 2013)
Data source: Company SEC filings; Deutsche Bank Research; Food Marketing Institute.
Whole Foods was the only U.S. retailer that labeled genetically modified foods, an area
of some concern to health-conscious consumers.
Despite its remarkable growth, the natural and organic industry was not without its
critics. Several academic and government studies had concluded that organic products
were not significantly more nutritious than nonorganic goods and claimed that the
inefficiency of organic production could harm the environment. Moreover, the
continuing lack of official legal definitions of terms such as “natural” arguably made
them effectively meaningless: one botanist argued the segment was “99% marketing and
public perception.”
Whole Foods Market
Whole Foods traced its roots to 1978, when John Mackey and Renee Lawson opened a
small organic grocer called SaferWay in Austin, Texas. Two years later, it partnered
with Craig Weller and Mark Skiles of Clarksville Natural Grocery to launch the first
Whole Foods Market, one of the country’s first natural and organic supermarkets. In
1984, the company began expanding within Texas and in 1988 made its first move
across state lines by acquiring the Louisiana-based Whole Foods Company; the next
year it launched its first store in California. The company went public in 1992
and grew rapidly during the 1990s through both new store openings and
acquisitions. Whole Foods launched its first international store in Canada in 2002 and
acquired a natural supermarket chain in the United Kingdom in 2004. The company
had consistently maintained high growth throughout the new century by increasing same-store sales and expanding its store count; same-store sales grew more than 5% in every year except 2008 and 2009, when the global financial crisis brought America into a
severe recession. By 2013, the company’s growth strategy had moved away from
acquisitions, and management saw improving same-store sales and continued new
openings as its primary growth opportunities. Same-store sales, the most important growth criterion Wall Street used to evaluate retailers, had grown by at least 7% every year since 2010, far above the growth rates of other established grocers even after they began expanding their natural and organic offerings. The company had done all of this with no
debt financing. Looking forward, Whole Foods management planned to eventually
operate over 1,000 stores, up from the 362 it operated as of the end of fiscal year 2013.
Exhibit 7.6 contains store count and same-store sales growth history for Whole Foods
and other industry players.
EXHIBIT 7.6 | Store Growth Statistics for Whole Foods and Other Industry Comparables
Data source: Company SEC filings; Deutsche Bank Research; Food Marketing Institute.
Whole Foods positioned itself as “the leading retailer of natural and organic foods”
and defined its mission as promoting “the vitality and well-being of all individuals by
supplying the highest quality, most wholesome foods available.” The company’s sole
operating segment was its natural and organic markets and nearly 97% of its revenues
came from the United States. By 2013, the average Whole Foods store carried 21,000
SKUs and approximately 30% of sales outside the bakery and prepared-food segments
were organic. Whole Foods reported $551 million in net income on $12.9 billion in
sales in 2013, making it the clear leader of natural and organic grocers even though its
numbers were still rather small compared to Kroger’s net income of $1.5 billion on more than $98 billion in sales.
Facing increased competition in the segment, many analysts believed that Whole
Foods’ biggest challenge was its reputation for high prices. For instance, Whole Foods
charged $2.99 for a pound of organic apples, compared to $1.99 at Sprouts and even
less at Costco. Indeed, many consumers derisively described the store as “Whole
Paycheck,” and the company had historically opened its stores in high-income areas. In
response to this image, the company had already begun marketing private labels (365
and 365 Everyday Value), begun competitive price matching and promotional sales, and
launched a printed value guide (The Whole Deal) that featured coupons, low-budget
recipes, and other tips for price-conscious consumers. Additionally, many Whole
Foods supporters often pointed out that they were willing to pay a premium price for a
premium product.
The Research Report
The recent collapse of Whole Foods’ stock price had caught Short and her team
flat-footed. After all, heated competition in the grocery space was nothing new, even for
Whole Foods, but the company had nonetheless maintained both its favorable margins
and high growth rate for years. Short, along with many other analysts across Wall Street,
had been strongly in the bull camp prior to the recent earnings report. Short’s report from the past month had rated Whole Foods stock a “buy” worth $60 per share. This view was based on expected ongoing gains in revenue growth and EBITDA margins in the coming years (the report built in expectations of
revenue growth of 11% and 14%, respectively, in 2014 and 2015; and EBITDA margins
of 9.4% and 9.8%, respectively, in 2014 and 2015). The main question now facing the
team was whether to adjust its financial forecast for Whole Foods in light of recent
news. Exhibit 7.7 contains a version of the forecast model with the assumptions used
for Short’s previous report. As an additional benchmark, Exhibit 7.8 reports prevailing
capital market information. As Short reconsidered her position, her team fleshed out the
case for both a bearish and bullish view on Whole Foods.
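The top line of Short’s prior model can be reconstructed from the assumptions quoted above (2013 revenue of roughly $12.9 billion, revenue growth of 11% and 14%, and EBITDA margins of 9.4% and 9.8% for 2014 and 2015). The full forecast and the $60 target live in Exhibit 7.7, so this is only an illustrative sketch:

```python
# Roll 2013 revenue forward under the report's growth and margin
# assumptions to get forecast revenue and EBITDA (in $ billions).
revenue = {2013: 12.9}
growth = {2014: 0.11, 2015: 0.14}
ebitda_margin = {2014: 0.094, 2015: 0.098}

ebitda = {}
for year in (2014, 2015):
    revenue[year] = revenue[year - 1] * (1 + growth[year])
    ebitda[year] = revenue[year] * ebitda_margin[year]

# Implies roughly $14.3 billion of revenue and $1.35 billion of EBITDA
# for 2014.
```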
EXHIBIT 7.7 | Deutsche Bank Model (in millions of USD, except per share figures)
Data source: Company financial reports, Deutsche Bank research, and author estimates.
EXHIBIT 7.8 | Demographic and Capital Markets Data
Data sources: Bloomberg and U.S. Census Bureau.
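Mechanically, the report’s assumptions roll forward as in the sketch below. The growth and margin percentages come from Short’s report; the base-year revenue is a hypothetical index value (set to 100), not a figure from the case.

```python
# Sketch of the revenue/EBITDA roll-forward implied by Short's assumptions.
# Growth and margin inputs are from the report; the base revenue is a
# hypothetical index value, not a case figure.

def project(base_revenue, growth_rates, ebitda_margins):
    """Return (revenue, EBITDA) pairs for each forecast year."""
    out = []
    revenue = base_revenue
    for growth, margin in zip(growth_rates, ebitda_margins):
        revenue *= 1 + growth              # apply year-over-year revenue growth
        out.append((revenue, revenue * margin))
    return out

forecast = project(base_revenue=100.0,          # hypothetical index
                   growth_rates=[0.11, 0.14],   # 2014, 2015 revenue growth
                   ebitda_margins=[0.094, 0.098])
for year, (rev, ebitda) in zip((2014, 2015), forecast):
    print(f"{year}: revenue {rev:.2f}, EBITDA {ebitda:.2f}")
```

With the actual 2013 revenue base from Exhibit 7.7 in place of the index, the same roll-forward reproduces the dollar forecast behind the $60 target.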
From the bears’ perspective, the natural and organic market was becoming saturated
as more companies offered organic products at lower cost. This competition would
soon compress Whole Foods’ margins, while at the same time stealing market share and
causing same-store sales to slow or even decline. Several analysts had downgraded
Whole Foods after the company issued its disappointing quarterly results. A report put
out the previous week by another bank noted that 85% of Whole Foods’ stores were
within three miles of a Trader Joe’s—a privately owned natural grocer—up from 44%
in 2005; similar overlap with Sprouts had grown from 3% to 16% and with Fresh
Market from 1% to 14%. Moreover, Whole Foods was running out of dense, highly
educated, high-income neighborhoods to open new stores in, which could either force
the company to rely more on low-price offerings or slow its rapid expansion. Such a
shift in strategy could take the company into uncharted territory and risk its reputation as
a premium brand. Finally, the bears were concerned that the new competitive reality
would cause the market to fundamentally revalue Whole Foods. The company had long
traded at a substantial premium, at times exceeding Kroger’s market value, despite the
latter company’s substantial size advantage (compared to Whole Foods, Kroger had 7.3
times as many stores that generated 7.6 times the sales and 3.6 times as much
EBITDA). Such a premium could only be justified if Whole Foods could continue
growing, both at its existing stores and in terms of its overall footprint. The team noted
that even if it cut the price target from $60 to $40, Whole Foods would still
trade at a premium to its competitors in the conventional grocers’ segment.
The bulls believed the combination of Whole Foods’ leadership in natural and
organic offerings, shifting consumer preferences, and organic food’s small but rapidly
growing market provided ample runway for sustained growth at high margins. As the
clear leader in the segment, Whole Foods was well positioned to benefit from
consumers’ increasingly health-conscious decision making. Moreover, Whole Foods
was not just another retailer that offered natural products; it was the standard bearer and
thought leader for the industry, making it top of mind for anyone interested in the type of
healthy products Whole Foods brought into the mainstream. Its competitors were merely
imitating what Whole Foods pioneered and continued to lead, giving the company a
sustainable advantage. While competition could put downward pressure on some of
Whole Foods’ prices, the company had the stature to maintain its margin targets even
with competitive price cuts by driving sales toward higher-margin categories like
prepared foods, where the grocer could more readily differentiate its products.
Moreover, the company’s high prices gave it more room to adjust prices on goods
where it directly competed with lower-cost retailers; past work by Short’s team had
shown that Whole Foods could match Kroger on 10,000 SKUs (equivalent to all the
non-private-label nonperishable products the company offered) and still maintain nearly
a 35% gross margin, which was within Whole Foods’ target range. Similar analyses
against other competitors also suggested ample room to selectively compete on prices
while maintaining its overall margin targets. Additionally, Whole Foods had
opportunities to reduce operating expenses, which the bulls thought would offset the
decline in revenue from pricing pressure over the next few years. While some analysts were
concerned that Whole Foods’ expansion would take it into lower-income areas that
were distinct from the company’s historical target market, the bulls believed that Whole
Foods’ private-label products offered a chance to provide similar, high-quality products
at a more accessible price point while protecting margins and providing a promising
new avenue for growth. While the bulls acknowledged that Whole Foods traded at a
premium, they thought the company’s higher growth rates, attractive margins, and
position as a market leader provided ample justification for its higher valuation.
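The team’s price-matching analysis amounts to a blended-margin calculation. The sketch below is illustrative only: the case reports the roughly 35% conclusion, while the sales-share and margin inputs here are hypothetical.

```python
# A blended-gross-margin sketch of the kind behind the team's price-matching
# analysis. All inputs are hypothetical; the case reports only the conclusion
# (roughly a 35% blended gross margin after matching Kroger's prices).

def blended_margin(matched_share, matched_margin, other_margin):
    """Overall gross margin when a share of sales is price-matched down."""
    return matched_share * matched_margin + (1 - matched_share) * other_margin

# E.g., if 20% of sales were matched down to a 20% margin while the rest
# earned 39%, the blend would still land in the mid-30s.
m = blended_margin(matched_share=0.20, matched_margin=0.20, other_margin=0.39)
print(f"blended gross margin = {m:.1%}")
```

The point of the calculation is that selective price cuts on a limited slice of SKUs dilute the overall margin far less than an across-the-board cut would.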
Whole Foods’ CEO John Mackey was firmly in the bull camp. While he
acknowledged that Whole Foods’ best-in-the-industry sales per square foot and margins
would attract competition, he claimed: “We are and will be able to compete
successfully” and that the pricing gap between Whole Foods and the competition would
not disappear. More importantly, he claimed that no competitor offered the quality of
products that Whole Foods could, regardless of how these competitors chose to market
their products. Alluding to the lack of a clear legal definition for natural foods, he
alleged that many competitors marketed standard commercial meat and other
perishable goods under misleading labels, and said that Whole Foods could
more aggressively advertise its superior quality to maintain its differentiation from the
competition. Similarly, the company was making investments to improve the customer
experience, already seen by many as one of its stronger points, by shortening wait times
and offering higher-quality self-service food. Behind the scenes, it was reallocating
support personnel on a regional rather than store-by-store basis in an effort to cut costs.
After hinting at several projects in the pipeline that would help Whole Foods thrive in
the new reality of stronger competition, he said that Whole Foods “is not sitting still. We
are still very innovative!”
Page 127
CASE 8 Horniman Horticulture
Bob Brown hummed along to a seasonal carol on the van radio as he made his way over
the dark and icy roads of Amherst County, Virginia. He and his crew had just finished
securing their nursery against unexpectedly chilly weather. It was Christmas Eve
2015, and Bob, the father of four boys ranging in age from 5 to 10, was anxious to be
home. Despite the late hour, he fully anticipated the hoopla that would greet him on his
return and knew that it would be some time before even the youngest would be asleep.
He regretted that the boys’ holiday gifts would not be substantial; money was again tight
this year. Nonetheless, Bob was delighted with what his company had accomplished.
Business was booming. Revenue for 2015 was 15% ahead of 2014, and operating
profits were up even more.
Bob had been brought up to value a strong work ethic. His father had worked his
way up through the ranks to become foreman of a lumber mill in Southwest Virginia. At
a young age, Bob began working for his father at the mill. After earning a degree in
agricultural economics at Virginia Tech, he married Maggie Horniman in 2003. Upon
his return to the mill, Bob was made a supervisor. He excelled at his job and was highly
respected by everyone at the mill. In 2010, facing the financial needs of an expanding
family, he and Maggie began exploring employment alternatives. In late 2012, Maggie’s
father offered to sell the couple his wholesale nursery business, Horniman Horticulture,
near Lynchburg, Virginia. The business and the opportunity to be near Maggie’s family
appealed to both Maggie and Bob. Pooling their savings, the proceeds from the sale of
their house, a minority-business-development grant, and a sizable personal loan from
Maggie’s father, the Browns purchased the business for $999,000. It was agreed that
Bob would run the nursery’s operations and Maggie would oversee its finances.
Page 128
Bob thoroughly enjoyed running his own business and was proud of its growth over
the previous three years. The nursery’s operations filled 52 greenhouses and 40 acres of
productive fields and employed 12 full-time and 15 seasonal employees. Sales
were primarily to retail nurseries throughout the mid-Atlantic region. The
company specialized in such woody shrubs as azaleas, camellias, hydrangeas, and
rhododendrons, but also grew and sold a wide variety of annuals, perennials, and trees.
Over the previous two years, Bob had increased the number of plant species grown at
the nursery by more than 40%.
Bob was a “people person.” His warm personality had endeared him to customers
and employees alike. With Maggie’s help, he had kept a tight rein on costs. The effect on
the business’s profits was obvious, as its profit margin had increased from 3.1% in
2013 to an expected 5.8% in 2015. Bob was confident that the nursery’s overall
prospects were robust.
With Bob running the business full time, Maggie primarily focused on attending to
the needs of her active family. With the help of two clerks, she oversaw the company’s
books. Bob knew that Maggie was concerned about the recent decline in the firm’s cash
balance to below $10,000. Such a cash level was well under her operating target of 8%
of annual revenue. But Maggie had shown determination to maintain financial
responsibility by avoiding bank borrowing and by paying suppliers early enough to
obtain any trade discounts. Her aversion to debt financing stemmed from her concern
about inventory risk. She believed that interest payments might be impossible to meet if
adverse weather wiped out their inventory.
Maggie was happy with the steady margin improvements the business had
experienced. Some of the gains were due to Bob’s response to a growing demand for
more-mature plants. Nurseries were willing to pay premium prices for plants that
delivered “instant landscape,” and Bob was increasingly shifting the product mix to that
line. Maggie had recently prepared what she expected to be the end-of-year financial
summary (Exhibit 8.1). To benchmark the company’s performance, Maggie used
available data for the few publicly traded horticultural producers (Exhibit 8.2).
EXHIBIT 8.1 | Projected Financial Summary for Horniman Horticulture (in thousands of dollars)
Inventory investment was valued at the lower of cost or market. The cost of inventory was determined by accumulating
the costs associated with preparing the plants for sale. Costs that were typically capitalized as inventory included direct
labor, materials (soil, water, containers, stakes, labels, chemicals), scrap, and overhead.
Other current assets included consigned inventory, prepaid expenses, and assets held for sale.
NFAs included land, buildings and improvements, equipment, and software.
Purchases represented the annual amount paid to suppliers.
EXHIBIT 8.2 | Financial Ratio Analysis and Benchmarking
Page 129
Across almost any dimension of profitability and growth, Bob and Maggie agreed
that the business appeared to be strong. They also knew that expectations could change
quickly. Increases in interest rates, for example, could substantially slow market
demand. The company’s margins relied heavily on the hourly wage rate of $10.32,
currently required for H-2A-certified nonimmigrant foreign agricultural workers. There
was some debate within the U.S. Congress about the merits of raising this rate.
Bob was optimistic about the coming year. Given the ongoing strength of the local
economy, he expected to have plenty of demand to continue to grow the business.
Because much of the inventory took two to five years to mature sufficiently to
sell, his top-line expansion efforts had been in the works for some time. Bob
was sure that 2016 would be a banner year, with expected revenue hitting a record 30%
growth rate. In addition, he looked forward to ensuring long-term-growth opportunities
with the expected closing next month on a neighboring 12-acre parcel of farmland. But
for now, it was Christmas Eve, and Bob was looking forward to taking off work for the
entire week. He would enjoy spending time with Maggie and the boys. They had much
to celebrate for 2015 and much to look forward to in 2016.
Benchmark figures were based on 2014 financial ratios of publicly traded horticultural producers.

Page 133
CASE 9 Guna Fibres, Ltd.
Surabhi Kumar, managing director and principal owner of Guna Fibres, Ltd. (Guna),
discovered the problem when she arrived at the parking lot of the company’s plant one
morning in early January 2012. Customers for whom rolls of fiber yarn were intended
had been badgering Kumar to fill their orders in a timely manner, yet trucks that had
been loaded just the night before were being unloaded because the government tax
inspector, stationed at the company’s warehouse, would not clear the trucks for
departure. The excise tax had not been paid; the inspector required a cash payment, but
in seeking to draw funds that morning, Vikram Malik, the bookkeeper, discovered that
the company had overdrawn its bank account—the third time in as many weeks. The
truck drivers, independent contractors, cursed loudly as they unloaded the trucks,
refusing to wait while the company and government settled their accounts.
This shipment would not leave for at least another two days, and angry customers
would no doubt require an explanation. Before granting a loan with which to pay the
excise tax, the branch manager of the All-India Bank & Trust Company had requested a
meeting with Kumar for the next day to discuss Guna’s financial condition and its plans
for restoring the firm’s liquidity.
Kumar told Malik, “This cash problem is most vexing. I don’t understand it. We’re a
very profitable enterprise, yet we seem to have to depend increasingly on the bank. Why
do we need more loans just as our heavy selling season begins? We can’t repeat this.”
Company Background
Guna was founded in 1972 to produce nylon fiber at its only plant in Guna, India, about
500 km south of New Delhi. By using new technology and domestic raw materials, the
firm had developed a steady franchise among dozens of small, local textile weavers. It
supplied synthetic fiber yarns used to weave colorful cloths for making saris,
the traditional women’s dress of India. On average, each sari required eight
yards of cloth. An Indian woman typically would buy three saris a year. With India’s
female population at around 600 million, the demand for saris accounted for more than
14 billion yards of fabric. This demand was currently being supplied entirely from
domestic textile mills that, in turn, filled their yarn requirements from suppliers such as Guna.
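The cited fabric demand follows from simple arithmetic on the figures above:

```python
# Back-of-the-envelope check of the sari fabric demand cited in the case.
female_population = 600_000_000   # India's female population, approx.
saris_per_year = 3                # saris a typical woman buys each year
yards_per_sari = 8                # yards of cloth needed per sari

annual_yards = female_population * saris_per_year * yards_per_sari
print(f"{annual_yards / 1e9:.1f} billion yards")  # 14.4 billion yards
```

That is, 600 million × 3 × 8 = 14.4 billion yards per year, consistent with the “more than 14 billion yards” figure in the case.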
Synthetic Textile Market
The demand for synthetic textiles was stable, with year-to-year growth and predictable
seasonal fluctuations. Unit demand increased with both population and national income.
In addition, India’s population celebrated hundreds of festivals each year, in deference
to a host of deities, at which saris were traditionally worn. The most important festival,
the Diwali celebration in midautumn, caused a seasonal peak in the demand for new
saris, which in turn caused a seasonal peak in demand for nylon textiles in late summer
and early fall. Thus the seasonal demand for nylon yarn would peak in midsummer. Unit
growth in the industry was expected to be 15% per year.
Consumers purchased saris and textiles from cloth merchants located in villages
throughout the country. A cloth merchant usually was an important local figure who was
well known to area residents and who generally granted credit to support consumer
purchases. Merchants maintained relatively low levels of inventory and built stocks of
goods only shortly before and during the peak selling season.
Competition was keen among those merchants’ suppliers (the many small textile-weaving
mills) and was affected by price, service, and the credit they could grant to the
merchants. The mills essentially produced to order, building their inventories of woven
cloth shortly in advance of the peak selling season and keeping only maintenance stocks
at other times of the year.
The yarn manufacturers competed for the business of the mills through responsive
service and credit. The suppliers to the yarn manufacturers provided little or no trade
credit. Being near the origin of the textile chain in India, the yarn manufacturers
essentially banked the downstream activities of the industry.
Production and Distribution System
Thin profit margins had prompted Kumar to adopt policies against overproduction and
overstocking, practices that would have required Guna to carry inventories through the slack selling season.
She had adopted a plan of seasonal production, which meant that the yarn plant would
operate at peak capacity for two months of the year and at modest levels the rest of the
year. That policy imposed an annual ritual of hirings and layoffs.
To help ensure prompt service, Guna maintained two distribution warehouses, but
getting the finished yarn quickly from the factory in Guna to the customers was a
challenge. The roads were narrow and mostly in poor repair. A truck was often delayed
negotiating the trip between Kolkata and Guna, a distance of about 730 km. Journeys
were slow and dangerous, and accidents were frequent.
Company Performance
Guna had experienced consistent growth and profitability (see Exhibit 9.1 for firm’s
recent financial statements). In 2011, sales had grown at an impressive rate of 18%.
Recent profits were INR25 million, down from INR36 million in 2010. Kumar
expected Guna’s growth to continue with gross sales reaching more than INR900
million in 2012 (Exhibit 9.2).
EXHIBIT 9.1 | Guna’s Annual Income Statements (in millions of Indian rupees)
Source: All exhibits created by case writer.
EXHIBIT 9.2 | Guna’s Monthly Sales, 2011 Actual and 2012 Forecast (in millions of Indian rupees)
After the episode in the parking lot, Kumar and her bookkeeper went to her office to
analyze the situation. She pushed aside two items on her desk to which she had intended
to devote her morning: a message from the transportation manager regarding a possible
change in the inventory policy (Exhibit 9.3) and a proposal from the operations manager
for a scheme of level annual production (Exhibit 9.4).
EXHIBIT 9.3 | Message from Transportation Manager
To prepare a forecast on a business-as-usual basis, Kumar and Malik agreed on
various parameters. Cost of goods sold would run at 73.7% of gross sales—a figure that
was up from recent years because of increasing price competition. Annual operating
expenses would be about 6% of gross annual sales. Operating expenses were up from
recent years to include the addition of a quality-control department, two new sales
agents, and four young nephews in whom Kumar hoped to build allegiance to the family
business. Kumar had long felt pressure to hire family members to company management.
The four new fellows would join 10 other family members on her team. The
company’s income tax, at a rate of 30%, accrued monthly; positive balances were paid
quarterly in March, June, September, and December. The excise tax (at 15% of sales)
was different from the income tax and was collected at the factory gate as trucks left to
make deliveries to customers and the regional warehouses. Kumar expected to pay
dividends of INR5.0 million per quarter to the 11 members of her extended family who
owned the entirety of the firm’s equity. For years, Guna had paid substantial dividends.
EXHIBIT 9.4 | Message from Operations Manager
The Kumar family believed that excess funds left in the firm were at greater risk than if
the funds were returned to shareholders.
Accounts receivable collections in any given month had been running steadily at the
rate of 48 days, comprising 40% of the previous month’s gross sales plus 60% of the
gross sales from the month before that. The cost of the raw materials for Guna’s yarn
production ran about 55% of the gross sale price. To ensure sufficient raw material on
hand, it was Guna’s practice each month to purchase the amount of raw materials
expected to be sold in two months. The suppliers Guna used had little ability to provide
credit such that accounts payable were generally paid within two weeks. Monthly direct
labor and other direct costs associated with yarn manufacturing were equivalent to
about 34% of purchases in the previous month. Accounts payable ran at about half of
the month’s purchases. As a matter of policy, Kumar wanted to see a cash balance of at
least INR7.5 million. To sustain company expansion, capital expenditures were
anticipated to run at INR3.5 million per quarter.
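The policy parameters above translate directly into the month-by-month cash logic Malik would build. The sketch below uses only the stated percentages; the flat sales series is a hypothetical placeholder (the actual monthly forecast is in Exhibit 9.2).

```python
# A minimal sketch of the monthly cash items behind Malik's forecast,
# using the policy parameters stated in the case. The sales list used
# in the example is a hypothetical placeholder.

RAW_MATERIAL_PCT = 0.55   # raw materials run ~55% of gross sale price
DIRECT_COST_PCT = 0.34    # direct labor etc.: 34% of prior month's purchases
EXCISE_PCT = 0.15         # excise tax, collected at the factory gate

def month_cash_items(sales, t):
    """Key cash items for month t, given a list of monthly gross sales.

    Requires sales for months t-2 through t+2, so t must sit at least
    two positions from either end of the list.
    """
    # Collections run at 48 days: 40% of last month's sales plus 60% of
    # sales two months back (0.40*30 + 0.60*60 = 48 days).
    collections = 0.40 * sales[t - 1] + 0.60 * sales[t - 2]
    # Each month Guna buys the raw material it expects to sell in two months.
    purchases = RAW_MATERIAL_PCT * sales[t + 2]
    # Last month's purchases were sized to sales expected one month out.
    direct_costs = DIRECT_COST_PCT * (RAW_MATERIAL_PCT * sales[t + 1])
    excise_tax = EXCISE_PCT * sales[t]
    return {"collections": collections, "purchases": purchases,
            "direct_costs": direct_costs, "excise_tax": excise_tax}

# Hypothetical flat sales of INR60 million per month, for illustration only.
items = month_cash_items([60.0] * 6, t=2)
print(items)
```

Because purchases lead sales by two months, a seasonal sales peak forces cash out the door well before collections arrive, which is exactly the squeeze Kumar faced in the parking lot.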
Guna had a line of credit at the All-India Bank & Trust Company, where it also
maintained its cash balances. All-India’s short-term interest rate was currently 14.5%,
but Kumar was worried that inflation and interest rates might rise in the coming year. By
terms of the bank, the seasonal line of credit had to be reduced to a zero balance for at
least 30 days each year. The usual cleanup month had been October, but last year Guna
had failed to make a full repayment at that time. Only after strong assurances by Kumar
that she would clean up the loan in November or December had the lending officer
reluctantly agreed to waive the cleanup requirement in October. Unfortunately, the credit
needs of Guna did not abate as rapidly as expected in November and December, and
although his protests increased each month, the lending officer had agreed to meet
Guna’s cash requirements with loans. Now he was refusing to extend any more seasonal
credit until Kumar presented a reasonable financial plan for the company that
demonstrated its ability to clean up the loan by the end of 2012.
Financial Forecast
With some experience in financial modeling, Malik used the agreed-upon assumptions to
build out a monthly forecast of Guna’s financial statements (Exhibit 9.5). To summarize
the seasonal pattern of the model, Malik handed Kumar a graph showing the projected
monthly sales and key balance sheet accounts (Exhibit 9.6). After studying the forecasts
for a few moments, Kumar expostulated:
EXHIBIT 9.5 | Monthly Financial Statement Forecast (in millions of Indian rupees)
EXHIBIT 9.6 | Forecast of Accounts by Month
Page 137
The loan officer will never accept this forecast as a basis for more credit. We
need a new plan, and fast. Maintaining this loan is critical for us to scale up for the
most important part of our business season. Please go over these assumptions in
detail and look for any opportunities to improve our debt position.
Then looking toward the two proposals she had pushed aside earlier, she muttered,
“Perhaps these proposals will help.”
Page 143
PART 3 Estimating the Cost of Capital
Page 145
—W. Todd Brotherson, Kenneth M. Eades, Robert S. Harris, and Robert
C. Higgins
“Best Practices” in Estimating the Cost of
Capital: An Update
“Cost of capital is so critical to things we do, and CAPM has so many holes in it—and the books don’t tell you
which numbers to use . . . so at the end of the day, you wonder a bit if you’ve got a solid number. Am I fooling
myself with this well-disciplined, quantifiable number?”
—A Corporate Survey Participant
Theories on cost of capital have been around for decades. Unfortunately for practice,
the academic discussions typically stop at a high level of generality, leaving important
questions for application unanswered. Recent upheavals in financial markets have only
made the practitioner’s task more difficult. This paper updates our earlier work on the
state of the art in cost of capital estimation to identify current best practices that emerge.
Unlike many broadly distributed multiple choice or fill-in-the-blank surveys, our
findings are based on conversations with practitioners at highly regarded corporations
and leading financial advisors. We also report on advice from best-selling textbooks
and trade books. We find close alignment among all these groups on use of common
theoretical frameworks to estimate the cost of capital and on many aspects of estimation.
We find large variation, however, for the joint choices of the risk-free rate of return,
beta and the equity market risk premium, as well as for the adjustment of capital costs
for specific investment risk. When compared to our 1998 publication, we find that
practice has changed somewhat since the late 1990s, but there is still no consensus on
important practical issues. The paper ends with a synthesis of messages from best
practice companies and financial advisors and our conclusions.
Over the years, theoretical developments in finance converged into compelling
recommendations about the cost of capital to a corporation. By the early 1990s, a
consensus had emerged prompting descriptions such as “traditional . . . textbook . . .
appropriate,” “theoretically correct,” “a useful rule of thumb” and a “good vehicle.” In
prior work with Bob Bruner, we reached out to highly regarded firms and financial
advisors to see how they dealt with the many issues of implementation.1 Fifteen years
have passed since our first study. We revisit the issues and see what now constitutes
best practice and what has changed in both academic recommendations and in practice.
We present evidence on how some of the most financially sophisticated companies
and financial advisors estimate capital costs. This evidence is valuable in several
respects. First, it identifies the most important ambiguities in the application of cost of
capital theory, setting the stage for productive debate and research on their resolution.
Second, it helps interested companies to benchmark their cost of capital estimation
practices against best-practice peers. Third, the evidence sheds light on the accuracy
with which capital costs can be reasonably estimated, enabling executives to use the
estimates more wisely in their decision-making. Fourth, it enables teachers to answer
the inevitable question, “But how do companies really estimate their cost of capital?”
The paper is part of a lengthy tradition of surveys of industry practice. For instance,
Burns and Walker (2009) examine a large set of surveys conducted over the last quarter
century into how U.S. companies make capital budgeting decisions. They find that
estimating the weighted average cost of capital is the primary approach to selecting
hurdle rates. More recently, Jacobs and Shivdasani (2012) report on a large-scale
survey of how financial practitioners implement cost of capital estimation. Our
approach differs from most papers in several important respects. Typically studies are
based on written, closed-end surveys sent electronically to a large sample of firms,
often covering a wide array of topics, and commonly using multiple choice or
fill-in-the-blank questions. Such an approach typically yields low response rates and provides
limited opportunity to explore subtleties of the topic. For instance, Jacobs and
Shivdasani (2012) provide useful insights based on the Association for Finance
Professionals (AFP) cost of capital survey. While the survey had 309 respondents, AFP
(2011, page 18) reports this was a response rate of about 7% based on its membership
companies. In contrast, we report the result of personal telephone interviews with
practitioners from a carefully chosen group of leading corporations and financial
advisors. Another important difference is that many existing papers focus on how well
accepted modern financial techniques are among practitioners, while we are interested
in those areas of cost of capital estimation where finance theory is silent or ambiguous
and practitioners are left to their own devices.
The following section gives a brief overview of the weighted-average cost of
capital. The research approach and sample selection are discussed in Section II. Section
III reports the general survey results. Key points of disparity are reviewed in Section IV.
Section V discusses further survey results on risk adjustment to a baseline cost
of capital, and Section VI highlights some institutional and market forces
affecting cost of capital estimation. Section VII offers conclusions and implications for
the financial practitioner.
I. The Weighted-Average Cost of Capital
A key insight from finance theory is that any use of capital imposes an opportunity cost
on investors; namely, funds are diverted from earning a return on the next best equal-risk
investment. Since investors have access to a host of financial market opportunities,
corporate uses of capital must be benchmarked against these capital market alternatives.
The cost of capital provides this benchmark. Unless a firm can earn in excess of its cost
of capital on an average-risk investment, it will not create economic profit or value for
investors.
A standard means of expressing a company’s cost of capital is the weighted-average
of the cost of individual sources of capital employed. In symbols, a company’s
weighted-average cost of capital (or WACC) is:
WACC = (Wdebt × Kdebt × (1 − t)) + (Wequity × Kequity)
where:
K = component cost of capital.
W = weight of each component as percent of total capital.
t = marginal corporate tax rate.
For simplicity, this formula includes only two sources of capital; it can be easily
expanded to include other sources as well.
Finance theory offers several important observations when estimating a company’s
WACC. First, the capital costs appearing in the equation should be current costs
reflecting current financial market conditions, not historical, sunk costs. In essence, the
costs should equal the investors’ anticipated internal rate of return on future cash flows
associated with each form of capital. Second, the weights appearing in the equation
should be market weights, not historical weights based on often arbitrary, out-of-date
book values. Third, the cost of debt should be after corporate tax, reflecting the benefits
of the tax deductibility of interest.
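Numerically, the two-source expression is a one-line computation. The sketch below follows the weighted-average formula with an after-tax debt cost; the inputs are illustrative, not figures from the paper.

```python
# A minimal numeric sketch of the two-source WACC described above.
# All inputs are illustrative, not from the paper.

def wacc(w_debt, k_debt, w_equity, k_equity, tax_rate):
    """Weighted-average cost of capital with an after-tax cost of debt."""
    assert abs(w_debt + w_equity - 1.0) < 1e-9, "weights must sum to 1"
    return w_debt * k_debt * (1 - tax_rate) + w_equity * k_equity

# Example: 30% debt at a 6% pre-tax cost, 70% equity at 10%, 35% tax rate.
rate = wacc(w_debt=0.30, k_debt=0.06, w_equity=0.70, k_equity=0.10, tax_rate=0.35)
print(f"WACC = {rate:.2%}")
```

Note that the weights here should be market-value weights and the component costs current market costs, per the observations above; the hard part in practice is estimating k_equity, which is the focus of the survey.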
Despite the guidance provided by finance theory, use of the weighted-average
expression to estimate a company’s cost of capital still confronts the practitioner with a
number of difficult choices. As our survey results demonstrate, the most nettlesome
component of WACC estimation is the cost of equity capital; for unlike readily available
yields in bond markets, no observable counterpart exists for equities. This
forces practitioners to rely on more abstract and indirect methods to estimate
the cost of equity capital.
II. Sample Selection
This paper describes the results of conversations with leading practitioners. Believing
that the complexity of the subject does not lend itself to a written questionnaire, we
wanted to solicit an explanation of each firm’s approach told in the practitioner’s own
words. Though our telephone interviews were guided by a series of questions, the
conversations were sufficiently open-ended to reveal many subtle differences in
practice.
Since our focus is on the gaps between theory and application rather than on average
or typical practice, we aimed to sample practitioners who were leaders in the field. We
began by searching for a sample of corporations (rather than investors or financial
advisors) in the belief that they had ample motivation to compute WACC carefully and
to resolve many of the estimation issues themselves. Several publications offer lists of
firms that are well-regarded in finance; of these, we chose Fortune’s 2012 listing of
Most Admired Companies. Working with the Hay Group, Fortune creates what it terms
“the definitive report card on corporate reputations.” Hay provided us with a listing of
companies ranked by the criterion “wise use of assets” within industry. To create our
sample we only used companies ranked first or second in their industry. We could not
obtain raw scores that would allow comparisons across industries.
The 2012 Fortune rankings are based on a survey of 698 companies, each of which
is among the largest in its industry. For each of 58 industry lists, Hay asks executives,
directors, and analysts to rate companies in their own industry on a set of criteria.
Starting with the top two ranked firms in each industry, we eliminated companies
headquartered outside North America (eight excluded). We also eliminated the one
firm classified as a regulated utility (on the grounds that regulatory mandates create
unique issues for capital budgeting and cost of capital estimation) and the seven firms in
financial services (inclusive of insurance, banking, securities, and real estate). Forty-seven
companies satisfied our screens. Of these, 19 firms agreed to be interviewed and
are included in the sample given in Table I. Despite multiple concerted attempts to
contact appropriate personnel at each company, our response rate is lower than that of
Bruner, Eades, Harris, and Higgins (1998) but still much higher than typical cost of
capital surveys. We suspect that increases in the number of surveys and in the demands
on executives’ time influence response rates now versus the late 1990s.
We approached corporate officers first with an email explaining our research. Our
request was to interview the individual in charge of estimating the firm’s WACC. We
then arranged phone conversations. We promised our interviewees that, in preparing a
report on our findings, we would not identify the practices of any particular company by
name—we have respected this promise in the presentation that follows.
TABLE I. Three Survey Samples
In the interest of assessing the practices of the broader community of finance
practitioners, we surveyed two other samples:
Financial advisors. Using a “league table” of merger and acquisition advisors from
Thomson’s Securities Data Commission (SDC) Mergers and Acquisitions database,
we drew a sample of the most active advisors based on aggregate deal volume in
M&A in the United States for 2011. Of the top twelve advisors, one firm chose not to
participate in the survey, giving us a sample of eleven. We applied approximately the
same set of questions to representatives of these firms’ M&A departments. Financial
advisors face a variety of different pressures regarding the cost of capital. When an
advisor represents the sell side of an M&A deal, the client wants a high valuation, but
the reverse may be true when the advisor acts on the buy side. In addition, banks may
be engaged by either side of the deal to provide a fairness opinion about the
transaction. We wondered whether the pressures of these various roles might result in
financial advisors using assumptions and methodologies that produce different cost of
capital estimates than those made by operating companies. This proved not to be the case.
Textbooks and trade books. In parallel with our prior study, we focus on a handful of
widely used books. From a leading textbook publisher we obtained the names of the
four best-selling, graduate-level textbooks in corporate finance in 2011. In addition, we
consulted two popular trade books that discuss estimation of the cost of capital in detail.
III. Survey Findings
Table II summarizes responses to our questions and shows that the estimation
approaches are broadly similar across the three samples in several dimensions:
Discounted Cash Flow (DCF) is the dominant investment evaluation technique.
WACC is the dominant discount rate used in DCF analyses.
Weights are based on market-value, not book-value, mixes of debt and equity.6
The after-tax cost of debt is predominantly based on marginal pretax costs and
marginal tax rates.7
The Capital Asset Pricing Model (CAPM) is the dominant model for estimating the
cost of equity. Despite shortcomings of the CAPM, our companies and financial
advisors adopt this approach. In fact, across both companies and financial advisors,
only one respondent did not use the CAPM.8
TABLE II. General Survey Results
These practices parallel many of the findings from our earlier survey. First, the
“best practice” firms show considerable alignment on many elements of practice.
Second, they base their practice on financial economic models rather than on rules of
thumb or arbitrary decision rules. Third, the financial frameworks offered by leading
texts and trade books are fundamentally unchanged from our earlier survey.
On the other hand, disagreements exist within and among groups on matters
of application, especially when it comes to using the CAPM to estimate the
cost of equity. The CAPM states that the required return (K) on any asset can be
expressed as:
K = Rf + β(Rm − Rf)
where:
Rf = interest rate available on a risk-free asset.
Rm = return required to attract investors to hold the broad market portfolio of
risky assets.
β = the relative risk of the particular asset.
According to the CAPM, then, the cost of equity, K, for a company depends on three
components: the return on risk-free assets (Rf), the stock’s equity “beta,” which
measures the risk of the company’s stock relative to other risky assets (β = 1.0 is
average risk), and the market risk premium (Rm − Rf) necessary to entice investors to
hold risky assets generally versus risk-free instruments. In theory, each of these
components must be a forward-looking estimate. Our survey results show substantial
disagreements, especially in terms of estimating the market risk premium.
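The CAPM arithmetic itself is simple once the three components are chosen. The sketch below is illustrative only; the inputs are hypothetical and not drawn from any survey respondent.

```python
def capm_cost_of_equity(risk_free_rate, beta, market_risk_premium):
    """Cost of equity under the CAPM: K = Rf + beta * (Rm - Rf)."""
    return risk_free_rate + beta * market_risk_premium

# Hypothetical inputs: a 2% long-term Treasury yield, an average-risk
# stock (beta = 1.0), and a 6.5% market risk premium.
cost_of_equity = capm_cost_of_equity(0.02, 1.0, 0.065)  # 0.085, i.e., 8.5%
```

As the survey results show, the model is uncontroversial; the disagreements are over how each of the three inputs is estimated.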
A. The Risk-Free Rate of Return
As originally derived, the CAPM is a single-period model, so the question of which
interest rate best represents the risk-free rate never arises. In a multiperiod world
typically characterized by upward-sloping yield curves, the practitioner must choose.
The difference between realized returns on short-term U.S. Treasury-bills and long-term
T-bonds has averaged about 150 basis points over the long run, so the choice of a risk-free
rate can have a material effect on the cost of equity and WACC.
Treasury bill yields are more consistent with the CAPM as originally derived and
reflect risk-free returns in the sense that T-bill investors avoid material loss in value
from interest rate movements. However, long-term bond yields more closely reflect the
default-free holding period returns available on long-lived investments and thus more
closely mirror the types of investments made by companies.
Our survey results reveal a strong preference on the part of practitioners for long-term
bond yields. As shown in Table II (Question 9), all the corporations and financial
advisors use Treasury bond yields for maturities of 10 years or greater, with the 10-year
rate being the most popular choice. Many corporations said they matched the term of the
risk-free rate to the tenor of the investment. In contrast, a third of the sample books
suggested subtracting a term premium from long-term rates to approximate a shorter
term yield. Half of the books recommended long-term rates but were not precise on the
choice of maturity.
Because the yield curve is ordinarily relatively flat beyond ten years, the
choice of which particular long-term yield to use often is not a critical one.
However, at the time of our survey, Treasury markets did not display these “normal”
conditions in the wake of the financial crisis and expansionary monetary policy. In the
year we conducted our survey (2012), the spread between 10- and 30-year Treasury
yields averaged 112 basis points. While the text and trade books do not directly
address the question of how to deal with such markets, it is clear that some practitioners
are looking for ways to “normalize” what they see as unusual circumstances in the
government bond markets. For instance, 21% of the corporations and 36% of the
financial advisors resort to some historical average of interest rates rather than the spot
rate in the markets. Such an averaging practice is at odds with finance theory in which
investors see the current market rate as the relevant opportunity. We return to this issue
later in the paper.
B. Beta Estimates
Finance theory calls for a forward-looking beta, one reflecting investors’ uncertainty
about the future cash flows to equity. Because forward-looking betas are unobservable,
practitioners are forced to rely on proxies of various kinds. Often this involves using
beta estimates derived from historical data.
The usual methodology is to estimate beta as the slope coefficient of the market
model of returns:
Rit = αi + βi Rmt
where:
Rit = return on stock i in time period (e.g., day, week, month) t.
Rmt = return on the market portfolio in period t.
αi = regression constant for stock i.
βi = beta for stock i.
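A minimal sketch of this regression in pure Python: the slope is cov(Ri, Rm)/var(Rm). The return series below are invented for illustration; real practice involves the data choices discussed next.

```python
def estimate_beta(stock_returns, market_returns):
    """OLS estimates for the market model Rit = alpha_i + beta_i * Rmt.

    beta_i is cov(Ri, Rm) / var(Rm); alpha_i is the intercept."""
    n = len(market_returns)
    mean_i = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov_im = sum((ri - mean_i) * (rm - mean_m)
                 for ri, rm in zip(stock_returns, market_returns)) / (n - 1)
    var_m = sum((rm - mean_m) ** 2 for rm in market_returns) / (n - 1)
    beta = cov_im / var_m
    alpha = mean_i - beta * mean_m
    return alpha, beta

# Illustrative series: the stock exactly doubles every market move,
# so the estimated beta is 2.0 by construction.
market = [0.010, -0.020, 0.030, 0.020, -0.010]
stock = [2 * r for r in market]
alpha, beta = estimate_beta(stock, market)
```

The practical compromises the paper lists (observation window, return frequency, market index) all enter through the choice of the two input series.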
In addition to relying on historical data, use of this equation to estimate beta
requires a number of practical compromises, each of which can materially affect the
results. For instance, increasing the number of time periods used in the estimation may
improve the statistical reliability of the estimate but risks including stale, irrelevant
information. Similarly, shortening the observation period from monthly to weekly, or
even daily, increases the size of the sample but may yield observations that are not
normally distributed and may introduce unwanted random noise. A third compromise
involves choice of the market index. Theory dictates that Rm is the return on the “market
portfolio,” an unobservable portfolio consisting of all risky assets, including human
capital and other nontraded assets, in proportion to their importance in world
wealth. Beta providers use a variety of stock market indices as proxies for the
market portfolio on the argument that stock markets trade claims on a sufficiently wide
array of assets to be adequate surrogates for the unobservable market portfolio.
Another approach is to “predict” beta based on underlying characteristics of a
company. According to Barra, the “predicted beta, the beta Barra derives from its risk
model, is a forecast of a stock’s sensitivity to the market. It is also known as
fundamental beta because it is derived from fundamental risk factors . . . such as size,
yield, and volatility—plus industry exposure. Because we re-estimate these risk factors
daily, the predicted beta reflects changes in the company’s underlying risk structure in a
timely manner.” Table III shows the compromises underlying the beta estimates of
three prominent providers (Bloomberg, Value Line and Barra) and their combined effect
on the beta estimates of our sample companies. The mean beta of our sample companies
is similar across all providers: 0.96 from Bloomberg, 0.93 from Value Line, and
0.91 from Barra. On the other hand, the averages mask differences for individual
companies. Table IV provides a complete list of sample betas by provider.
TABLE III. Compromises Underlying Beta Estimates and Their Effect on Estimated Betas of Sample
Companies
*With the Bloomberg service it is possible to estimate a beta over many differing time periods, market indices, and
smoothed or unadjusted. The figures presented here represent the baseline or default estimation approach used if one
does not specify other approaches. Value Line states that “the Beta coefficient is derived from a regression analysis of
the relationship between weekly percentage changes in the price of a stock and weekly percentage changes in the
NYSE Index over a period of five years. In the case of shorter price histories, a smaller time period is used, but two years
is the minimum. The betas are adjusted for their long-term tendency to converge toward 1.00.”
TABLE IV. Betas for Corporate Survey Respondents
Value Line betas are as of January 11, 2013.
Bloomberg betas are as of January 22, 2013. The adjusted beta is calculated as 2/3 times the raw beta plus 1/3 times 1.0.
Barra betas are from January 2013. Source: Barra. The Barra data contained herein are the property of Barra, Inc. Barra,
its affiliates, and information providers make no warranties with respect to any such data. The Barra data contained
herein are used under license and may not be further used, distributed, or disseminated without the express written
consent of Barra.
Over half of the corporations in our sample (Table II, Question 10) cite Bloomberg
as the source for their beta estimates, and some of the 37% that say they calculate their
own may use Bloomberg data and programs. Another 26% of the companies cite some other
published source, and 26% explicitly compare a number of beta sources before making a
final choice. Among financial advisors, there is strong reliance on fundamental betas
with 89% of the advisors using Barra as a source. Many advisors (44%) also
use Bloomberg. About a third of both companies and financial advisors
mentioned levering and unlevering betas even though we did not ask them for this
information. And in response to a question about using data from other firms (Table II,
Question 12), the majority of companies and all advisors take advantage of data on
comparable companies to inform their estimates of beta and capital costs.
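The paper does not specify which levering formula respondents use. One common textbook version, the tax-adjusted Hamada relation, is sketched here as an assumed illustration of how capital-structure effects are stripped from a comparable's beta and reapplied at a target mix:

```python
def unlever_beta(levered_beta, debt_to_equity, tax_rate):
    """Back out an asset (unlevered) beta from a comparable firm's
    equity beta, using the tax-adjusted Hamada relation."""
    return levered_beta / (1.0 + (1.0 - tax_rate) * debt_to_equity)

def relever_beta(asset_beta, debt_to_equity, tax_rate):
    """Re-lever an asset beta to a target capital structure."""
    return asset_beta * (1.0 + (1.0 - tax_rate) * debt_to_equity)

# Hypothetical comparable: equity beta 1.30 at D/E of 0.50, 35% tax
# rate, re-levered to a target D/E of 0.25.
asset_beta = unlever_beta(1.30, 0.50, 0.35)
target_beta = relever_beta(asset_beta, 0.25, 0.35)
```

Unlevering and re-levering at the same D/E recovers the original beta, which is a useful sanity check in practice.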
Within these broad categories, the comments in Table V indicate that a number of
survey participants use more pragmatic approaches that combine published beta
estimates or adjust published estimates in various heuristic ways.
TABLE V. Choice of Beta
C. Equity Market Risk Premium
This topic prompted the greatest variety of responses among survey participants.
Finance theory says the equity market risk premium should equal the excess return
expected by investors on the market portfolio relative to riskless assets. How one
measures expected future returns on the market portfolio and on riskless assets is a
problem left to practitioners. Because expected future returns are unobservable, past
surveys of practice have routinely revealed a wide array of choices for the market risk
premium. For instance, Fernandez, Aguirreamalloa, and Corres (2011) survey
professors, analysts and companies on what they use as a U.S. market risk premium. Of
those who reported a reference to justify their choice, the single most mentioned source
was Ibbotson/Morningstar, but even among those citing this reference, there was a wide
dispersion of market risk premium estimates used. Carleton and Lakonishok (1985)
demonstrate empirically some of the problems with such historical premiums when they
are disaggregated for different time periods or groups of firms. Dimson, Marsh, and
Staunton (2011a, 2011b) discuss evidence from an array of markets around the globe.
How do our best practice companies cope? Among financial advisors, 73%
extrapolate historical returns into the future on the presumption that past experience
heavily conditions future expectations. Among companies, 43% cite historical data and
another 16% use various sources inclusive of historical data. Unlike the results of our
earlier study (1998) in which historical returns were used by all companies and
advisors, we found a number of respondents (18% of financial advisors and 32% of
companies) using forward-looking estimates of the market risk premium. The advisors
cited versions of the dividend discount model. The companies used a variety of methods
including Bloomberg’s version of the dividend discount model.
Even when historical returns are used to estimate the market risk premium,
a host of differences emerge including what data to use and what method to use
for averaging. For instance, a leading textbook cites U.S. historical data back to 1900
from Dimson, Marsh, and Staunton (as cited by Brealey, Myers, and Allen (2011), p. 158), while
73% of our financial advisors cite Ibbotson data which traces U.S. history back to 1926.
Among companies, only 32% explicitly cite Ibbotson as their main reference for data
and 11% cite other historical sources.
Even when using the same data, another chief difference lies in the use of arithmetic
versus geometric averages. The arithmetic mean return is the simple average of past
returns. Assuming the distribution of returns is stable over time and that periodic returns
are independent of one another, the arithmetic return is the best estimator of expected
return. The geometric mean return is the internal rate of return between a single outlay
and one or more future receipts. It measures the compound rate of return investors
earned over past periods. It accurately portrays historical investment experience. Unless
returns are the same each time period, the geometric average will always be less than
the arithmetic average and the gap widens as returns become more volatile.
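The contrast between the two averaging methods is easy to see in a short sketch; the two-period return history below is invented purely to show the gap:

```python
def arithmetic_mean(returns):
    """Simple average of periodic returns."""
    return sum(returns) / len(returns)

def geometric_mean(returns):
    """Compound average: the constant per-period return yielding the
    same terminal wealth as the actual return series."""
    wealth = 1.0
    for r in returns:
        wealth *= 1.0 + r
    return wealth ** (1.0 / len(returns)) - 1.0

returns = [0.30, -0.10]        # a volatile two-period history
arithmetic_mean(returns)        # 0.10
geometric_mean(returns)         # about 0.0817 -- always the smaller
```

With identical returns every period the two means coincide; the wedge grows with volatility, which is exactly why the choice matters for long historical series.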
Based on Ibbotson data (2012) from 1926 to 2011, Table VI illustrates the possible
range of equity market risk premiums depending on use of the geometric as opposed to
the arithmetic mean equity return and on use of realized returns on T-bills as opposed to
T-bonds. Even wider variations in market risk premiums can arise when one changes the
historical period for averaging or decides to use data from outside the United States.
For instance, Dimson, Marsh, and Staunton (2011a) provide estimates of historical
equity risk premiums for a number of different countries since 1900.
Since our respondents all used longer-term Treasuries as their risk-free rate, the
right-most column of Table VI most closely fits that choice. Even when respondents
explicitly referenced the arithmetic or geometric mean of historical returns, many
rounded the figure or used other data to adjust their final choice. The net result is a wide
array of choices for the market risk premium. For respondents who provided a
numerical figure (Table II, Question 11), the average for companies was
6.49%, very close to the average of 6.6% from financial advisors. These
averages mask considerable variation in both groups. We had responses as low as 4%
and as high as 9%. The 4% value is in line with the Ibbotson (2012) historical figures
using the geometric mean spread between stocks and long-term government bonds. The
upper end of 9% comes from forward-looking estimates done in 2012 when U.S.
financial markets reflected a very low interest rate environment. We add a word of
caution in how to interpret some of the differences we found in the market risk premium
since the ultimate cost of capital calculation depends on the joint choice of a risk
premium, the risk-free rate and beta. We return to this issue when we illustrate potential
differences in the cost of capital.
As shown in Table VII, comments in our interviews exemplify the diversity among
survey participants. This variety of practice displays the challenge of application since
theory calls for a forward-looking risk premium, one that reflects current market
sentiment and may change with market conditions. What is clear is that there is
substantial variation as practitioners try to operationalize the theoretical call for a
market risk premium. And, as is clear in some of the respondent comments, volatility in
markets has made the challenge even harder. Compared to our earlier study (1998) in
which respondents almost always applied historical averages, current practice shows a
wider variation in approach and considerable judgment. This situation points the way
for valuable future research on the market risk premium.
TABLE VI. Historical Averages to Estimate the Equity Market Risk Premium, (Rm − Rf)
TABLE VII. Choice of the Market Risk Premium
IV. The Impact of Various Assumptions for Using CAPM
To illustrate the effect of these various practices on estimated capital costs, we
mechanically selected the two sample companies with the largest and smallest range of
beta estimates in Table IV. We estimated the hypothetical cost of equity and WACC for
Target Corporation, which has the widest range in estimated betas, and for UPS,
which has the smallest range. Our estimates are “hypothetical” in that we do
not adopt any information supplied to us by the companies and financial advisors but
rather apply a range of approaches based on publicly available information as of early
2013. Table VIII gives Target’s estimated costs of equity and WACCs under various
combinations of risk-free rate, beta, and market risk premium. Three clusters of
possible practice are illustrated, each in turn using betas as provided by Bloomberg,
Value Line, and Barra. The first approach, adopted by a number of our respondents, uses
a 10-year T-bond yield and a risk premium of 6.5% (roughly the average response from
both companies and financial advisors). The second approach also uses a 6.5% risk
premium but moves to a 30-year rate to proxy the long-term interest rate. The third
method uses the ten-year Treasury rate but a risk premium of 9%, consistent with what
some adopters of a forward-looking risk premium applied. We repeated these general
procedures for UPS.
The resulting ranges of estimated WACCs for the two firms are as follows:
TABLE VIII. Variations in Cost of Capital (WACC) Estimates for Target Corporation Using Different
Methods of Implementing the Capital Asset Pricing Model
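The mechanics behind such a table can be sketched as follows. All inputs here are hypothetical (they are not Target's or UPS's actual figures); the three (risk-free rate, premium) pairs mirror the three clusters of practice described above.

```python
def capm_cost_of_equity(rf, beta, mrp):
    """CAPM: K = Rf + beta * (Rm - Rf)."""
    return rf + beta * mrp

def wacc(equity_value, debt_value, cost_of_equity,
         pretax_cost_of_debt, tax_rate):
    """Market-value-weighted average of the cost of equity and the
    after-tax marginal cost of debt."""
    total = equity_value + debt_value
    return ((equity_value / total) * cost_of_equity
            + (debt_value / total) * pretax_cost_of_debt * (1.0 - tax_rate))

# Hypothetical firm: 80/20 market-value capital structure, 3.5% pretax
# debt cost, 35% marginal tax rate, beta of 1.0.
for rf, mrp in [(0.02, 0.065),   # 10-year yield, ~average survey premium
                (0.03, 0.065),   # 30-year yield, same premium
                (0.02, 0.09)]:   # 10-year yield, forward-looking premium
    k_e = capm_cost_of_equity(rf, 1.0, mrp)
    print(round(wacc(80, 20, k_e, 0.035, 0.35), 4))
```

Running the three cases shows how jointly chosen inputs, not any single one, drive the spread in estimated WACCs.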
The range from minimum to maximum is considerable for both firms, and the
economic impact potentially stunning. To illustrate this, the present value of a level
perpetual annual stream of $10 million would range between $117 million and $218
million for Target, and between $109 million and $151 million for UPS.
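These present values follow from the standard perpetuity formula, PV = CF / r. A short sketch, with discount rates chosen only to roughly bracket the Target range above (they are not the table's actual WACCs):

```python
def perpetuity_pv(annual_cash_flow, discount_rate):
    """Present value of a level perpetual annual stream: PV = CF / r."""
    return annual_cash_flow / discount_rate

# A $10 million perpetual stream (values in $ millions):
low = perpetuity_pv(10.0, 0.085)   # about 117.6 at an 8.5% rate
high = perpetuity_pv(10.0, 0.046)  # about 217.4 at a 4.6% rate
```

Because value moves inversely with the discount rate, even a few hundred basis points of disagreement over WACC nearly doubles the appraised value of the same cash flows.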
Given the positive yield curve in early 2013, the variations in our illustration are
explained by choices for all three elements in applying the CAPM: the risk-free rate,
beta, and the equity market premium assumption. Moreover, we note that use of a 10-
year Treasury rate in these circumstances leads to quite low cost of capital estimates if
one sticks to traditional risk premium estimates often found in textbooks, which are in the
range of 6%. From talking to our respondents, we sense that many are struggling with
how to deal with current market conditions that do not fit typical historical norms, a
topic we discuss in more detail in Section VI.
V. Risk Adjustments to WACC
Finance theory is clear that the discount rate should rise and fall in concert with an
investment’s risk and that a firm’s WACC is an appropriate discount rate only for
average-risk investments by the firm. High-risk, new ventures should face higher
discount rates, while replacement and repair investments should face lower ones.
Attracting capital requires that prospective return increase with risk. Most practitioners
accept this reasoning but face two problems when applying it. First, it is often not clear
precisely how much a given investment’s risk differs from average, and second, even
when this amount is known, it is still not obvious how large an increment should be
added to, or subtracted from, the firm’s WACC to determine the appropriate discount
rate.
We probed the extent to which respondents alter discount rates to reflect
risk differences in questions about variations in project risk, strategic
investments, terminal values, multidivisional companies, and synergies (Table II,
Questions 13 and 17–20). Responses indicate that the great preponderance of financial
advisors and text authors strongly favor varying the discount rate to capture differences
in risk (Table II, Questions 13, 19, and 20). Corporations, on the other hand, are more
evenly split, with a sizeable minority electing not to adjust discount rates for risk
differences among individual projects (Table II, Questions 13 and 17). Comparing these
results with our earlier study, it is worth noting that while only about half of corporate
respondents adjust discount rates for risk, this figure is more than double the percentage
reported in 1998. Despite continuing hesitance, companies are apparently becoming
more comfortable with explicit risk adjustments to discount rates.
A closer look at specific responses suggests that respondents’ enthusiasm for
risk-adjusting discount rates depends on the quality of the data available. Text authors live in
a largely data-free world and thus have no qualms recommending risk adjustments
whenever appropriate. Financial advisors are a bit more constrained. They regularly
confront real-world data, but their mission is often to value companies or divisions
where extensive market information is available about rates and prices.
Correspondingly, virtually all advisors questioned value multidivision businesses by
parts when the divisions differed materially in size and risk, and over 90% are prepared
to use separate division WACCs to reflect risk differences. Similarly, 82% of advisors
value merger synergies and strategic opportunities separately from operating cash
flows, and 73% are prepared to use different discount rates when necessary on the
various cash flows.
There is a long history of empirical research on how shareholder returns vary
across firm size, leading some academics to suggest that a small cap premium should be
added to the calculated cost of capital for such firms. Our study focuses on large
public companies, so it is not surprising that firm responses do not reveal any such
small cap adjustments. In contrast, financial advisors work with a wide spectrum of
companies and are thus more likely to be sensitive to the issue—as indeed they are.
Among financial advisors interviewed, 91% said they would at times increase the
discount rate when evaluating small companies. Of those who did make size
adjustments, half mentioned using Ibbotson (2012) data which show differences in past
returns among firms of different size. The adjustment process varied among advisors, as
the following illustrative quotes suggest. “Adjustments are discretionary, but we tend to
adjust for extreme size.” “We have used a small cap premium, but we don’t have a set
policy to make adjustments. It is fairly subjective.” “We apply a small cap premium
only for microcap companies.” “We use a small cap premium for $300 million and below.”
In important ways corporate executives face a more complex task than financial
advisors or academics. They must routinely evaluate investments in internal
opportunities, and new products and technologies, for which objective, third-party
information is nonexistent. Moreover, they work in an administrative setting
where decision rights are widely dispersed, with headquarters defining
procedures and estimating discount rates, and various operating people throughout the
company analyzing different aspects of a given project. As Table IX reveals, these
complexities lead to a variety of creative approaches to dealing with risk. A number of
respondents describe making discount rate adjustments to distinguish among divisional
capital costs, international as opposed to domestic investments, and leased versus
purchased assets. In other instances, however, respondents indicated they hold the
discount rate constant and deal with risk in more qualitative ways, sometimes by
altering the project cash flows being discounted.
TABLE IX. Adjusting Discount Rates for Risk
Why do corporations risk-adjust discount rates in some settings and use different,
often more ad hoc, approaches in others? Our interpretation is that risk-adjusted
discount rates are more likely to be used when the analyst can establish relatively
objective financial market benchmarks for what the risk adjustment should be. At the
division level, data on comparable companies inform division cost of capital estimates.
Debt markets provide surrogates for the risks in leasing cash flows, and international
financial markets shed light on cross-country risk differences. When no such
benchmarks exist, practitioners look to other more informal methods for dealing with
risk. In our view, then, practical use of risk-adjusted discount rates occurs when the
analyst can find reliable market data indicating how comparable-risk cash flows are
being valued by others.
The same pragmatic perspective was evident when we asked companies how
frequently they re-estimated their capital costs (Table II, Question 14). Even among
firms that re-estimate costs frequently, there was reluctance to alter the underlying
methodology employed or to revise the way they use the number in decision making.
Firms also appear sensitive to administrative costs, evidencing reluctance to make
small adjustments but prepared to revisit the numbers at any time in anticipation of
major decisions or in response to financial market upheavals. Benchmark companies
recognize a certain ambiguity in any cost number and are willing to live with
approximations. While the bond market reacts to minute basis point changes in interest
rates, investments in real assets involve much less precision, due largely to
greater uncertainty, decentralized decision-making, and time-consuming
decision processes. As noted in Table X, one respondent evidences an extreme
tolerance for rough estimates in saying that the firm re-estimates capital costs every
quarter, but has used 10% for a long time because it “seems to have been successful so
far.” Our interpretation is that the mixed responses to questions about risk adjusting and
re-estimating discount rates reflect an often sophisticated set of practical
considerations. Chief among them are the size of the risk differences among investments,
the volume and quality of information available from financial markets, and the realities
of administrative costs and processes. When conditions warrant, practitioners routinely
employ risk adjustments in project appraisal. Acquisitions, valuing divisions and
cross-border investments, and leasing decisions were frequently cited examples. In contrast,
when conditions are not favorable, practitioners are more likely to rely on cruder
capital cost estimates and cope with risk differences by other means.
TABLE X. Re-estimating the Cost of Capital
VI. Recent Institutional and Market Developments
As discussed in the prior section, our interviews reveal that the practice of cost of
capital estimation is shaped by forces that go beyond considerations found in usual
academic treatments of the topic. A feature that was more pronounced than in our prior
study is the influence of a wide array of stakeholders. For instance, a number of
companies said that any change in estimation methods would raise red flags
with auditors looking for process consistency in key items such as impairment estimates.
Some advisors mentioned similar concerns, citing their work in venues where
consistency and precedent were major considerations (e.g., fairness opinions, legal
settings). Moreover, some companies noted that they “outsourced” substantial parts of
their estimation to advisors or data providers. These items serve as a reminder that the
art of cost of capital estimation and its use are part of a larger process of management—
not simply an application of finance theory.
The financial upheaval in 2008–2009 provided a natural test of respondents’
commitment to existing cost of capital estimation methodologies and applications. When
confronted with a major external shock, did companies make wholesale changes or did
they keep to existing practices? When we asked companies and advisors if financial
market conditions in 2008–2009 caused them to change the way they estimate and use
the cost of capital (Table II, Question 15), over three-fifths replied “No.” In the main,
then, there was not a wholesale change in methods. That said, a number of respondents
noted discomfort with cost of capital estimation in recent years. Some singled out high
volatility in markets. Others pointed to the low interest rate environment resulting from
Federal Reserve policies to stimulate the U.S. economy. Combining low interest rates
and typical historical risk premiums created capital cost estimates that some
practitioners viewed as “too low.” One company was so distrustful of market signals
that it placed an arbitrary eight percent floor under any cost of capital estimate, noting
that “since 2008, as rates have decreased so drastically, we don’t feel that [the estimate]
represents a long-term cost of capital. Now we don’t report anything below 8% as a
minimum [cost of capital].”
Among the minority who did revise their estimation procedures to cope with these
market forces, one change was to put more reliance on historical numbers when
estimating interest rates as indicated in Table XI. This is in sharp contrast to both
finance theory and what we found in our prior study. Such rejection of spot rates in
favor of historical averages or arbitrary numbers is inconsistent with the academic view
that historical data do not accurately reflect current attitudes in competitive markets.
The academic challenge today is to better articulate the extent to which the superiority
of spot rates still applies when markets are highly volatile and when governments are
aggressively attempting to lower rates through such initiatives as quantitative easing.
Another change in estimation methods since our earlier study is reflected in the fact
that more companies are using forward-looking risk premiums as we reported earlier
and illustrated in Table VII. Since the forward-looking premiums cited by our
respondents were higher than historical risk premiums, they mitigated or offset to some
degree the impact of low interest rates on estimated capital costs.
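The forward-looking approach described above can be sketched with a constant-growth dividend discount model: back out the cost of equity implied by current prices and expected dividends, then subtract the risk-free rate. This is a minimal illustration, not the respondents' actual methodology, and the dividend yield, growth rate, and Treasury yield below are hypothetical.

```python
# Hedged sketch: implied ("forward-looking") equity market risk premium
# from a constant-growth dividend discount model. All numeric inputs are
# hypothetical illustrations, not survey data.

def implied_market_risk_premium(div_yield_next, growth, risk_free):
    """Gordon growth: k_e = D1/P0 + g, so implied ERP = k_e - r_f."""
    cost_of_equity = div_yield_next + growth
    return cost_of_equity - risk_free

# Example: 2.2% forward dividend yield, 6.0% long-run growth,
# 3.5% long-term Treasury yield.
erp = implied_market_risk_premium(0.022, 0.060, 0.035)
print(f"Implied market risk premium: {erp:.1%}")  # 4.7%
```

Note how a low risk-free rate mechanically raises the implied premium when prices and growth expectations hold steady, which is consistent with the offsetting effect the respondents describe.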
TABLE XI. Judgments Related to Financial Market Conditions
VII. Conclusions
Our research sought to identify the “best practice” in cost of capital estimation through
interviews with leading corporations and financial advisors. Given the huge annual
expenditure on capital projects and corporate acquisitions, the wise selection
of discount rates is of material importance to senior corporate managers.
Consistent with our 1998 study of the same topic, this survey reveals broad
acceptance of the WACC as the basis for setting discount rates. In addition, the survey
reveals general alignment between the advice of popular textbooks and the practices of
leading companies and corporate advisors in many aspects of the estimation of WACC.
The main continuing area of notable disagreement is in the details of implementing the
CAPM to estimate the cost of equity. This paper outlines the varieties of practice in
CAPM use, the arguments in favor of different approaches, and the practical
implications of differing choices.
In summary, we believe that the following elements represent “best current practice”
in the estimation of WACC:
Weights should be based on market-value mixes of debt and equity.
The after-tax cost of debt should be estimated from marginal pretax costs, combined
with marginal tax rates.
CAPM is currently the preferred model for estimating the cost of equity.
Betas are drawn substantially from published sources. Where a number of statistical
publishers disagree, best practice often involves judgment to estimate a beta.
Moreover, practitioners often look to data on comparable companies to help
benchmark an appropriate beta.
Risk-free rate should match the tenor of the cash flows being valued. For most capital
projects and corporate acquisitions, the yield on the U.S. government Treasury bond of
ten or more years in maturity would be appropriate.
Choice of an equity market risk premium is the subject of considerable controversy
both as to its value and method of estimation. While the market risk premium averages
about 6.5% across both our “best practice” companies and financial advisors, the
range of values starts from a low of around 4% and ends with a high of 9%.
Monitoring for changes in WACC should be keyed to major investment opportunities
or significant changes in financial market rates, but should be done at least annually.
Actually flowing a change through a corporate system of project appraisal and
compensation targets must be done gingerly and only when there are material changes.
WACC should be risk-adjusted to reflect substantive differences among different
businesses in a corporation. For instance, financial advisors generally find the
corporate WACC to be inappropriate for valuing different parts of a corporation.
Given publicly traded companies in different businesses, such risk adjustment
involves only modest revision in the WACC and CAPM approaches already used.
Corporations also cite the need to adjust capital costs across national boundaries. In
situations where market proxies for a particular type of risk class are not available,
best practice involves finding other means to account for risk differences.
Best practice is largely consistent with finance theory. Despite broad agreement at the
theoretical level, however, several problems in application can lead to wide
divergence in estimated capital costs. Based on our results, we believe that two areas
of practice cry out for further applied research. First, practitioners need additional
tools for sharpening their assessment of relative project and market risk. The variation
in company-specific beta estimates from different published sources can create
substantial differences in capital cost estimates. Moreover, the use of risk-adjusted
discount rates appears limited by a lack of good market proxies for different risk
profiles. We believe that appropriate use of comparable-risk, cross-industry, or other
risk categories deserves further exploration. Second, practitioners could benefit from
further research on estimating the equity market risk premium. Current practice still
relies primarily on averaging past data over often lengthy periods and yields a wide
range of estimates. Use of forward-looking valuation models to estimate the implied
market risk premium could be particularly helpful to practitioners dealing with
volatile markets. As the next generation of theories sharpens our insights, we feel that
research attention to the implementation of existing theory can make for real
improvements in practice.
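The "best practice" elements above can be collected into a short numerical sketch: market-value weights, a marginal after-tax cost of debt, and a CAPM cost of equity. The beta, rates, and market values below are hypothetical illustrations chosen in the spirit of the survey's ranges, not data from the study.

```python
# Hedged sketch of the "best practice" WACC recipe summarized above.
# All numeric inputs are hypothetical.

def capm_cost_of_equity(risk_free, beta, market_risk_premium):
    """CAPM: k_e = r_f + beta * ERP."""
    return risk_free + beta * market_risk_premium

def wacc(equity_mv, debt_mv, pretax_cost_debt, marginal_tax, cost_equity):
    """Weights from MARKET values; debt cost is marginal and after-tax."""
    total = equity_mv + debt_mv
    w_e, w_d = equity_mv / total, debt_mv / total
    after_tax_debt = pretax_cost_debt * (1 - marginal_tax)
    return w_e * cost_equity + w_d * after_tax_debt

# 10-year Treasury 4%, beta 1.1, ~6.5% premium (the survey average).
k_e = capm_cost_of_equity(0.04, 1.1, 0.065)
rate = wacc(equity_mv=70.0, debt_mv=30.0,   # market values, e.g., $ billions
            pretax_cost_debt=0.055, marginal_tax=0.35, cost_equity=k_e)
print(f"CAPM cost of equity: {k_e:.2%}, WACC: {rate:.2%}")
```

Given the plus-or-minus 100 to 150 basis point accuracy discussed below, such an estimate should be read as a range rather than a point value.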
In fundamental ways, our conclusions echo those of our study fifteen years ago. Our
conversations with practitioners serve as a reminder that cost of capital estimation is
part of the larger art of management—not simply an application of finance theory. There
is an old saying that too often in business we measure with a micrometer, mark with a
pencil, and cut with an ax. Despite the many advances in finance theory, the particular
“ax” available for estimating company capital costs remains a blunt one. Best practice
companies can expect to estimate their weighted average cost of capital with an
accuracy of no more than plus or minus 100 to 150 basis points. This has important
implications for how managers use the cost of capital in decision making. First, do not
mistake capital budgeting for bond pricing. Despite the tools available, effective capital
appraisal continues to require thorough knowledge of the business and wise business
judgment. Second, be careful not to throw out the baby with the bath water. Do not reject
the cost of capital and attendant advances in financial management because your finance
people cannot provide a precise number. When in need, even a blunt ax is better than
nothing. ■
Association for Finance Professionals, 2011, “Current Trends in Estimating and
Applying the Cost of Capital: Report of Survey Results,” accessed March 2011.
Banz, R., 1981, “The Relationship between Return and Market Value of Common
Stock,” Journal of Financial Economics 9 (No. 1), 3–18.
Barra, 2007, Barra Risk Model Handbook, MSCI Barra.
Brealey, R., S. Myers, and F. Allen, 2011, Principles of Corporate Finance, 10th ed.,
New York, NY, McGraw-Hill.
Brigham, E. and M. Ehrhardt, 2013, Financial Management: Theory and Practice,
14th ed., Mason, OH, South-Western Publishing.
Bruner, R., K. Eades, R. Harris, and R. Higgins, 1998, “Best Practices in
Estimating the Cost of Capital: Survey and Synthesis,” Financial Practice and
Education 8 (No. 1), 13–28.
Burns, R. and J. Walker, 2009, “Capital Budgeting Surveys: The Future is Now,”
Journal of Applied Finance 19 (No. 1–2), 78–90.
Carleton, W.T. and J. Lakonishok, 1985, “Risk and Return on Equity: The Use and
Misuse of Historical Estimates,” Financial Analysts Journal 41 (No. 1), 38–48.
Conroy, R. and R. Harris, 2011, “Estimating Capital Costs: Practical Implementation of
Theory’s Insights” Capital Structure and Financing Decisions: Theory and Practice
(K. Baker and J. Martin, eds.), New York, NY, John Wiley & Sons.
Dimson, E., P. Marsh, and M. Staunton, 2011a, “Equity Premia Around the World,”
London Business School Working Paper.
Dimson, E., P. Marsh, and M. Staunton, 2011b, The Dimson-Marsh-Staunton Global
Investment Returns Database (the “DMS Database”), New York, NY, Morningstar Inc.
Fama, E.F. and K.R. French, 1992, “The Cross-Section of Expected Stock Returns,”
Journal of Finance 47 (No. 2), 427–465.
Fernandez, P., J. Aguirreamalloa, and L. Corres, 2011, “US Market Risk Premium used
in 2011 by Professors, Analysts and Companies: A Survey with 5,731 Answers,” IESE
Business School Working Paper.
Graham, J. and L. Mills, 2008, “Using Tax Return Data to Simulate Corporate Marginal
Tax Rates,” Journal of Accounting and Economics 46 (No. 2–3), 366–380.
Harris, R. and F. Marston, 2001, “The Market Risk Premium: Expectational Estimates
Using Analysts’ Forecasts,” Journal of Applied Finance 11 (No. 1), 6–16.
Harris, R. and F. Marston, 2013, “Changes in the Market Risk Premium and the Cost of
Capital: Implications for Practice,” Journal of Applied Finance 23 (No. 1), 34–47.
Higgins, R.C., 2012, Analysis for Financial Management, 10th Ed., New York, NY,
McGraw-Hill.
Ibbotson SBBI, 2012, Classic Yearbook: Market Results for Stocks, Bonds, Bills and
Inflation 1926–2011, Morningstar.
Jacobs, M.T. and A. Shivdasani, 2012, “Do You Know Your Cost of Capital?” Harvard
Business Review (July).
Koller, T., M. Goedhart, and D. Wessels, 2010, Valuation: Measuring and Managing
the Value of Companies, 5th Ed., Hoboken, NJ, John Wiley & Sons, Inc.
Pratt, S. and R. Grabowski, 2010, Cost of Capital: Applications and Examples 4th Ed.,
Hoboken, NJ, John Wiley and Sons.
Reinganum, M.R. 1981, “Misspecification of Capital Asset Pricing: Empirical
Anomalies Based on Earnings’ Yields and Market Values,” Journal of Financial
Economics 9 (No. 1), 19–46.
Ross, S., R. Westerfield, and J. Jaffe, 2013, Corporate Finance, 10th Ed., New York,
NY, McGraw-Hill.
Roche Holding AG: Funding the Genentech Acquisition
We are confident that we will have the financing available when the money is needed
. . . The plan is to use as financing partly our own funds and then obviously bonds
and then commercial paper and traditional bank financing. We will start by going to
the bond market first.
—Roche Chairman Franz Humer
In July 2008, Swiss pharmaceutical company Roche Holding AG (Roche) made an offer
to acquire all remaining outstanding shares of U.S. biotechnology leader Genentech for
(U.S. dollars) USD89.00 per share in cash. Six months later, with equity markets down
35%, Roche announced its recommitment to the deal with a discounted offer of
USD86.50 in cash per share of Genentech stock.
To pay for the deal, Roche needed a massive USD42 billion in cash. To meet
the need, management planned to sell USD32 billion in bonds at various maturities from
1 year to 30 years and in three different currencies (U.S. dollar, euro, and British
pound). The sale would begin with the dollar-denominated offering, followed soon
after by rounds of offerings in the other currencies.
In mid-February 2009, Roche was ready to move forward with what was
anticipated to be the largest bond offering in history. With considerable ongoing turmoil
in world financial markets and substantial uncertainty surrounding the willingness of
Genentech minority shareholders to sell their shares for the reduced offer of USD86.50,
Roche’s financing strategy was certainly bold.
In 1894, Swiss banker Fritz Hoffmann-La Roche, 26, joined Max Carl Traub to take
over a small factory on Basel’s Grenzacherstrasse from druggists Bohny, Hollinger &
Co. Following a difficult first two years, Hoffmann-La Roche bought out his partner and
entered F. Hoffmann-La Roche & Co. in the commercial register.
In the early years, the company’s primary products included sleeping agents,
antiseptics, and vitamins; by the late 1930s, the company had already expanded to 35
countries, an expansion that continued in the decades following the Second World War.
In 1990, the company, by then known as Roche, acquired a majority stake in Genentech,
a South San Francisco biotechnology company, for USD2.1 billion. Genentech’s
research focused primarily on developing products based on gene splicing or
recombinant DNA to treat diseases such as cancer and AIDS. The acquisition gave
Roche a strong foothold in the emerging biologics market as well as stronger presence
in the U.S. market.
Since the 1990s, Roche had maintained focus on its two primary business units,
pharmaceuticals and medical diagnostics; in 2004, Roche sold its over-the-counter
consumer health business to Bayer AG for nearly USD3 billion. In 2008, Roche
expanded its diagnostics business with the acquisition of Ventana Medical Systems for
USD3.4 billion.
By the end of 2008, Roche’s total revenue was just shy of (Swiss francs)
CHF50 billion. The pharmaceutical division contributed 70% of the total Roche
revenue and over 90% of the operating profit. Roche was clearly one of the leading
pharmaceutical companies in the world. Exhibit 11.1 provides a breakdown of Roche’s
2008 revenue by geography and therapeutic area, as well as a detailed overview of
Roche’s top selling pharmaceutical products. Roche and Genentech’s financial
statements are detailed in Exhibit 11.2 and 11.3, respectively, and the stock
performance of the two companies is shown in Exhibit 11.4 .
EXHIBIT 11.1 | 2008 Revenue Breakdown (sales in millions of Swiss francs)
Data source: Roche 2008 annual report.
CEMAI: Central and Eastern Europe, the Middle East, Africa, Central Asia, and the Indian Subcontinent. This acronym
appears to be unique to Roche.
EXHIBIT 11.2 | Roche Financial Statements, Financial Years Ended December 31 (in millions of
Swiss francs)
Data source: Capital IQ.
EXHIBIT 11.3 | Genentech Financial Statements (in millions of U.S. dollars)
Data source: Capital IQ.
EXHIBIT 11.4 | Stock Price Performance of Roche and Genentech, February 2007 to February 2009
(in Swiss francs and U.S. dollars, respectively)
Market Conditions
The past 18 months had been historic for global financial markets, with dramatic
declines in equity and credit markets. Since October 2007, world equity market prices
had declined over 45%. Large numbers of commercial and investment banks had failed.
The global labor market was shedding jobs, resulting in sharp increases in
unemployment rates. Broad economic activity was also affected, with output declining
sharply.
In response to what some feared would become the next Great Depression, world
governments made massive investments in financial and industrial institutions. In an
effort to stimulate liquidity, central banks had lowered interest rates.
Note to Exhibit 11.4 (data source: Capital IQ): Correspondence of values between the
axes is approximate, based on exchange rates on February 28, 2007; the average rate
for the period was USD1.13/CHF1.00.
The market uncertainty was accompanied by a massive “flight to quality” as global
investors moved capital to government securities (particularly U.S. Treasuries),
thereby driving government yields to historic lows. Exhibit 11.5 shows the prevailing
yield curve in
U.S. dollars, euros, and British pounds. With benchmark yields declining but overall
borrowing rates rising, the credit spreads (the difference between corporate yields and
benchmark yields) were expanding to historic levels. Exhibit 11.6 contains the
prevailing credit spreads over benchmark yields for U.S. industrial corporate
bonds based on bond ratings from bond-rating agency Standard and Poor’s.
Exhibit 11.7 plots historical trends in yields of bonds by various credit ratings over the
past two years. Exhibit 11.8 provides a definitional overview of Standard and Poor’s
credit ratings. Roche’s current credit rating with Standard and Poor’s was AA−, and
with Moody’s was Aa1. Exhibit 11.9 details median values for various financial ratios
for companies rated within a particular category for 2007 and 2008.
EXHIBIT 11.5 | Annual Yield Rate to Maturity (U.S. Dollar, Euro, British Pound), February 2009 (in percent)
Data source: Bloomberg.
The euro benchmark is obtained from the mid-rate of the euro interest rate swap versus EURIBOR.
EXHIBIT 11.6 | U.S. Yield Spreads of U.S. Industrial Corporate Bonds over Comparable Maturity of
U.S. Treasuries for S&P’s Bond-Rating Categories, February 2009 (in basis points)
Data source: Bloomberg.
EXHIBIT 11.7 | History of U.S. Bond Yields for 30-Year Maturities, February 2006 to February 2009 (in percent)
Data source: Datastream, Mergent Bond Record.
EXHIBIT 11.8 | S&P Credit Ratings Overview
Data source: Guide to Credit Rating Essentials, Standard and Poor’s (accessed February 16, 2011).
EXHIBIT 11.9 | Median Financial Ratio Values for all U.S. Rated Industrial Companies, 2007 and 2008
Despite the uncertainty in the credit markets, corporate transactions were
reawakening in the pharmaceutical industry. Pfizer had recently agreed to acquire Wyeth
for USD68 billion. In that deal, five banks had agreed to lend Pfizer USD22.5 billion,
and Pfizer was funding the remaining USD45.5 billion with a combination of cash and
stock.
The Bond Offering Process
The issuance of publicly traded bonds, in addition to the pricing and marketing of the
deal, required the satisfaction of certain legal requirements. Because of the complexity
and importance of these two processes, corporations typically hired investment bankers
to provide assistance. Given the size of the deal, Roche hired three banks as joint lead
managers for the U.S. dollar deal (Banc of America Securities, Citigroup Global
Markets, and JPMorgan) and four banks for the euro and pound sterling deals
(Barclays Capital, BNP Paribas, Deutsche Bank, and Banco Santander).
Data source for Exhibit 11.9: Case writer analysis of Compustat data.
Because Roche’s bonds would be publicly traded, it had to file with the appropriate
regulatory agencies in the countries where the bonds would be issued. Simultaneous
with the drafting of the documentation by legal teams, the underwriting banks’ debt
capital markets and syndication desks began the marketing process. The initial phase of
this process was the “road show.” During the road show, management teams for Roche
and the banks held initial meetings with investors from all over the world. The Roche
management team expected to meet with investors in many of the major investment
centers in the United States and Europe.
Given the global nature of Roche’s business, the banks determined that a mix of
bonds at different maturities and in different currencies was the best option. By
matching differing maturities and currencies to the company’s operating cash flows in
those currencies, Roche was able to reduce exchange rate risk. Exhibit 11.10 provides
an overview of the different currency and maturity tranches planned in the offering. The
final amounts raised from each offering, along with the coupon rate, were not yet
determined because pricing was expected to be highly influenced by investor demand.
To ensure that the bond offering raised the targeted proceeds, the coupon rate was set to
approximate the anticipated yield, such that the bond traded at par. Following market
conventions, the U.S. dollar bonds would pay interest semiannually, and the euro and
sterling issues would pay interest annually.
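The mechanics of setting the coupon so that a bond trades at par can be illustrated with a simple present-value sketch. The face value, yield, and maturity below are hypothetical, not Roche's actual terms.

```python
# Hedged illustration: a fixed-coupon bond prices at par when the coupon
# rate equals the anticipated yield. Numbers are hypothetical.

def bond_price(face, coupon_rate, yield_rate, years, freq):
    """Price of a fixed-coupon bond; freq=2 for semiannual coupons (the
    U.S. dollar convention), freq=1 for annual (euro/sterling)."""
    c = face * coupon_rate / freq          # coupon per period
    y = yield_rate / freq                  # yield per period
    n = years * freq                       # number of periods
    pv_coupons = sum(c / (1 + y) ** t for t in range(1, n + 1))
    pv_face = face / (1 + y) ** n
    return pv_coupons + pv_face

# Coupon set equal to the anticipated 6% yield -> the bond prices at par.
print(round(bond_price(100, 0.06, 0.06, 10, 2), 6))   # 100.0
# If investor demand pushed the required yield to 6.5%, the same coupon
# would price the bond below par:
print(round(bond_price(100, 0.06, 0.065, 10, 2), 2))
```

This is why the final coupon on each tranche had to wait for the book-building feedback: the coupon is chosen only once the anticipated yield is known.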
EXHIBIT 11.10 | Plan for Currency and Maturity of Roche Bond Offering Tranches
The coupon payments of the shorter-maturity tranches were to be floating, with the
interest paid equal to the short-term interbank rate (LIBOR) plus a credit spread. The
longer-maturity tranches were to pay a fixed coupon for the life of the bond. Investors
typically referenced the “price” of bonds as the spread over the applicable risk-free
rate. The risk-free rate was commonly established as the respective government
borrowing rate and was referred to as the benchmark, sovereign, or Treasury rate. The
logic of the credit spread was that corporate bonds were riskier than the benchmark
bonds, so to entice investors, the issuer had to offer a yield above the risk-free rate.
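The quoting convention just described can be sketched as follows; the benchmark rates and spreads are hypothetical illustrations, not Roche's pricing.

```python
# Hedged sketch of the spread-over-benchmark quoting convention.
# All rates and spreads are hypothetical.

def corporate_yield(benchmark_rate, spread_bps):
    """Required yield = government benchmark + credit spread (in bps)."""
    return benchmark_rate + spread_bps / 10_000

# Fixed-rate tranche: 10-year Treasury at 3.0% plus a 250 bp spread.
print(f"{corporate_yield(0.030, 250):.2%}")   # 5.50%

def floating_coupon(libor, spread_bps):
    """Floating tranche: coupon resets each period to LIBOR + spread."""
    return libor + spread_bps / 10_000

# Short-maturity floating tranche: LIBOR at 1.2% plus a 100 bp spread.
print(f"{floating_coupon(0.012, 100):.2%}")   # 2.20%
```

The spread, not the absolute yield, is what investors negotiate during book-building, since the benchmark moves with the government market.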
Note to Exhibit 11.10 (data source: Company documents): Prevailing exchange rates at
the time were CHF1.67/GBP1.00, CHF1.18/USD1.00, and CHF1.48/EUR1.00.
During the road show, banks received feedback from investors on the demand for
each tranche. Determining the final size and pricing of each issue was an iterative
process between the investors, banks, and issuer. In the case of Roche, if investors
showed strong demand for the four-year euro tranche, Roche could decide to either
issue more at that price (thus reducing the amount of another tranche) or lower the
coupon and pay a lower interest rate on the four-year euro issue. The banks’ process of
determining demand and receiving orders for each issue was known as book-building.
Bond prices were set based on prevailing yields of bond issues by similar companies.
Exhibit 11.11 and 11.12 provide a sample of prevailing prices and terms of company
bonds traded in the market, in addition to various equity market and accounting data.
EXHIBIT 11.11 | Prevailing Prices of Sample of Recently Rated Corporate Bonds (Mid-February 2009)
Data source: Case writer analysis using Bloomberg data.
EXHIBIT 11.12 | Selected Comparable Companies’ Data for 2008 (in millions of U.S. dollars)
The Genentech Deal
On July 21, 2008, Roche publicly announced an offer to acquire the 44.1% of
Genentech’s outstanding shares that it did not already own. The offer price of USD89.00
represented a 19% premium over Genentech’s average share price over the previous
month. Roche management believed the premium was justified by economies: it
estimated that, following the transaction, the combined entity could realize USD750
million to USD850 million in operational efficiencies. Following the offer, Genentech’s
stock price shot up beyond the USD89.00 offer price in anticipation that Roche would
increase its offer.
On August 13, 2008, a special committee of Genentech’s board of directors (those
without direct ties to Roche) responded to Roche’s offer. The committee stated that the
offer “substantially undervalues the company.” Without the support of Genentech’s
board of directors, Roche needed either to negotiate with the board or take the offer
directly to shareholders with what was known as a tender offer. In that case,
shareholders would receive a take-it-or-leave-it offer. If sufficient shareholders
“tendered” their shares, the deal would go through regardless of the support of the
board.
Note to Exhibit 11.12 (data source: Capital IQ and case writer analysis): Because the
Genentech financial figures were already consolidated in the Roche financial
statements, only the debt and interest expense was expected to vary; the pro forma
interest expense was based on an arbitrary 5% interest rate.
Over the next six months, capital markets fell into disarray. As credit markets
deteriorated, Genentech shareholders realized that Roche might not be able to finance an
increased bid for the company, and the share price continued to decline through the end
of the year. Contemporaneously with the deal, Genentech awaited the announcement of
the clinical trial results for several of its next generation of potential drugs, including its
promising cancer drug Avastin.
On January 30, 2009, Roche announced its intention to launch a tender offer for the
remaining shares at a reduced price of USD86.50. The revised offer was contingent on
Roche’s ability to obtain sufficient financing to purchase the shares. The announcement
was accompanied by a 4% drop in Genentech’s share price, to USD80.82. Bill
Tanner, analyst at Leerink Swann, warned Genentech shareholders that the stock was
overvalued and that if upcoming Genentech drug trials showed mediocre results then the
stock would fall into the USD60 range. He encouraged shareholders to take the sure
USD86.50 offer claiming that “DNA’s [the stock ticker symbol for Genentech]
best days may be over.”
Jason Napodano, analyst at Zacks Investment Research, claimed that Roche was
trying “to pull the wool over the eyes of Genentech shareholders.” He continued,
“Roche is trying to get this deal done before the adjuvant colon cancer data comes out
and Genentech shareholders are well aware of that. I don’t know why they would tender
their shares for [USD]86.50, which is only 10% above today’s price, when they can get
closer to $95 to $100 a share if they wait.”
The Financing Proposal
Unlike Pfizer in its acquisition of Wyeth, Roche could not issue equity to Genentech
shareholders. Roche was controlled by the descendants of its founder: the Oeri, Hoffmann,
and Sacher families. The company maintained two classes of shares, bearer and
Genussscheine (profit-participation) shares. Both share classes had equal economic
rights (i.e., same dividends, etc.) and traded on the Swiss Stock Exchange, but the
bearer shares were the only shares with voting rights, and the founding family controlled
just over 50% of the bearer shares. This dual-share structure existed before modern
shareholder rights legislation in Switzerland and was grandfathered in. In the event
Roche were to issue equity to Genentech shareholders, this dual-class share structure
would have to be revisited, and the family might lose control. Given this ownership
structure, Roche was forced to finance the deal entirely with debt and cash on hand.
When Roche originally announced the transaction, the company had intended to
finance the acquisition with a combination of bonds and loans from a variety of
commercial banks. The collapse of the financial markets caused many of the commercial
banks to demand a much higher interest rate on the loans than originally anticipated by
Roche. As a result of the change in market conditions, Roche was limited to the bond
market for the majority of its financing. Despite the magnitude of the debt-financing
need, the investment banks assisting in the deal expected that Roche’s cash flow was
stable enough to manage the additional level of debt.
To ensure that Roche raised the necessary capital, it was important to correctly
anticipate the required yield on each bond and set the coupon rate at the rate that would
price the bond at par. This was done by simply setting the coupon rate equal to the
anticipated yield. With such a substantial amount of money riding on the deal, it was
critical that Roche correctly set the price, despite the immense uncertainty in capital
markets.
H. J. Heinz: Estimating the Cost of Capital in
Uncertain Times
To do a common thing uncommonly well brings success.
—H. J. Heinz Founder Henry John Heinz
As a financial analyst at the H. J. Heinz Company (Heinz) in its North American
Consumer Products division, Solomon Sheppard, together with his co-workers,
reviewed investment proposals involving a wide range of food products. Most
discussions in his office focused on the potential performance of new products and
reasonableness of cash flow projections. But as the company finished its 2010 fiscal
year at the end of April—with financial markets still in turmoil from the onset of the
recession that started at the end of 2007—the central topic of discussion was the
company’s weighted average cost of capital (WACC).
At the time, there were three reasons the cost of capital was a subject of
controversy. First, Heinz’s stock price had just finished a two-year roller coaster ride:
Its fiscal year-end stock price dropped from $47 in 2008 to $34 in 2009, then rose back
to $47 in 2010, and a vigorous debate ensued as to whether the weights in a cost of
capital calculation should be updated to reflect these changes as they occurred. Second,
interest rates remained quite low—unusually so for longer-term bond rates; there was
concern that updating the cost of capital to reflect these new rates would lower it and
thereby bias decisions in favor of accepting projects. Third, there was a strong
sense that, as a result of the recent financial meltdown, the appetite for risk in the market
had changed, but there was no consensus as to whether this should affect the cost of
capital of the company and, if so, how.
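The weights debate can be made concrete with a small sketch that recomputes the market-value equity weight, and hence the WACC, at each fiscal-year-end stock price. The share count, debt value, and component costs below are hypothetical placeholders, not Heinz's actual figures.

```python
# Hedged sketch: how swings in the stock price move market-value weights
# and the resulting WACC. All inputs are hypothetical placeholders.

SHARES = 315e6              # hypothetical share count
DEBT_MV = 5.0e9             # hypothetical market value of debt
COST_EQUITY = 0.085         # hypothetical component costs (held fixed
AFTER_TAX_COST_DEBT = 0.040 # to isolate the weights effect)

for year, price in [(2008, 47), (2009, 34), (2010, 47)]:
    equity_mv = SHARES * price
    w_e = equity_mv / (equity_mv + DEBT_MV)       # market-value weight
    wacc = w_e * COST_EQUITY + (1 - w_e) * AFTER_TAX_COST_DEBT
    print(f"{year}: equity weight {w_e:.1%}, WACC {wacc:.2%}")
```

Because the equity weight falls when the price drops, updating weights continuously would have pushed the estimated WACC down in 2009 and back up in 2010 even with component costs held constant, which is exactly what made the timing of updates contentious.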
When Sheppard arrived at work on the first of May, he found himself at the very
center of that debate. Moments after his arrival, Sheppard’s immediate supervisor asked
him to provide a recommendation for a WACC to be used by the North American
Consumer Products division. Recognizing its importance to capital budgeting decisions
in the firm, he vowed to do an “uncommonly good” job with this analysis, gathered the
most recent data readily available, and began to grind the numbers.
Heinz and the Food Industry
In 1869, Henry John Heinz launched a food company by making horseradish from his
mother’s recipe. As the story goes, Heinz was traveling on a train when he saw a sign
advertising 21 styles of shoes, which he thought was clever. Since 57 was his lucky
number, the entrepreneur began using the slogan “57 Varieties” in his advertising. By
2010, the company he founded had become a food giant, with $10 billion in
revenues and 29,600 employees around the globe.
Heinz manufactured products in three categories: Ketchup and Sauces, Meals and
Snacks, and Infant Nutrition. Heinz’s strategy was to be a leader in each product
segment and develop a portfolio of iconic brands. The firm estimated that 150 of the
company’s brands held either the number one or number two position in their respective
target markets. The famous Heinz Ketchup, with sales of $1.5 billion a year or 650
million bottles sold, was still the undisputed world leader. Other well-known brands
included Weight Watchers (a leader in dietary products), Heinz Beans (in 2010, the
brand sold over 1.5 million cans a day in Britain, the “biggest bean-eating nation in the
world”), and Plasmon (the gold standard of infant food in the Italian market). Well-known
brands remained the core of the business, with the top 15 brands accounting for
about 70% of revenues and each generating over $100 million in sales.
Heinz was a global powerhouse. It operated in more than 200 countries. The
company was organized into business segments based primarily on region: North
American Consumer Products, U.S. Foodservice, Europe, Asia Pacific, and Rest of
World. About 60% of revenues were from outside the United States and the North
American Consumer Products and Europe segments were of comparable size.
Increasingly, the company was focusing on emerging markets, which had generated 30%
of recent growth and comprised 15% of total sales.
The most prominent global food companies based in the United States included
Kraft Foods, the largest U.S.-based food and beverage company; Campbell Soup
Company, the iconic canned food maker; and Del Monte Foods, one of the largest producers
and distributors of premium-quality branded food and pet products focused on the U.S.
market (and a former Heinz subsidiary). Heinz also competed with a number of other
global players such as Nestlé, the world leader in sales, and Unilever, the British-Dutch
consumer goods conglomerate.
Recent Performance
With the continued uncertainty regarding any economic recovery and deep concerns
about job growth over the previous two years, consumers had begun to focus on value in
their purchases and to eat more frequently at home. This proved a benefit for those
companies providing food products and motivated many top food producers and
distributors to focus on core brands. As a result, Heinz had done well in both 2009 and
2010, with positive sales growth and profits above the 2008 level both years, although
2010 profits were lower than those in 2009. These results were particularly
striking since a surge in the price of corn syrup and an increase in the cost of
packaging had necessitated price increases for most of its products. Overseas sales
growth, particularly in Asia, had also positively affected the company’s operations.
Exhibit 12.1 and Exhibit 12.2 present financial results for the years 2008, 2009, and 2010.
EXHIBIT 12.1 | Income Statement (numbers in thousands except per-share amounts; fiscal year
ends in April)
Data source: H. J. Heinz SEC filings, 2008–10.
EXHIBIT 12.2 | Balance Sheet (numbers in thousands except per-share amounts; fiscal year ends in
The relation between food company stock prices and the economy was complicated.
In general, the performance of a food products company was not extremely sensitive to
market conditions and might even benefit from market uncertainty. This was clear to
Heinz CFO Art Winkelblack, who in early 2009 had remarked, “I’m sure glad we’re
selling food and not washing machines or cars. People are coming home to Heinz.”
Still, an exceptionally prolonged struggle or another extreme market decline could drive
more consumers to the private-label brands that represented a step down from the Heinz
brands. While a double-dip recession seemed less likely in mid-2010, it was clear the
economy continued to struggle, and this put pressure on margins.
While the stock price for Heinz had been initially unaffected by adverse changes in
the economy and did not decline with the market, starting in the third quarter of 2008,
Heinz’s stock price began tracking the market’s movements quite closely. Figure 12.1
plots the Heinz stock price against the S&P 500 Index (normalized to match Heinz’s stock
price at the start of the 2005 fiscal year). The low stock price at the start of 2009 had
been characterized by some observers as an overreaction, and even after the subsequent
recovery, some still considered the stock undervalued.
Cost of Capital Considerations
Recessions certainly could wreak havoc on financial markets. Given that the recent
downturn had been largely precipitated by turmoil in the capital markets, it was not
surprising that the interest rate picture at the time was unusual. Exhibit 12.3 presents
information on interest rate yields. As of April 2010, short-term government rates and
even commercial paper for those companies that could issue it were at strikingly low
levels. Even long-term rates, which were typically less volatile, were low by historic
standards. Credit spreads, which had drifted upwards during 2008 and jumped upwards
during 2009, had settled down but were still somewhat high by historic standards.
FIGURE 12.1 | Heinz stock price and normalized S&P 500 Index.
Data sources: S&P 500 and Yahoo! Finance.
Interestingly, the low level of long-term rates had more than offset the rise in credit
spreads, and borrowers with access to debt markets had low borrowing costs.
Sheppard gathered some market data related to Heinz (also shown in Exhibit 12.3 ).
He easily obtained historic stock price data. Most sources he accessed estimated the
company’s beta using the previous five years of data at about 0.65. Sheppard obtained
prices for two bonds he considered representative of the company’s outstanding
borrowings: a note due in 2032 and a note due in 2012. Heinz had regularly accessed
the commercial paper market in the past, but that market had recently dried up.
Fortunately, the company had other sources for short-term borrowing and Sheppard
estimated these funds cost about 1.20%.
What most surprised Sheppard was the diversity of opinions he obtained regarding
the market risk premium. Integral to calculating the required return on a company’s
equity using the capital asset pricing model, this rate reflected the incremental return an
EXHIBIT 12.3 | Capital Market Data (yields and prices as of the last trading day in April of the year indicated)
Note that bond data were slightly modified for teaching purposes.
Data sources: Federal Reserve, Value Line, Morningstar, and case writer estimates.
The 20-year yield is used for 2003–05, when the 30-year was not issued.
investor required for investing in a broad market index of stocks rather than a riskless
bond. When measured over long periods of time, the average premium had been about
7.5%. But when measured over shorter time periods, the premium varied greatly;
recently the premium had been closer to 6.0% and by some measures even lower. Most
striking were the results of a survey of CFOs indicating that expectations were for an
even lower premium in the near future—close to 5.0%. On the other hand, some
asserted that market conditions in 2010 only made sense if a much higher premium—
something close to 8%—were assumed.
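The CAPM mechanics behind Sheppard's dilemma can be sketched in a few lines of Python. This is an illustrative calculation only: the 4.0% risk-free rate below is an assumed placeholder (the case directs readers to Exhibit 12.3 for actual yields), while the 0.65 beta and the candidate premiums come from the text.

```python
# Illustrative CAPM cost-of-equity sketch. The risk-free rate is an
# assumed placeholder; beta and the candidate premiums come from the case.

def cost_of_equity(risk_free, beta, market_premium):
    """CAPM: r_e = r_f + beta * (market risk premium)."""
    return risk_free + beta * market_premium

RISK_FREE = 0.040   # assumption for illustration; see Exhibit 12.3 for yields
BETA = 0.65         # five-year beta estimate cited in the case

for label, premium in [("long-run average", 0.075), ("recent", 0.060),
                       ("CFO survey", 0.050), ("high-end view", 0.080)]:
    print(f"{label:>16}: {cost_of_equity(RISK_FREE, BETA, premium):.2%}")
```

With a low beta of 0.65, even the wide spread in premium opinions (5.0% to 8.0%) translates into a roughly two-percentage-point spread in the resulting cost of equity.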
As Sheppard prepared for his cost of capital analysis and recommendation, he
obtained recent representative data for Heinz’s three major U.S. competitors
(Exhibit 12.4). This information would allow Sheppard to generate cost-of-capital
estimates for these competitors as well as for Heinz. Arguably, if market conditions for
Heinz were unusual at the time, the results for competitors could be more representative
for other companies in the industry. At the very least, Sheppard knew he would be more
comfortable with his recommendation if it were aligned with what he believed was
appropriate for the company’s major competitors.
EXHIBIT 12.4 | Comparable Firm Data
Data sources: Value Line; H. J. Heinz SEC filings, 2008–10; case writer estimates; Morningstar.

Page 197
CASE 13 Royal Mail plc: Cost of Capital
As Hillary Hart, senior financial analyst at the British postal service company Royal
Mail plc (Royal Mail), approached company headquarters near Blackfriars Bridge in
London, she reflected on the momentous nature of the seven years she had spent in that
building. During that time, the company had faced important changes in broad demand
for letters and parcels, significant restructuring of government regulation, competitive
entry into its long-standing monopoly position, deep workforce cuts, wide-scale labor
negotiations and strikes, and, lastly, the transition from 500 years as a government-owned
enterprise to a massive for-profit company traded on the London Stock Exchange.
Now, on July 21, 2015, Hart had an upcoming meeting with several senior managers
of the company. Central to the meeting was an evaluation of the cost of capital for Royal
Mail. The cost of capital had become a point of discussion for two reasons. First, since
privatization, Royal Mail was increasingly looking to shed its government-based
decision-making policies of the past for a more market-based orientation. The adoption
of an investor-oriented cost-of-capital benchmark provided an important step in moving
forward the governance of company investment policy. That said, it was by no means
easy to shift company objectives toward rewarding investors and away from a focus on
facilitating national employment and communication needs. Second, the company was
under an important review by the British regulatory authority, the Office of
Communications (Ofcom). Deregulation of private postal services was still very much
an experiment in Britain. Due to recent competitive events in the country, Ofcom was
reevaluating existing regulatory policies. The cost of capital provided an appropriate
benchmark by which to properly assess the profitability of Royal Mail’s operations and
Page 198
the viability of Royal Mail’s operations under the existing regulatory policies.
Royal Mail plc
Royal Mail originated in 1516 when King Henry VIII established Sir Brian Tuke as
“Master of the Posts” with the charge to organize a postal service to carry mail between
British cities. Throughout its long history, Royal Mail proved to be the world’s
foremost pioneer in postal services. The company introduced many features
that became ubiquitous to postal services worldwide. In 1661, the Royal Mail
postmaster introduced the first postmark with the declaration [in his own spelling], “A
stamp is invented that is putt upon every letter shewing the day of the moneth that every
letter comes to the office, so that no Letter Carryer may dare detayne a letter from post
to post; which before was usual.” In the late 1700s, Royal Mail was the first postal
service to operate a fleet of independent mail coaches and outfit postal carriers in
uniforms. In 1840, Royal Mail was the first mail service to offer letter delivery services
throughout the entire country for a single rate. To certify postage prepayment in such a
system, Royal Mail invented the postage stamp. The original postage stamp, the Penny
Black, was a one-penny stamp bearing the face of Queen Victoria (see Exhibit 13.1),
and provided prepaid postage for letters of up to half an ounce to be delivered
anywhere in Great Britain and Ireland. In recognition of Royal Mail’s role in
developing the first postage stamp, British stamps remained the only postage stamps in
the world that did not specify the country of issuance. In the mid-
19th century, Royal Mail introduced letter boxes where senders could simply deposit
letters to be sent with the affixed paid postage.
EXHIBIT 13.1 | The Penny Black, the World’s First Postage Stamp (Issued in 1840)
Source: “Penny black,” posted to public domain by the General Post Office of the United Kingdom of Great Britain and Ireland, August 28, 2007, (accessed Jan. 3, 2017).
Now, due to the dramatic changes in postal services demand at the beginning of the
21st century, it was once again a time for innovation at Royal Mail. In 2006, the British
government had removed Royal Mail’s monopoly status, allowing private companies to
compete in collecting and sorting mail in the United Kingdom. With the change,
government regulation was reduced, and Royal Mail was freed to set its own postage
rates. Over the next six years, Royal Mail responded by increasing the price of First-
Class postage from 32 pence to 60 pence.
In 2011, Parliament passed the important Postal Services Act. In this Act, the Postal
Services Commission was disbanded and the regulatory purview of postal services in
the United Kingdom shifted to Ofcom. The intent was to dramatically alter the
regulation of Royal Mail. Despite the increased liberties, however, the Act designated
that Royal Mail was required to maintain six-days-a-week, one-price-goes-anywhere
universal service for letters regardless of the ownership structure of the company.
A decision to privatize the British mail service followed a government conclusion
that Royal Mail was less efficient and disciplined than many other post offices
elsewhere in Europe, and that it “urgently needed commercial confidence, capital, and
corporate experience to modernize quickly and effectively.” This need followed a
sustained worldwide decline in letter volume in the first decade of the twenty-first
century as a result of the substitution of electronic communication. Vince Cable, the
UK’s Business Secretary, had argued his position before the House of Commons: “The
government’s decision on the sale is practical, it is logical, it is a commercial decision
designed to put Royal Mail’s future in a long-term sustainable business.” Over the
years since deregulation, the financial performance of Royal Mail improved and its
operating margin was up fourfold. Cash flow for the company had grown from
negative GBP504 million in fiscal year 2009 to positive GBP282 million in
fiscal year 2013.
The privatization of Royal Mail came in October 2013, when the British government
sold 60% of its 1 billion shares of Royal Mail to the public for 330 pence each. The
transaction generated proceeds of GBP2 billion. Of the shares sold, 73% were sold to
institutional investors and 23% were sold to 690,000 individual investors.
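The reported proceeds follow directly from the offer terms given in the text, as a quick cross-check shows:

```python
# Cross-check of the privatization proceeds using the figures in the text:
# 60% of 1 billion shares sold at 330 pence (GBP 3.30) per share.
shares_sold = 0.60 * 1_000_000_000
price_gbp = 3.30
proceeds = shares_sold * price_gbp
print(f"Proceeds: GBP {proceeds / 1e9:.2f} billion")  # ~GBP 1.98 billion
```

The GBP1.98 billion result is consistent with the "GBP2 billion" proceeds reported in the case.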
On Royal Mail’s first day of trading on the London Stock Exchange, its share price
rose 38% to 455 pence. Despite high-profile strikes by postal workers who were
strongly opposed to the privatization, the share price for Royal Mail rose in the months
following the sale. By January 2014, the share price was trading at over 600 pence, a
more than 80% increase over the sale price. Although the share price suffered a
substantial price reversal in mid-2014, over the first six months of 2015 Royal Mail
shares had recovered and closed on July 20th at 511 pence. Exhibit 13.2 provides a
history of returns to Royal Mail equity investors relative to returns in the broad FTSE
100 Index since October 2013. Despite the strong and growing profits and assets, there
was still substantial uncertainty about the value of the shares.
EXHIBIT 13.2 | Cumulative Weekly Total Stock Returns—Royal Mail and FTSE 100 Index
Source: Created by author using data from
Two sources of uncertainty were competition and regulation. In April 2012, the
Dutch postal service company TNT had entered the UK door-to-door postal services
market with a subsidiary business that would eventually become known as Whistl. Over
the ensuing years, the financial viability of the venture had proved challenging. Just last
month, Whistl management had announced that the company would be suspending its
door-to-door business to focus on its bulk mail processing service in Britain. It would
rely on Royal Mail’s infrastructure for the “final mile” of delivery service. Two
thousand Whistl employees were laid off. The British government responded by calling
into question Ofcom’s regulatory policies. In response, Ofcom began a high-profile
review of the postal services market in an effort to stimulate competition. One concern
was that Royal Mail maintained pricing power in the market such that it could engage in
anticompetitive pricing. Such allegations raised the question of what was the
appropriate level of profitability for a firm such as Royal Mail.
Page 200
Moya Greene, CEO of Royal Mail, expressed the company’s willingness to fully
comply with the review. She also indicated that Royal Mail was facing difficulties of its
own but was determined to continue to get better. She specifically stated:
This has been a challenging year. Through a continued focus on efficiency and tight
cost control, we have offset the impact of lower than anticipated UK parcel
revenue this year, so that operating profit before transformation costs is in line
with our expectations. It has also been a year of innovation, with a range of new
initiatives delivered at pace. We have introduced around 30 new projects,
including services, products and promotions, to improve our customer offering.
One example was the recent announcement that the company was pursuing an
efficiency objective by purchasing 76,000 hand-held scanner devices from Zebra
Technologies. Other initiatives included upstream investment opportunities
such as the acquisition of online shopping platforms such as Mallzee and
Just that morning, Greene had issued an update on the company performance for the
most recent quarter and emphasized substantial success and concerns. She reported that
strong performance from the parcels business was offsetting declines in letter delivery
revenue, and that the company was committed to investing in innovation and efficiency.
She cautioned that the rest of the year’s performance would depend on the critically
important Christmas period. Exhibit 13.3 shows historical data on Royal Mail unit
volume. Exhibits 13.4 and 13.5 provide historical financial statements for Royal Mail.
EXHIBIT 13.3 | Royal Mail Unit Volume History (Millions of Units, Period Ending March 31)
Source: Company annual reports.
EXHIBIT 13.4 | Royal Mail Consolidated Income Statement (Reported as of the End of March in
Millions of GBP)
Source: Royal Mail annual report, 2015.
EXHIBIT 13.5 | Royal Mail Consolidated Balance Sheet (Reported as of the End of March in Millions of GBP)
Source: Royal Mail annual report, 2015.
The Cost of Capital
The cost of capital was theoretically defined as the prevailing return that investors
could earn on alternative investments of similar risk. As such, it was inherently a figure
determined by market forces rather than by the company. One attractive feature of the
cost of capital was that it provided an opportunity cost benchmark for evaluating
investment returns. Business returns that were expected to exceed the cost of capital
were considered value creating to investors since the expected returns exceeded what
investors could generate on their own with investments of similar risk. Business returns
that were expected to be less than the cost of capital were considered value destroying
to investors. Moreover, the cost of capital provided an estimate of the fair return for
investors in competitive businesses. It was expected that businesses in competitive
markets would, on average, earn their cost of capital.
In estimating the cost of capital it was common to consider all capital used in the
business. To estimate the opportunity cost of total business capital, it was common to
use a weighted average of the prevailing required return values for the various types of
investors in the business, such as debt holders and equity holders. This approach was
called the weighted average cost of capital (WACC).
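The WACC computation described above can be sketched in a few lines. All inputs below are hypothetical placeholders for illustration, not Royal Mail figures; Brooks's actual inputs appear in Exhibit 13.6.

```python
# Minimal WACC sketch under stated assumptions. The inputs are
# hypothetical placeholders, not figures from the case exhibits.

def wacc(equity_value, debt_value, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital, with a tax shield on debt."""
    total = equity_value + debt_value
    weight_equity = equity_value / total
    weight_debt = debt_value / total
    return weight_equity * cost_equity + weight_debt * cost_debt * (1 - tax_rate)

# Hypothetical inputs for illustration only (market values in GBP millions)
estimate = wacc(equity_value=5_000, debt_value=1_000,
                cost_equity=0.075, cost_debt=0.035, tax_rate=0.20)
print(f"WACC = {estimate:.2%}")
```

The weights are market-value proportions of each capital source, and the debt cost is reduced by the tax rate because interest is tax-deductible.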
In response to the burgeoning interest in a cost of capital estimate for Royal Mail,
Hart had asked a colleague, Kyle Brooks, to provide an estimate. Given the unique
nature of Royal Mail’s business and its limited life in public capital markets, Hart
recognized that this was not an easy task. Still, Brooks had quickly generated an
estimate of 3.828% and provided a stack of documentation. Exhibit 13.6 provides his
analysis and summary. Exhibits 13.7 through 13.9 provide his supporting documents.
EXHIBIT 13.6 | Kyle Brooks’s Cost-of-Capital Analysis

Source: Created by author.
EXHIBIT 13.7 | One-Month Interbank Lending Rate and Government Bond Yields (%)
Data source: Bank of England Statistical Interactive Database,
EXHIBIT 13.8 | UK Corporate Benchmark Bond Yields for 10-Year Maturity in GBP (July 14, 2015)
Sources: Thomson Reuters and author estimates.
EXHIBIT 13.9 | Financial Data for Comparables (Market Data as of July 14, 2015, Other Data is Most
Recent Available)
Data source: Financial Times,
Page 207
CASE 14 Chestnut Foods
In early 2014, stock performance at Minneapolis-based Chestnut Foods (Chestnut) had
failed to meet expectations for several years running, and senior management was hard-pressed
to talk about much else. CFO Brenda Pedersen, eager to reverse the trend, had
begun advocating two strategic initiatives: a $1 billion investment in company growth
and the adoption of a more progressive corporate identity. At a restaurant overlooking
the Mississippi River, Pedersen hosted an informal meeting of company VPs to build
support; exchanges had been highly spirited, but no consensus had materialized. Then,
on her drive home from the restaurant, she received a call from Claire Meyer, VP of
Food Products, who had attended the dinner. Given the tone of the meeting, Pedersen
wasn’t surprised to get a call so soon, but what Meyer shared floored the CFO. “It just
came up on Twitter. My admin saw it and texted me. I’m not going to say I told you so.”
Meyer read her the tweet. “Van Muur buys 10% of Chestnut, seeks seats on board
and a new management direction.” Meyer filled in the details: based on filings earlier in
the day with the U.S. Securities and Exchange Commission, Rollo van Muur, a high-profile
activist investor, had quietly and unexpectedly purchased 10% of the company
and was asserting the right to two seats on the board. In addition, Van Muur was
recommending that the Instruments division be sold off “to keep the focus where it
Pedersen drove in shocked silence and processed the information while Meyer
waited patiently on the line, not sure what to expect. When Pedersen finally responded,
she fell back on humor: “Well, that’s one way to move the discussion along, but he could
have just come to dinner with us.” By the end of the night, she had spoken with CEO
Moss Thornton and organized a team of lawyers and finance staff to assess the
company’s options.
The Company
Chestnut Foods began in north Minneapolis in 1887, when 22-year-old Otto Chestnut
(born Otto Kestenbaum in Bavaria) opened a bakery that made lye rolls and pretzels,
then stumbled into success as a supplier of sandwiches to the St. Paul, Minneapolis, &
Manitoba Railway. Six years later, on a trip to Chicago, Illinois, to visit the
Columbian Exposition, Chestnut happened to come upon the Maxwell Street
Market, a vibrant melting-pot community of merchants of eastern European descent. At
the market, he had a chance meeting with Lem Vigoda and George Maszk, founders of
V&M Classic Foods, which provided a range of meat and fish products as well as
preserves and condiments. Through them he witnessed a nascent ad hoc distribution
system to neighborhood groceries in the rapidly growing city. A vision of wholesale
food production and distribution struck him, and he returned to Minneapolis determined
to realize it.
By 1920, as regional grocery chains had begun to materialize, Chestnut, since joined
by his sons Thomas and Andrew, had purchased V&M among other food businesses.
Their plan was for the expanded Chestnut to stock the regional grocery chains across the
upper Midwest, while also continuing to supply railroad dining cars and, beginning in
1921, a Chestnut chain of automats in Chicago and Detroit. Otto Chestnut died in 1927
at age 62, but the company was well positioned to weather the Great Depression; in
1935, the Chestnut brothers sold the automat division to Horn & Hardart, then used the
proceeds to purchase farmland in Florida and central California. In the postwar period,
as the supermarket model emerged, Chestnut grew with it, both organically and through
acquisition, going public in 1979. By 2013, the company was valued at $1.8 billion,
with annual profits of more than $130 million.
Chestnut sought to “provide hearty sustenance that gets you where you’re going.”
The firm had two main business segments: Food Products, which produced a broad
range of fresh, prepackaged, and processed foods for retail and food services, and
Instruments, which delivered systems and specialized equipment used in the processing
and packaging of food products. Instruments provided a variety of quality control and
automation services used within the company. The company took increasing pride in the
high quality of its manufacturing process and believed it to be an important
differentiator among both investors and consumers.
In recent years, Chestnut’s shares had failed to keep pace with either the overall
stock market or industry indexes for foods or machinery (see Exhibit 14.1). The
company’s credit rating with Standard & Poor’s had recently declined one notch to A−.
Securities analysts had remarked on the firm’s lackluster earnings growth, pointing to
increasing competition in the food industry due to shifting demands. One prominent Wall
Street analyst noted on his blog, “Chestnut has become as vulnerable to a hostile
takeover as a vacant umbrella on a hot beach.”
EXHIBIT 14.1 | Value of $1.00 Invested from January 2010 to December 2013 (weekly adjusted prices)
Data source: Yahoo! Finance and case writer data.
Food Products Division
The Food Products division provided a range of prepackaged and frozen products
related to the bread and sandwich market for both institutional food services and retail
grocery distribution throughout North America, and some limited distribution in parts of
Central and South America. Revenues for the segment had long been stable; the
company achieved an average annual growth rate of 2% during 2010 through 2013. In
2013, segment net operating profit after tax (NOPAT) and net assets were $88 million
and $1.4 billion, respectively. Looking to the foreseeable future, operating
margins were expected to be tight such that return on capital for the division
was expected to be 6.3%.
From its long association with the sandwich market prior to the advent of fast food,
through its expansion in the 1950s and 1960s, Chestnut had consistently retained
portions of the market for institutional ready-to-bake frozen bread dough, bread and
rolls, and ready-to-bake soft pretzels. Premium-quality versions of these were packaged
and sold in supermarkets under both the Chestnut brand and store brands.
Despite repeated efforts over the years to expand into other markets, the specialty
bread and pretzel market remained Chestnut’s primary driver of growth, reliant on
scale, multiple outlets and packaging formats, and product innovation, most recently
with Chestnut Classic Rapid-Rise Soft Pretzels, a newly formulated ready-to-bake
product that produced oven-fresh pretzels in 10 minutes, including preheating.
Particularly since the 1980s, after Chestnut had gone public and as demand for fresh
produce, diverse ethnic cuisines, and health-conscious snacks had begun to increase, the
firm made a series of moves designed to broaden its range of offerings, but the industry
remained highly competitive and the returns on those alternative products modest.
Nevertheless, customer surveys reflected consistently high ratings for product quality,
freshness, and flavor. Chestnut was frequently referred to in popular culture,
particularly in the northern states. Its well-known catchphrase “You’ll make it with
Chestnut,” was synonymous with warm, hearty bread for people on the move.
Instruments Division
Since its earliest days amid the bustling flour mills and rail lines of Minneapolis,
Minnesota, Chestnut’s management had maintained a shared value that technology,
properly harnessed, could improve quality and efficiency across production processes,
and over the years, the company had developed a strong expertise in food process
instruments. The success of companies such as Toledo Scale, founded in Toledo in 1901
before merging to become Columbus, Ohio-based, Swiss-owned Mettler-Toledo in
1989, was not lost on Otto Chestnut himself, although thoughts of such diversification
were repeatedly deferred. Yet as a more cyclical and diverse industry (with products
providing advanced capabilities to utilities, military and aerospace programs, and
industrial and residential applications in addition to food production), precision
instruments seemed to complement the food industry and to present opportunities for
growth overseas. In 1991, Chestnut capitalized on an opportunity to purchase
Consolidated Automation Systems, a medium-sized food-processing-instrument
equipment company based in Thunder Bay, Ontario, and the Instruments division was
born. This proved very successful and was followed by the purchase in 1997 of
Redhawk Laboratories, a small manufacturer of computer-controlled precision
equipment based in Troy, New York.
Although 20% of the division’s revenue was derived internally from Chestnut’s Food
Products division, the Instruments division produced equipment and automation support
for a wide range of food producers in North America. Demand, much of it from
overseas, was strong, but required substantial investments in R&D and fixed assets.
Instruments division sales had increased by nearly 20% in 2013. Segment
NOPAT was $46 million, and net assets were $600 million. The expected
return on capital for the division over the foreseeable future was 7.7%.
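The division returns cited for both segments follow directly from the reported figures, since return on capital is simply NOPAT divided by net assets:

```python
# Cross-check of the division returns from the segment figures in the case:
# Food Products: $88M NOPAT on $1.4B net assets; Instruments: $46M on $600M.

def return_on_capital(nopat, net_assets):
    """Return on capital = NOPAT / net assets (same currency units)."""
    return nopat / net_assets

food = return_on_capital(88, 1_400)
instruments = return_on_capital(46, 600)
print(f"Food Products: {food:.1%}")        # prints 6.3%
print(f"Instruments:   {instruments:.1%}")  # prints 7.7%
```

These reproduce the 6.3% and 7.7% expected returns quoted in the case.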
Recent Developments
Concerned above all else with the poor stock-price performance, and mindful of the
importance of scale to profitability in the precision instrument industry, Pedersen hoped
to sustain corporate growth opportunities by raising $1 billion to invest in the expansion
of the Instruments division. She had been delighted with the market’s strong interest for
the high-value-added offerings the division maintained and believed that funneling
investment in its direction was the way forward for Chestnut. She believed that the
7.7% expected returns for this division could be maintained with additional company
investment. She also believed that the tradition-laden company name failed to capture
the firm’s strategic direction and that the name “CF International” better reflected the
growth and modern dynamism envisioned by leadership.
At the dinner meeting, as over the past few weeks, her initiative had generated
partisan reactions from the company’s two divisions. Curiously, much of the discussion
at dinner focused on the rather pedestrian topic of the company hurdle rate. Meyer had
strongly contended from her perspective in Food Products that the two segments of the
business were different enough that they warranted separate hurdle rates; Rob Suchecki,
VP of Instruments, was ardent in his opposition.
SUCHECKI: Look, Claire, to investors, the firm is just a big black box. They hire us to
take care of what’s inside the box and judge us by the dividends coming
out of the box. Our job as managers should be to put their money where the
returns are best. Consistent with this reality, our company has a longstanding
policy of using a single common hurdle rate. If that hurdle rate
takes from an underperforming division and gives to a more profitable
division, isn’t that how it’s supposed to work? We’re all well aware that
investors won’t be satisfied with past profits.
MEYER: Rob, the question is how you define profitability. High-return investments
are not necessarily the best investments, and to be fair, our investors are
way more savvy than you are giving them credit for; they have a wide
range of information sources and analytic tools at their disposal and have a
firm grasp on what is going on inside the company. They appreciate the
risk and return of the different business units, and they adjust performance
expectations accordingly. So to this type of investor, different hurdle rates
for the different levels of risk reflect how things really are.
SUCHECKI: But Claire, multiple hurdle rates create all sorts of inequities that are
bound to create discord among the ranks. If you set the hurdle rate for
Food Products lower than the firm-wide hurdle rate, you’re just moving
your division’s goalposts closer to the ball. You haven’t improved
performance, you’ve only made it easier to score!
MEYER: You’ve got to realize, Rob, that we are playing in different leagues. Each
part of the business has to draw on capital differently, because the rules
for each unit are different. If Food Products was on its own, investors
would be happy with a lower return because Food Products’ risk is so
much lower. Stability has its perks. And likewise, if Food Products could
raise capital on its own, we’d surely get that capital at a cheaper rate.
SUCHECKI: Different leagues? The fact is that we don’t raise capital separately; we
raise it as a firm, based on our overall record. Our debt is Chestnut debt
and our equity is Chestnut equity. It’s a simple fact that investors expect
returns that beat our corporate cost of capital of 7.0%. It is only by
growing cash flow company-wide that investors are rewarded for their
risk capital. In fact, being diversified as a company most likely helps
reduce our borrowing costs, letting us borrow more as a unit than we
could separately.
Page 212
MEYER: Rob, you know very well the kind of problems that thinking creates. If
7.0% is always the hurdle, the company will end up overinvesting in high-risk
projects. Why? Because sensible, low-risk projects won’t tend to
clear the hurdle. Before long, the company will be packed with high-risk
projects, and 7.0% will no longer be enough to compensate investors for
the higher risk. By not accommodating multiple hurdle rates, we are setting
ourselves up for all sorts of perverse investment incentives. The Food
Products division is getting starved for capital, penalized for being a safer
bet, while the Instruments division is getting overfed, benefitting from a
false sense of security.
SUCHECKI: Hold on, I object! The reason Food Products is not getting capital is
because there’s no growth in your division. Instruments is coming on like
gangbusters. Why would investors want us to put additional capital into a
business that is barely keeping up with inflation?
MEYER: With a plot of risk versus return, the dashed line is our current corporate
hurdle rate based on the average risk of the company. The solid line is a
theoretical hurdle rate that adjusts for the risk of businesses within the
company. Food Products is marked with an “F.” It is expected to earn
6.3% on capital, which doesn’t clear the corporate hurdle rate, but if you
adjust for risk, it does clear it, and it is profitable! Instruments is the
opposite. It’s marked on the graph with an “I.” It can expect 7.7% returns,
which clears the corporate hurdle. But since it is inherently riskier, the
risk-adjusted hurdle rate exceeds 7.7%. Unless we are careful to adjust for
that risk, it remains a hidden cost, and we are fooling ourselves.
SUCHECKI: Claire, first, I believe it is pure speculation to claim that the risk-adjustment line you’ve sketched out is anywhere close to that steep. Second, even if you are theoretically correct, I believe there is practical wisdom in maintaining a single, simple, consistent, and understandable performance criterion. A single measure of the cost of money makes NPV results consistent, at least in economic terms. If Chestnut adopts multiple rates for discounting cash flows, the NPV and economic-profit calculations are going to lose their meaning, and business segments won’t be able to make meaningful comparisons.
At this point, pens and paper napkins were procured for Meyer, who presented the group with a diagram illustrating her argument (Figure 14.1) before continuing.
FIGURE 14.1 | Meyer’s diagram of constant versus risk-adjusted hurdle rates.
Source: Created by case writer.
Eventually, Pedersen managed to rein in the heated debate and redirect the conversation to matters that were less controversial.
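Meyer’s argument can be made concrete with a quick sketch. The 0.1% risk-free rate and 9% market risk premium appear in the case exhibits, and the 7.0% corporate hurdle, 6.3% Food Products return, and 7.7% Instruments return are from the dialogue; the division betas below are purely hypothetical illustrations chosen to make the mechanics visible.

```python
# Sketch of Meyer's risk-adjusted hurdle-rate argument. The risk-free rate
# and market risk premium are from the case exhibits; the division betas
# are hypothetical assumptions for illustration only.

RISK_FREE = 0.001        # 0.1%
MKT_PREMIUM = 0.09       # 9%
CORPORATE_HURDLE = 0.070 # Chestnut's single company-wide hurdle

def risk_adjusted_hurdle(beta: float) -> float:
    """CAPM-style hurdle: required return rises with systematic risk."""
    return RISK_FREE + beta * MKT_PREMIUM

divisions = {
    # name: (hypothetical beta, expected return on capital from the case)
    "Food Products": (0.60, 0.063),
    "Instruments":   (0.90, 0.077),
}

for name, (beta, expected) in divisions.items():
    adjusted = risk_adjusted_hurdle(beta)
    print(f"{name}: expected {expected:.1%}, "
          f"corporate hurdle {'clears' if expected > CORPORATE_HURDLE else 'fails'}, "
          f"risk-adjusted hurdle {adjusted:.1%} "
          f"({'clears' if expected > adjusted else 'fails'})")
```

With these assumed betas, Food Products fails the 7.0% corporate hurdle but clears its lower risk-adjusted hurdle, while Instruments clears 7.0% but falls short of its higher risk-adjusted hurdle, which is exactly the pattern in Meyer’s diagram.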
The Future of Chestnut
It had been quite a night. Pedersen realized that she didn’t have time to resolve all the issues before her in advance of Rollo van Muur’s attack on management, but any proposal she made needed to be clear on its merits. Her thoughts returned to the
discussion between the VPs. Was the historical Chestnut way of doing business as
defensible as Suchecki made it sound? Was Instruments underperforming, as Van Muur
and Meyer asserted? She knew that Van Muur’s purchases had been prompted by Chestnut’s depressed share price. In light of this development, weren’t her investment
and identity proposals all the more relevant?
EXHIBIT 14.2 | Estimation of WACC for Chestnut Foods (year-end 2013)
An alternative model that uses a market risk premium of 9% and a risk-free rate of 0.1% gives a similar cost-of-equity estimate.
EXHIBIT 14.3 | Capital Market Data, December 2013
Data source: Bloomberg, case writer estimates.
EXHIBIT 14.4 | Financial Data for Industry Comparables, December 2013 (dollar figures in millions)

*Identifies bond ratings that are estimated by case writer.
Data source: Bloomberg, Yahoo! Finance, Value Line, and case writer estimates.
Page 217
PART 4 Capital Budgeting and Resource Allocation
Page 219
CASE 15 Target Corporation
On November 14, 2006, Doug Scovanner, CFO of Target Corporation, was preparing
for the November meeting of the Capital Expenditure Committee (CEC). Scovanner was
one of five executive officers who were members of the CEC (Exhibit 15.1). On tap for
the 8:00 a.m. meeting the next morning were 10 projects representing nearly $300
million in capital-expenditure requests. With the fiscal year’s end approaching in
January, there was a need to determine which projects best fit Target’s future store
growth and capital-expenditure plans, with the knowledge that those plans would be shared early in 2007 with both the board and the investment community. In reviewing the
10 projects coming before the committee, it was clear to Scovanner that five of the
projects, representing about $200 million in requested capital, would demand the
greater part of the committee’s attention and discussion time during the meeting.
EXHIBIT 15.1 | Executive Officers and Capital Expenditure Committee Members
Source: Target Corporation, used with permission.
The CEC was keenly aware that Target had been a strong performing company in
part because of its successful investment decisions and continued growth. Moreover,
Target management was committed to continuing the company’s growth strategy of
opening approximately 100 new stores a year. Each investment decision would have
long-term implications for Target: an underperforming store would be a drag on
earnings and difficult to turn around without significant investments of time and money,
whereas a top-performing store would add value both financially and strategically for
years to come.
Retail Industry
The retail industry included a myriad of different companies offering similar product
lines (Exhibit 15.2). For example, Sears and JCPenney had extensive networks of
stores that offered a broad line of products, many of which were similar to Target’s
product lines. Because each retailer had a different strategy and a different customer
base, truly comparable stores were difficult to identify. Many investment analysts,
however, focused on Wal-Mart and Costco as important competitors for Target, although
for different reasons. Wal-Mart operated store formats similar to Target, and most
Target stores operated in trade areas where one or more Wal-Mart stores were
located. Wal-Mart and Target also carried merchandising assortments that overlapped on many of the same items in such areas as food, commodities, electronics,
toys, and sporting goods.
Costco, on the other hand, attracted a customer base that overlapped closely with Target’s core customers, but overlap between Costco and Target was less common with respect to trade area and merchandising assortment. Costco also differed from
Target in that it used a membership-fee format. Most of the sales of these companies
were in the broad categories of general merchandise and food. General merchandise
included electronics, entertainment, sporting goods, toys, apparel, accessories, home
furnishing, and décor, and food items included consumables ranging from apples to
EXHIBIT 15.2 | Retail Company Financial Information
Data Source: Yahoo! Finance and Value Line Investment Survey.
Wal-Mart had become the dominant player in the industry with operations located in
the United States, Argentina, Brazil, Canada, Puerto Rico, the United Kingdom, Central
America, Japan, and Mexico. Much of Wal-Mart’s success was attributed to its
“everyday low price” pricing strategy that was greeted with delight by consumers but
created severe challenges for local independent retailers who needed to remain
competitive. Wal-Mart sales had reached $309 billion in 2005 across 6,141 stores, with a market capitalization of $200 billion, compared with sales of $178 billion and 4,189 stores in 2000. In addition to growing its top line, Wal-Mart had been successful in
creating efficiency within the company and branching into product lines that offered
higher margins than many of its commodity-type products.
Costco provided discount pricing for its members in exchange for membership fees.
For fiscal 2005, these fees comprised 2.0% of total revenue and 72.8% of operating
income. Membership fees were such an important factor to Costco that an equity analyst
had coined a new price-to-membership-fee-income ratio metric for valuing the
company. By 2005, Costco’s sales had grown to $52.9 billion across its 433
warehouses, and its market capitalization had reached $21.8 billion. Over the previous
five years, sales excluding membership fees had experienced compound growth of
10.4%, while membership fees had grown 14.6%, making the fees a significant growth source and highly significant to operating income in a low-profit-margin business.
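The case figures imply just how thin Costco’s margins were. A back-of-the-envelope check, assuming the $52.9 billion sales figure excludes membership fees (an interpretation, not stated explicitly in the case):

```python
# Back-of-the-envelope check of Costco's fiscal 2005 figures from the case.
# Assumption: the $52.9 billion sales figure excludes membership fees, so
# total revenue = sales / (1 - fee share of revenue).

sales_ex_fees = 52.9e9          # case figure, USD
fee_share_of_revenue = 0.020    # fees were 2.0% of total revenue
fee_share_of_op_income = 0.728  # ...and 72.8% of operating income

total_revenue = sales_ex_fees / (1 - fee_share_of_revenue)
fee_income = fee_share_of_revenue * total_revenue
operating_income = fee_income / fee_share_of_op_income
operating_margin = operating_income / total_revenue

print(f"Implied fee income:       ${fee_income/1e9:.2f}B")
print(f"Implied operating income: ${operating_income/1e9:.2f}B")
print(f"Implied operating margin: {operating_margin:.1%}")
```

The implied operating margin of roughly 2.7% illustrates why analysts treated the membership-fee stream, rather than merchandise markup, as the key driver of Costco’s profitability.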
In order to attract shoppers, retailers tailored their product offerings, pricing, and
branding to specific customer segments. Segmentation of the customer population had
led to a variety of different strategies, ranging from price competition in Wal-Mart
stores to Target’s strategy of appealing to style-conscious consumers by offering unique
assortments of home and apparel items, while also pricing competitively with Wal-Mart
on items common to both stores. The intensity of competition among retailers had resulted in razor-thin margins, making every line item on the income statement an important consideration for all retailers.
The effects of tight margins were felt throughout the supply chain as retailers
constantly pressured their suppliers to accept lower prices. In addition, retailers used
off-shore sources as low-cost substitutes for their products and implemented methods
such as just-in-time inventory management, low-cost distribution networks, and high
sales per square foot to achieve operational efficiency. Retailers had found that
profit margins could also be enhanced by selling their own brands, or products
with exclusive labels that could be marketed to attract the more affluent customers in
search of a unique shopping experience.
Sales growth for retail companies stemmed from two main sources: creation of new
stores and organic growth through existing stores. New stores were expensive to build,
but were needed to access new markets and tap into a new pool of consumers that could
potentially represent high profit potential depending upon the competitive landscape.
Increasing the sales of existing stores was also an important source of growth and value.
If an existing store was operating profitably, it could be considered for renovation or
upgrading in order to increase sales volume. Or, if a store was not profitable,
management would consider it a candidate for closure.
Target Corporation
The Dayton Company opened the doors of the first Target store in 1962, in Roseville,
Minnesota. The Target name had intentionally been chosen to differentiate the new
discount retailer from the Dayton Company’s more upscale stores. The Target concept
flourished. In 1995, the first SuperTarget store opened in Omaha, Nebraska, and in
1999, the website was launched. By 2000, the parent company, Dayton Hudson, officially changed its name to Target Corporation.
By 2005, Target had become a major retailing powerhouse with $52.6 billion in
revenues from 1,397 stores in 47 states (Exhibit 15.3 and Exhibit 15.4). With sales of
$30 billion in 2000, the company had realized a 12.1% sales growth over the past five
years and had announced plans to continue its growth by opening approximately
100 stores per year in the United States in the foreseeable future. While Target
Corporation had never committed to expanding internationally, analysts had been
speculating that domestic growth alone would not be enough to sustain its historic
success. If Target continued its domestic growth strategy, most analysts expected capital
expenditures would continue at a level of 6% to 7% of revenues, which equated to
about $3.5 billion for fiscal year 2006.
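The growth and capital-spending figures quoted in the case can be cross-checked with simple arithmetic. The compound annual growth rate implied by the rounded sales figures is close to the 12.1% the case quotes, and 6% to 7% of revenue brackets the roughly $3.5 billion of expected fiscal 2006 capital expenditure:

```python
# Rough check of Target's growth and capital-spending figures from the case.
# Sales grew from about $30B (2000) to $52.6B (2005); the case quotes 12.1%
# growth, and the rate implied by these rounded figures is close to that.

sales_2000 = 30.0e9
sales_2005 = 52.6e9
years = 5

cagr = (sales_2005 / sales_2000) ** (1 / years) - 1
print(f"Implied sales CAGR 2000-2005: {cagr:.1%}")  # ~11.9%, vs. 12.1% quoted

# Capex of 6%-7% of revenue on ~$52.6B of sales brackets the roughly
# $3.5B expected for fiscal year 2006.
capex_low, capex_high = 0.06 * sales_2005, 0.07 * sales_2005
print(f"Implied capex range: ${capex_low/1e9:.1f}B-${capex_high/1e9:.1f}B")
```

The small gap between the implied 11.9% and the quoted 12.1% is consistent with rounding in the sales figures.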
EXHIBIT 15.3 | Target Income Statements ($ millions)
Data source: Target Corporation annual reports.
EXHIBIT 15.4 | Balance Sheet Statements ($ millions)
Data source: Target Corporation annual reports.
Page 222
In contrast with Wal-Mart’s focus on low prices, Target’s strategy was to consider
the customer’s shopping experience as a whole. Target referred to its customers as
guests and consistently strived to support the slogan, “Expect more. Pay less.” Target
focused on creating a shopping experience that appealed to the profile of its “core
guest”: a college-educated woman with children at home who was more affluent than
the typical Wal-Mart customer. This shopping experience was created by emphasizing a
store décor that gave just the right shopping ambience. The company had been highly
successful at promoting its brand awareness with large advertising campaigns; its
advertising expenses for fiscal 2005 were $1.0 billion or about 2.0% of sales and
26.6% of operating profit. In comparison, Wal-Mart’s advertising dollars amounted to
0.5% of sales and 9.2% of operating income. Consistent advertising spending
resulted in the Target bull’s-eye logo’s (Exhibit 15.5) being ranked among the most
recognized corporate logos in the United States, ahead of the Nike “swoosh.”
As an additional enhancement to the customer shopping experience, Target offered
credit to qualified customers through its REDcards: Target Visa Credit Card and Target
Credit Card. The credit-card business accounted for 14.9% of Target’s operating
earnings and was designed to be integrated with the company’s overall strategy by
focusing only on customers who visited Target stores.
Capital-Expenditure Approval Process
The Capital Expenditure Committee was composed of a team of top executives that met
monthly to review all capital project requests (CPRs) in excess of $100,000. CPRs
were either approved by the CEC, or in the case of projects larger than $50 million,
required approval from the board of directors. Project proposals varied widely, ranging from remodeling, relocating, rebuilding, or closing an existing store to building a new store. A typical CEC meeting involved the review of 10 to 15 CPRs. All of the
proposals were considered economically attractive, as any CPRs with questionable economics were normally rejected at the lower levels of review.
EXHIBIT 15.5 | Target Logo
Source: Target Corporation, used with permission.
In the rare instance
when a project with a negative net present value (NPV) reached the CEC, the committee
was asked to consider the project in light of its strategic importance to the company.
CEC meetings lasted several hours as each of the projects received careful scrutiny
by the committee members. The process purposefully was designed to be rigorous
because the CEC recognized that capital investment could have significant impact on the
short-term and long-term profitability of the company. In addition to the large amount of
capital at stake, approvals and denials also had the potential to set precedents that
would affect future decisions. For example, the committee might choose to reject a
remodeling proposal for a store with a positive NPV, if the investment amount requested
was much higher than normal and therefore might create a troublesome precedent for all
subsequent remodel requests for similar stores. Despite how much the projects differed,
the committee was normally able to reach a consensus decision for the vast majority of
them. Occasionally, however, a project led to such a high degree of disagreement within the committee that the CEO made the final call.
Projects typically required 12 to 24 months of development prior to being
forwarded to the CEC for consideration. In the case of new store proposals, which
represented the majority of the CPRs, a real-estate manager assigned to that geographic
region was responsible for the proposal from inception to completion and also for
reviewing and presenting the proposal details. The pre-CPR work required a certain
amount of expenditures that were not recoverable if the project were ultimately rejected by the CEC. More important than these expenditures, however, were the “emotional sunk
costs” for the real-estate managers who believed strongly in the merits of their
proposals and felt significant disappointment if any project was not approved.
The committee considered several factors in determining whether to accept
or reject a project. An overarching objective was to meet the corporate goal of adding
about 100 stores a year while maintaining a positive brand image. Projects also needed
to meet a variety of financial objectives, starting with providing a suitable financial
return as measured by discounted cash-flow metrics: NPV and IRR (internal rate of
return). Other financial considerations included projected profit and earnings per share
impacts, total investment size, impact on sales of other nearby Target stores, and
sensitivity of NPV and IRR to sales variations. Projected sales were determined based
on economic trends and demographic shifts but also considered the risks involved with
the entrance of new competitors and competition from online retailers. And lastly, the
committee attempted to keep the project approvals within the capital budget for the year.
If projects were approved in excess of the budgeted amount, Target would likely need to
borrow money to fund the shortfall. Adding debt unexpectedly to the balance sheet could
raise questions from equity analysts as to the increased risk to the shareholders as well
as to the ability of management to accurately project the company’s funding needs.
Other considerations included tax and real-estate incentives provided by local
communities as well as area demographics. Target typically purchased the properties
where it built stores, although leasing was considered on occasion. Population growth
and affluent communities were attractive to Target, but these factors also invited
competition from other retailers. In some cases, new Target stores were strategically
located to block other retailers despite marginal short-term returns.
When deciding whether to open a new store, the CEC was often asked to consider
alternative store formats. For example, the most widely used format was the 2004
version of a Target store prototype called P04, which occupied 125,000 square feet,
whereas a SuperTarget format occupied an additional 50,000 square feet to
accommodate a full grocery assortment. The desirability of one format over another
often centered on whether a store was expected to eventually be upgraded. Smaller
stores often offered a higher NPV; but the NPV estimate did not consider the effect of
future upgrades or expansions that would be required if the surrounding communities
grew, nor the advantage of opening a larger store in an area where it could serve the
purpose of blocking competitors from opening stores nearby.
The committee members were provided with a capital-project request “dashboard”
for each project that summarized the critical inputs and assumptions used for the NPV
and IRR calculations. The template represented the summary sheet for an elaborate
discounted cash flow model. For example, the analysis of a new store included
incremental cash flow projections for 60 years over which time the model included a
remodeling of the store every 10 years. Exhibit 15.6 provides an example of a
dashboard with a detailed explanation of the “Store Sensitivities” section. The example dashboard shows incremental sales estimates, which were computed as the total sales expected for the new store less the sales cannibalized from Target stores already located in the general vicinity. Sales estimates were made by the Research and Planning (R&P) group, which used demographic and other data to make site-specific forecasts. The resulting NPV and IRR metrics were divided between value created by store sales and value created by credit-card activity. NPV calculations used a 9.0% discount rate for store cash flows and a 4.0% discount rate for credit-card cash flows. The different discount rates were chosen to represent the different costs of capital for funding store operations versus funding credit-card operations.
EXHIBIT 15.6 | Example of a Capital Project Request Dashboard
Source: Target Corporation, used with permission.
The dashboards also presented a variety of demographic information, investment-cost details, and sensitivity analyses. An important sensitivity feature was the
comparison of the project’s NPV and IRR to the prototype. For example, the P04 store
had an NPV of about $10 million and an IRR of 13%. The sensitivity calculations
answered the question of how much a certain cost or revenue item needed to change in
order for the project to achieve the same NPV or IRR that would be experienced for the
typical P04 or SuperTarget store.
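The split-rate NPV mechanics described above can be sketched briefly. The 9.0% store rate and 4.0% credit-card rate are from the case; the cash-flow numbers below are hypothetical stand-ins (the real dashboards modeled 60 years of incremental cash flows with a remodel every 10 years):

```python
# Minimal sketch of Target's split-rate NPV, per the case: store cash
# flows discounted at 9.0% and credit-card cash flows at 4.0%. The cash
# flows below are hypothetical illustrations, not case data.

def npv(rate, cash_flows):
    """NPV of cash flows indexed by year, with year 0 undiscounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project ($ thousands): year-0 outlay, then annual inflows.
store_cfs = [-20_000] + [2_400] * 20   # store operations
card_cfs = [0] + [150] * 20            # credit-card contribution

store_npv = npv(0.09, store_cfs)
card_npv = npv(0.04, card_cfs)
print(f"Store NPV: {store_npv:,.0f}")
print(f"Card NPV:  {card_npv:,.0f}")
print(f"Total NPV: {store_npv + card_npv:,.0f}")
```

Because the credit-card stream is discounted at a lower rate, it can contribute a disproportionate share of total NPV, which matters later for projects such as Goldie’s Square, whose NPV turns negative without the credit-card contribution.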
The November Meeting
Of the 10 projects under consideration for the November CEC meeting, Doug Scovanner
recognized that five would be easily accepted, but that the remaining five CPRs were
likely to be difficult choices for the committee. These projects included four new store
openings (Gopher Place, Whalen Court, The Barn, and Goldie’s Square) and one
remodeling of an existing store into a SuperTarget format (Stadium Remodel).
Exhibit 15.7 contains a summary of the five projects, and Exhibit 15.8 contains the CPR
dashboards for the individual projects.
EXHIBIT 15.7 | Economic Analysis Summary of Project Proposals
Source: Target Corporation, used with permission.
EXHIBIT 15.8 | Individual Capital Project Request “Dashboards”
Source: Target Corporation, used with permission.
As was normally the case, all five of the CPRs had positive NPVs, but Scovanner
wondered if the projected NPVs were high enough to justify the required investment.
Further, with stiff competition from other large retailers looking to get footholds in
major growth areas, how much consideration should be given to short-term versus long-term sales opportunities? For example, Whalen Court represented a massive investment
with relatively uncertain sales returns. Should Scovanner take the stance that the CEC
should worry less about Whalen Court’s uncertain sales and focus more on the project
as a means to increase Target’s brand awareness in an area with dense foot traffic and
high-fashion appeal? Goldie’s Square represented a more typical investment level of
$24 million for a SuperTarget. The NPV, however, was small at $317,000, well below
the expected NPV of a SuperTarget prototype, and would be negative without the value
contribution of credit-card sales.
As CFO, Scovanner was also aware that Target shareholders had experienced a
lackluster year in 2006, given that Target’s stock price had remained essentially flat
(Exhibit 15.9). Stock analysts were generally pleased with Target’s stated growth
policy and were looking for decisions from management regarding investments that
were consistent with the company maintaining its growth trajectory. In that regard,
Scovanner recognized that each of the projects represented a growth opportunity for
Target. The question, however, was whether capital was better spent on one project or
another to create the most value and the most growth for Target shareholders. Thus
Scovanner believed that he needed to rank the five projects in order to be able to
recommend which ones to keep and which ones to reject during the CEC meeting the
next day.
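Scovanner’s ranking problem illustrates a standard capital-rationing point: ranking by raw NPV and ranking by NPV per dollar invested (a profitability-index-style measure) can disagree. Goldie’s Square’s figures are from the case; the other two entries are hypothetical stand-ins for illustration:

```python
# Ranking projects under a capital constraint. Raw NPV and NPV per dollar
# invested can produce different orderings. Goldie's Square figures are
# from the case; "Project A" and "Project B" are hypothetical.

projects = {
    # name: (investment $M, NPV $M)
    "Goldie's Square": (24.0, 0.317),   # case figures
    "Project A":       (10.0, 2.500),   # hypothetical
    "Project B":       (120.0, 25.900), # hypothetical
}

by_npv = sorted(projects, key=lambda p: projects[p][1], reverse=True)
by_ratio = sorted(projects, key=lambda p: projects[p][1] / projects[p][0],
                  reverse=True)

print("Ranked by NPV:           ", by_npv)
print("Ranked by NPV/investment:", by_ratio)
```

In this hypothetical, Project B leads on total NPV while Project A leads on value created per dollar of scarce capital; Goldie’s Square ranks last on both measures, which is why Scovanner questioned whether its small positive NPV justified $24 million of investment.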
EXHIBIT 15.9 | Stock Price Performance 2002–06
Data source: Yahoo! Finance.
Page 239
CASE 16 The Investment Detective
The essence of capital budgeting and resource allocation is a search for good
investments in which to place the firm’s capital. The process can be simple when
viewed in purely mechanical terms, but a number of subtle issues can obscure the best
investment choices. The capital-budgeting analyst, therefore, is necessarily a detective
who must winnow bad evidence from good. Much of the challenge is in knowing what
quantitative analysis to generate in the first place.
Suppose you are a new capital-budgeting analyst for a company considering
investments in the eight projects listed in Exhibit 16.1. The CFO of your company has
asked you to rank the projects and recommend the “four best” that the company should accept.
EXHIBIT 16.1 | Projects’ Free Cash Flows (dollars in thousands)
In this assignment, only the quantitative considerations are relevant. No other
project characteristics are deciding factors in the selection, except that management has
determined that projects 7 and 8 are mutually exclusive.
All the projects require the same initial investment, $2 million. Moreover, all are
believed to be of the same risk class. The firm’s weighted average cost of capital has
never been estimated. In the past, analysts have simply assumed that 10% was an
appropriate discount rate (although certain officers of the company have recently
asserted that the discount rate should be much higher).
*Indicates year in which payback was accomplished.
To stimulate your analysis, consider the following questions:
1. Can you rank the projects simply by inspecting the cash flows?
2. What criteria might you use to rank the projects? Which quantitative ranking methods
are better? Why?
3. What is the ranking you found by using quantitative methods? Does this ranking differ
from the ranking obtained by simple inspection of the cash flows?
4. What kinds of real investment projects have cash flows similar to those in
Exhibit 16.1?
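The quantitative ranking methods the questions point toward can be sketched as follows. The $2 million outlay and 10% discount rate are from the case; the single cash-flow stream below is a hypothetical project in the same format, not one of the eight actual Exhibit 16.1 projects:

```python
# Sketch of the standard ranking metrics (NPV, IRR, payback). The cash
# flows below are a hypothetical project in the case's format ($ thousands,
# $2 million initial outlay), not one of the eight actual projects.

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection; assumes NPV crosses zero once on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback(cash_flows):
    """First year in which cumulative undiscounted cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None

flows = [-2000, 1000, 1000, 1000]  # hypothetical, $ thousands
print(f"NPV @10%: {npv(0.10, flows):.1f}")
print(f"IRR:      {irr(flows):.1%}")
print(f"Payback:  {payback(flows)} years")
```

Applying these three functions to each column of Exhibit 16.1 makes the case’s central lesson visible: NPV, IRR, and payback frequently rank the same set of projects differently.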
Page 241
CASE 17 Centennial Pharmaceutical Corporation
In early 2014, the board of directors of Centennial Pharmaceutical Corporation (CPC)
was debating its next move regarding an earnout plan (EP) that was currently in effect
for managing one of its business units. The EP had originally been structured for
CloneTech management as part of the consideration for the company when CPC bought
CloneTech in 2013. The EP served as a performance incentive to CloneTech’s
management by stipulating bonus payments to them based on the level of CloneTech’s
earnings during its first four years as a CPC business unit. CPC’s board of directors
understood the effectiveness of an earnout when acquiring a closely held company. The
bonus payments motivated the newly acquired managers to work as efficiently as
possible with CPC management and therefore create the most value for themselves and
CPC shareholders.
But the EP had recently become infeasible due to CPC’s acquisition of PharmaNew.
PharmaNew had several lines of business, one of which was Strategic Research
Projects (SRP), which performed the same cloning research as CloneTech. Part of the
synergies that CPC hoped to realize from PharmaNew would occur by combining the
CloneTech and SRP operations. Once that combination was completed, however, it
would no longer be possible to directly measure the performance of CloneTech as a
separate entity. If the earnings could not be directly attributable to CloneTech, the
original EP could not continue to be in force. Therefore, CPC’s board had constructed a
revised EP based on the joint performance of CloneTech and SRP and had presented it
to CloneTech management for their consideration. Unfortunately, the proposed revision
had not been well received by CloneTech management, who felt that their economic position had been materially compromised by the uncertainty of SRP’s future performance.
Page 242
The Pharmaceutical Industry
The key challenge for a pharmaceutical was to maintain a pipeline of new drugs to bring
to the market. A large part of a pharma company’s profits was due to the profits of new
drugs, which were protected from being copied for 20 years by U.S. patent law. Once a
drug lost patent protection and became generic, many of the company’s competitors
rushed to produce the drug under their own brand names. Such competition usually
drove the price down close to the cost of production, leaving the original producer and
founder of the drug with a dramatically lower volume and margin. Thus, it was either feast or famine for a pharmaceutical: large profits when it first created a new drug, or little to no profits when the drug lost its patent protection. Patents, therefore,
served as incentives for pharmas to invest in research that produced lucrative new
drugs. Consumers were rewarded with new treatments for ailments, but they had to pay
“monopoly rents” to the founder of the drug as long as it enjoyed patent protection. In
the long term, however, the low cost of production for a drug eventually rewarded the
consumer with the choice of many brands at low prices once the patent protection expired.
The challenge for the U.S. drug industry was complicated further by the need to
receive approval from the U.S. Food and Drug Administration (FDA) for any drug before it
could be taken to the market. The approval process often took years to complete and
sometimes became an insurmountable hurdle that prevented the drug from being
marketed in the United States. Thus, the drug business was fraught with chances: the
chance of discovering a viable drug, the chance of getting FDA approval, and the chance
that a competitor might be first to make the same discovery. To combat the odds of not
finding the next blockbuster drug, such as Viagra or Celebrex, companies were forced to maintain a constant research effort with a wide spectrum of potential drug
breakthroughs. Only a small percentage of research dollars were directly responsible
for a successful new drug. With so few drugs reaching the market, it was all the more
important for the company to have a large portfolio of potential new drugs in the
pipeline at all times. Accordingly, stock analysts looked carefully at R&D budgets and
announcements of possible new breakthroughs by pharmaceuticals in order to judge
their market value.
In 2012, CPC was a USD20 billion U.S.-based company with 55% of its revenues
coming from North America, 25% from Europe, 15% from Asia, and the remaining 5%
from 20 countries around the world. The company was growing in North America, but
most of its growth was coming from its international markets. For the past several years,
CPC had enjoyed strong growth in sales and profits, which had resulted in an A debt
rating from Moody’s. In the 1990s, however, CPC had run into financial problems
resulting from class-action lawsuits in the United States related to the side effects of
several of its drugs. In an effort to cut costs and survive the cash-flow losses,
management decided to reduce R&D for several years, gambling that they
could pick the right areas on which to focus their research efforts and, therefore, not
overly compromise the firm’s future. Although CPC returned to profitability and settled
all the lawsuits for considerably less than expected, the reduction of its R&D budget
proved to be a flawed strategy.
In 2002, in an attempt to buy the research that CPC had not been producing
internally, management began a strategy of acquiring specialized biochemical research
companies, including CloneTech, which was a small privately owned company in
Belgium. Like CloneTech, most of the companies acquired by CPC had five to ten
principal owners who were scientists in charge of the company’s research. All the firms
were purchased using a combination of CPC shares and an earnout contract. CPC’s
board preferred to use an earnout structure for acquisitions that involved a promising,
but as yet unproven product. By providing for specific payments to the owner/managers
as a function of profit targets, an earnout served to keep the senior managers/scientists actively involved in the company and aware of its full value potential. In this case, as the
profits of the newly formed CPC subsidiary met the various profit targets, the
CloneTech management team would receive a percentage of the performance pool.
The Earnout Plan
Exhibit 17.1 presents the original earnout plan (EP) utilized in the CloneTech
acquisition. The EP stipulated an annual bonus schedule (ABS), which defined bonuses
to management based on the earnings levels achieved for each of CloneTech’s first four
years as part of CPC. For the first year (2013), management could receive 100% of the
2013 bonus (EUR2 million) by meeting or exceeding target earnings of EUR10 million.
Lower bonuses would be realized for earnings less than EUR10 million: a bonus of
EUR1.5 million (75% of EUR2 million) was payable for earnings between EUR9
million and EUR10 million, and EUR1.0 million (50% of EUR2 million) would be
distributed for earnings between EUR8 million and EUR9 million. Failure to reach an
earnings level of EUR8 million in the first year would mean no bonuses for management
in 2013.
EXHIBIT 17.1 | CloneTech’s Original Earnout Program (in millions of euros)1
Page 244
In addition to the ABS, the EP recognized the risky nature of biotech research with
the inclusion of a multiyear bonus schedule (MBS). The MBS was a cumulative-earnings feature that allowed CloneTech management to receive bonus payments, in
years 2, 3, and 4, which had not been fully distributed in one or more of the prior years.
For example, CloneTech management could receive a full bonus of EUR2 million in
2013 and 2014 by earning EUR11 million in 2013 and EUR13 million in 2014. The full
100% bonuses would be distributed each year because earnings exceeded each year’s
respective target amounts in the ABS. The same bonus dollars, however, could also be
realized if CloneTech earned EUR7 million in 2013 and EUR15 million in
2014. Under this scenario, management would receive no ABS bonus for 2013
but would earn an ABS bonus of EUR2 million for 2014 plus a “cumulative bonus” of
an additional EUR2 million, based on the MBS, for the EUR22 million of cumulative
earnings for 2013 and 2014.
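To make the schedule mechanics concrete, the following sketch reproduces the bonus logic described above. The EUR1 million tier widths at 75% and 50% come from the text; the later-year targets (20% annual growth on the EUR10 million first-year target) and the all-or-nothing cumulative test are simplifying assumptions, since the full schedule in Exhibit 17.1 is not reproduced here.

```python
# Illustrative sketch of the earnout mechanics described in the case.
# Assumed targets: EUR10M in year 1, growing 20% per year thereafter
# (10, 12, 14.4, 17.28 -- note these sum to roughly the EUR53.7M
# cumulative figure mentioned in the text).

def abs_bonus(earnings, target, full_bonus):
    """Annual Bonus Schedule: full bonus at/above target, stepped down below."""
    if earnings >= target:
        return full_bonus
    if earnings >= target - 1:      # within EUR1M of target -> 75%
        return 0.75 * full_bonus
    if earnings >= target - 2:      # within EUR2M of target -> 50%
        return 0.50 * full_bonus
    return 0.0                      # below the floor -> no bonus

def total_bonus(earnings_by_year, targets, full_bonus=2.0):
    """ABS payments each year, topped up by the cumulative (MBS) feature."""
    paid = 0.0
    for yr, (e, t) in enumerate(zip(earnings_by_year, targets), start=1):
        annual = abs_bonus(e, t, full_bonus)
        # Simplified MBS: if cumulative earnings meet cumulative targets,
        # pay the shortfall between the cumulative bonus potential and
        # what has been distributed so far.
        on_track = sum(earnings_by_year[:yr]) >= sum(targets[:yr])
        cum_potential = full_bonus * yr if on_track else 0.0
        paid += annual + max(0.0, cum_potential - (paid + annual))
    return paid

# Case example: EUR7M in 2013 (below the EUR8M floor, no ABS bonus),
# then EUR15M in 2014 -> EUR2M ABS plus a EUR2M cumulative top-up.
print(total_bonus([7, 15], [10, 12]))
```

Under these assumptions the EUR11M/EUR13M path and the EUR7M/EUR15M path both pay EUR4 million over the first two years, exactly as the text describes.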
Source: Created by author.
Bonus payments were made each year based on the Annual Bonus Schedule and, if appropriate, the Multiyear Bonus Schedule. MBS payments were distributed when the sum of all annual payments to date were less than the Cumulative Bonus Potential for that year.
The flexibility of the EP was such that CloneTech management could potentially earn
the entire EUR8 million bonus pool by reporting zero earnings for the first three years
and EUR53.7 million in the last year. This facet was important because CloneTech had
earned only EUR7 million in 2013, which was below the minimum threshold for bonus
distributions. Despite having “underperformed” in 2013, CloneTech managers were
optimistic that the entire bonus pool remained within reach once the full impact of their
research was reflected in future profits. Moreover, their optimism was buoyed by the
fact that half of the profits in 2013 had been earned in the fourth quarter alone, which
suggested the possibility of a turnaround in the company’s performance for the following
years. Like most biotechs, CloneTech’s historical profits varied substantially, so
predicting future earnings was highly problematic.
As CloneTech was finishing its first year as an acquired company, CPC purchased
another pharmaceutical company, PharmaNew. PharmaNew was a German-based
company with a business unit called Strategic Research Projects (SRP), which was
doing the exact same cloning research as CloneTech. CPC believed that much of the
value in buying PharmaNew would come from moving CloneTech to the SRP facilities
in Hamburg to allow the two groups of scientists to work together. The newly combined
business unit would create significant cost savings and allow the combined labs to
achieve faster and more productive results. Once the two entities were combined,
however, it would no longer be possible to measure CloneTech’s performance as a
separate entity for purposes of the earnout agreement. According to the original EP, in
the event that CloneTech was consolidated with another business unit, CPC was
required to make a good-faith adjustment to the earnout schedule in order to avoid
compromising the bonus incentives of CloneTech management.
Exhibit 17.2 presents CPC’s revised EP that had been proposed to CloneTech
management. The revised EP retained many of the basic features of the original EP. For
example, bonus levels were paid according to gradations of target earnings each year,
no bonuses were paid below the minimum-earnings level, and a cumulative-earnings
feature remained in place. The proposal, however, also had a number of important
differences from the original EP. In particular, all the earnings targets had been adjusted
by adding SRP’s expected earnings to CloneTech’s earnings targets for the remaining
three years. The earnings numbers added to CloneTech’s targets were exactly the
numbers CPC had used to determine the value of SRP as part of the PharmaNew
acquisition. Likewise, the target-earnings figures used in the original EP were exactly
the earnings numbers used by CPC to determine the fair price to pay for CloneTech the
previous year. Because of the similarities of their research, CPC’s investment bankers
predicted that both entities would experience 20% earnings growth.
EXHIBIT 17.2 | CloneTech’s Revised Earnout Program (in millions of euros)
Source: Created by author.
Bonus payments were made each year based on the Annual Bonus Schedule and, if appropriate, the Multiyear Bonus Schedule. MBS payments were distributed when the sum of all annual payments to date were less than the Cumulative Bonus Potential for year 4. In addition to bonus payments, guaranteed payments of EUR1 million, EUR1 million, and EUR2 million were paid at the end of years 2, 3, and 4, respectively.
Perhaps the most important difference of the revised EP was CPC’s
proposal to convert EUR4 million of the bonus pool into a series of guaranteed
payments over the remaining three years: EUR1 million in year 2, EUR1 million in year
3, and EUR2 million in year 4. Therefore, regardless of CloneTech or SRP’s
performance, CloneTech management was guaranteed to receive EUR4 million over the
life of the revised EP. But in order to realize the full EUR8 million, the combined
earnings of CloneTech and SRP together would have to reach the newly defined
earnings targets.
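The guaranteed piece of the revised EP can be valued separately from the performance-contingent piece. The payment amounts and dates come from the text; the flat discount rate below is a placeholder, since the actual interest-rate data appear in Exhibit 17.3, which is not reproduced here.

```python
# Minimal PV sketch of the revised EP's guaranteed payments: EUR1M,
# EUR1M, and EUR2M arriving one, two, and three years after the
# January 2014 valuation date (ends of years 2, 3, and 4 of the plan).
# The flat rate r is an assumption standing in for Exhibit 17.3's rates.
def pv_guaranteed(r, payments=(1.0, 1.0, 2.0)):
    return sum(p / (1 + r) ** t for t, p in enumerate(payments, start=1))

print(round(pv_guaranteed(0.01), 3))   # e.g., at a flat 1%: 3.912
```

At any positive rate the guarantee is worth somewhat less than its EUR4 million face amount, which is the starting point for comparing the two contracts.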
The negative reaction of CloneTech management toward the revised EP surprised
CPC’s board of directors. The board thought they had made a good faith effort to
preserve, if not improve, the economic value of its original agreement with CloneTech
management; however, the board had not done a formal valuation of the original and
revised EPs. To do so required a discounted-cash-flow analysis for both contracts
(using the current interest-rate data in Exhibit 17.3).
EXHIBIT 17.3 | Capital-Market Conditions as of January 6, 2014
Data sources: Datastream and “Statistics,” European Central Bank, January 14, 2016 (accessed May 31, 2016).
CASE 18 Worldwide Paper Company
In January 2016, Bob Prescott, the controller for the Blue Ridge Mill, was considering
the addition of a new on-site longwood woodyard. The addition would have two
primary benefits: to eliminate the need to purchase shortwood from an outside supplier
and create the opportunity to sell shortwood on the open market as a new market for
Worldwide Paper Company (WPC). Now the new woodyard would allow the Blue
Ridge Mill not only to reduce its operating costs but also to increase its revenues. The
proposed woodyard utilized new technology that allowed tree-length logs, called
longwood, to be processed directly, whereas the current process required shortwood,
which had to be purchased from the Shenandoah Mill. This nearby mill, owned by a
competitor, had excess capacity that allowed it to produce more shortwood than it
needed for its own pulp production. The excess was sold to several different mills,
including the Blue Ridge Mill. Thus adding the new longwood equipment would mean
that Prescott would no longer need to use the Shenandoah Mill as a shortwood supplier
and that the Blue Ridge Mill would instead compete with the Shenandoah Mill by
selling on the shortwood market. The question for Prescott was whether these expected
benefits were enough to justify the $18 million capital outlay plus the incremental
investment in working capital over the six-year life of the investment.
Construction would start within a few months, and the investment outlay would be
spent over two calendar years: $16 million in 2016 and the remaining $2 million in
2017. When the new woodyard began operating in 2017, it would significantly reduce
the operating costs of the mill. These operating savings would come mostly from the
difference in the cost of producing shortwood on-site versus buying it on the open
market and were estimated to be $2.0 million for 2017 and $3.5 million per year
thereafter.
Prescott also planned on taking advantage of the excess production capacity
afforded by the new facility by selling shortwood on the open market as soon as
possible. For 2017, he expected to show revenues of approximately $4 million, as the
facility came on-line and began to break into the new market. He expected shortwood
sales to reach $10 million in 2018 and continue at the $10 million level through 2022.
Prescott estimated that the cost of goods sold (before including depreciation
expenses) would be 75% of revenues, and SG&A would be 5% of revenues.
In addition to the capital outlay of $18 million, the increased revenues would
necessitate higher levels of inventories and accounts receivable. The total working
capital would average 10% of annual revenues. Therefore the amount of working
capital investment each year would equal 10% of incremental sales for the year. At the
end of the life of the equipment, in 2022, all the net working capital on the books would
be recoverable at cost, whereas only 10% or $1.8 million (before taxes) of the capital
investment would be recoverable.
Taxes would be paid at a 40% rate, and depreciation was calculated on a straight-line
basis over the six-year life, with zero salvage. WPC accountants had told Prescott
that depreciation charges could not begin until 2017, when all the $18 million had been
spent, and the machinery was in service.
Prescott was conflicted about how to treat inflation in his analysis. He was
reasonably confident that his estimates of revenues and costs for 2016 and 2017
reflected the dollar amounts that WPC would most likely experience during those years.
The capital outlays were mostly contracted costs and therefore were highly reliable
estimates. The expected shortwood revenue figure of $4.0 million had been based on a
careful analysis of the shortwood market that included a conservative estimate of the
Blue Ridge Mill’s share of the market plus the expected market price of shortwood,
taking into account the impact of Blue Ridge Mill as a new competitor in the market.
Because he was unsure of how the operating costs and the price of shortwood would be
impacted by inflation after 2017, Prescott decided not to include it in his analysis.
Therefore the dollar estimates for 2018 and beyond were based on the same costs and
prices per ton used in 2017. Prescott did not consider the omission critical to the final
decision because he expected the increase in operating costs caused by inflation would
be mostly offset by the increase in revenues associated with the rise in the price of
shortwood.
WPC had a company policy to use 10% as the hurdle rate for such investment
opportunities. The hurdle rate was based on a study of the company’s cost of capital
conducted 10 years ago. Prescott was uneasy using an outdated figure for a discount
rate, particularly because it was computed when 30-year Treasury bonds were yielding
4.7%, whereas currently they were yielding less than 3% (Exhibit 18.1).
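The figures above are enough to sketch the project's free cash flows. The sketch below is one reading of the case, not the case's own solution: it assumes the $3.5 million of savings run from 2018 through 2022, that working capital is invested in the year the related sales occur, that the 2016 outlay sits at time zero, and that the $1.8 million salvage is taxed at the 40% rate (book value is zero by 2022).

```python
# Hedged sketch of Blue Ridge Mill's incremental free cash flows,
# 2016-2022, built from the figures in the case (all in $ millions).
TAX = 0.40
revenues = {2017: 4.0, **{y: 10.0 for y in range(2018, 2023)}}
savings  = {2017: 2.0, **{y: 3.5  for y in range(2018, 2023)}}
capex    = {2016: 16.0, 2017: 2.0}
dep = 18.0 / 6                      # straight-line over six years, from 2017

fcf = {}
prev_rev = 0.0
for year in range(2016, 2023):
    rev = revenues.get(year, 0.0)
    d = dep if year >= 2017 else 0.0
    # COGS at 75% and SG&A at 5% of revenues leave a 20% margin on sales.
    ebit = rev * (1 - 0.75 - 0.05) + savings.get(year, 0.0) - d
    nopat = ebit * (1 - TAX)
    d_nwc = 0.10 * (rev - prev_rev)            # NWC tracks revenue changes
    cash = nopat + d - capex.get(year, 0.0) - d_nwc
    if year == 2022:
        cash += 0.10 * rev                     # recover all NWC at cost
        cash += 1.8 * (1 - TAX)                # after-tax salvage value
    fcf[year] = cash
    prev_rev = rev

npv = sum(c / 1.10 ** (y - 2016) for y, c in fcf.items())
print({y: round(c, 2) for y, c in fcf.items()}, round(npv, 2))
```

Under these timing assumptions the NPV at the 10% hurdle rate is close to zero, which is precisely why Prescott's unease about the outdated discount rate matters: a modestly lower rate could flip the decision.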
EXHIBIT 18.1 | Cost-of-Capital Information
Source: Datastream
CASE 19 Fonderia del Piemonte S.p.A.
In November 2015, Martina Bellucci, managing director of Fonderia del Piemonte
S.p.A., was considering the purchase of a Thor MM-9 automated molding machine. This
machine would prepare the sand molds into which molten iron was poured to obtain
iron castings. The Thor MM-9 would replace an older machine and would offer
improvements in quality and some additional capacity for expansion. Similar molding-machine
proposals had been rejected by the board of directors for economic reasons on
three previous occasions, most recently in 2014. This time, given the size of the
proposed expenditure of nearly EUR2 million, Bellucci was seeking a careful estimate
of the project’s costs and benefits and, ultimately, a recommendation of whether to
proceed with the investment.
The Company
Fonderia del Piemonte specialized in the production of precision metal castings for use
in automotive, aerospace, and construction equipment. The company had acquired a
reputation for quality products, particularly for safety parts (i.e., parts whose failure
would result in loss of control for the operator). Its products included crankshafts,
transmissions, brake calipers, axles, wheels, and various steering-assembly parts.
Customers were original-equipment manufacturers (OEMs), mainly in Europe. OEMs
were becoming increasingly insistent about product quality, and Fonderia del
Piemonte’s response had reduced the rejection rate of its castings by the OEMs to 70
parts per million.
This record had won the company coveted quality awards from BMW, Ferrari, and
Peugeot, and had resulted in strategic alliances with those firms: Fonderia del Piemonte
and the OEMs exchanged technical personnel and design tasks; in addition, the OEMs
shared confidential market-demand information with Fonderia del Piemonte, which
increased the precision of the latter’s production scheduling. In certain instances, the
OEMs had provided cheap loans to Fonderia del Piemonte to support capital
expansion. Finally, the company received relatively long-term supply contracts
from the OEMs and had a preferential position for bidding on new contracts.
Fonderia del Piemonte, located in Turin, Italy, had been founded in 1912 by
Bellucci’s great-great-grandfather, Benito Bellucci, a naval engineer, to produce
castings for the armaments industry. In the 1920s and 1930s, the company expanded its
customer base into the automotive industry. Although the company barely avoided
financial collapse in the late 1940s, Benito Bellucci predicted a postwar demand for
precision metal casting and positioned the company to meet it. From that time, Fonderia
del Piemonte grew slowly but steadily; its sales for calendar-year 2015 were expected
to be EUR1.3 billion. It was listed for trading on the Milan stock exchange in 1991, but
the Bellucci family owned 55% of the common shares of stock outstanding. The
company’s beta was estimated at 1.25.
The company’s traditional hurdle rate of return on capital deployed was 7%,
although this rate had not been reviewed since 2012. In addition, company policy sought
payback of an entire investment within five years. At the time of the case, the market
value of the company’s capital was 33% debt and 67% equity. The prevailing
borrowing rate Fonderia del Piemonte faced on its loans was 2.6%. The company’s
effective tax rate was about 43%, which reflected the combination of national and local
corporate income-tax rates.
Bellucci, age 57, had assumed executive responsibility for the company 15 years
earlier, upon the death of her father. She held a doctorate in metallurgy and was the
matriarch of an extended family. Only a son and a niece worked at Fonderia del
Piemonte, however. Over the years, the Bellucci family had sought to earn a rate of
return on its equity investment of 12%—this goal had been established by Benito
Bellucci and had never once been questioned by management.
The Thor MM-9 Machine
Sand molds used to make castings were currently prepared in a semiautomated process
at Fonderia del Piemonte. Workers stamped impressions in a mixture of sand and
adhesive under heat and high pressure. The process was relatively labor intensive,
required training and retraining to obtain consistency in mold quality, and demanded
some heavy lifting from workers. Indeed, medical claims for back injuries in the
molding shop had doubled since 2012 as the mix of Fonderia del Piemonte’s casting
products shifted toward heavy items. Items averaged 25 kg in 2015.
The new molding machine would replace six semiautomated stamping machines that
together had originally cost EUR423,000. Cumulative depreciation of EUR169,200 had
already been charged against the original cost and six years of depreciation charges
remained over the total useful life of 10 years. Fonderia del Piemonte’s management
believed that those semiautomated machines would need to be replaced after six years.
Bellucci had recently received an offer of EUR130,000 for the six machines.
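If the old machines are sold, their remaining book value interacts with the offer price. The following is one common textbook treatment (an assumption here, not something the case states): the loss on sale shields income at the 43% effective tax rate mentioned earlier.

```python
# After-tax proceeds from selling the six semiautomated machines,
# assuming the loss against book value is deductible at the 43% rate.
original_cost = 423_000
accumulated_dep = 169_200
book_value = original_cost - accumulated_dep     # EUR253,800 remaining
sale_price = 130_000
tax_rate = 0.43

loss_on_sale = book_value - sale_price           # EUR123,800 book loss
after_tax_proceeds = sale_price + tax_rate * loss_on_sale
print(f"EUR{after_tax_proceeds:,.0f}")
```

On this treatment the effective cash recovered exceeds the EUR130,000 offer, because the book loss reduces taxes elsewhere in the company.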
The current six machines required 12 workers per shift (24 in total) at
EUR14.66 per worker per hour, plus the equivalent of two maintenance
workers, each of whom was paid EUR15.70 an hour, plus maintenance supplies of
EUR6,000 a year. Bellucci assumed that the semiautomated machines, if kept, would
continue to consume electrical power at the rate of EUR15,300 a year.
The Thor MM-9 molding machine was produced by an American company in
Allentown, Pennsylvania. Fonderia del Piemonte had received a firm offering price of
USD1.9 million from the American firm. Since the prevailing exchange rate between the
euro and the U.S. dollar was 1.06 USD per euro, the price in euros was EUR1.8
million. The estimate for modifications to the plant, including wiring for the machine’s
power supply, was EUR100,000. Allowing for EUR50,000 for shipping, installation,
and testing, the total cost of the Thor MM-9 machine was expected to be EUR1.95
million, all of which would be capitalized and depreciated for tax purposes over eight
years. Bellucci assumed that, at a high and steady rate of machine utilization, the Thor
MM-9 would be worthless after the eighth year and need to be replaced.
The new machine would require two skilled operators (one per shift), each
receiving EUR22.72 an hour (including benefits), and contract maintenance of
EUR120,000 a year, and would incur power costs of EUR40,000 yearly. In addition, the
automatic machine was expected to save at least EUR30,000 yearly through improved
labor efficiency in other areas of the foundry.
With the current machines, more than 30% of the foundry’s floor space was needed
for the wide galleries the machines required; raw materials and in-process inventories
had to be staged near each machine in order to smooth the workflow. With the automated
machine, almost half of that space would be freed for other purposes—although at
present there was no need for new space.
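The staffing and cost figures above can be pulled into a side-by-side annual comparison. The case does not state annual operating hours, so the 2,000-hour figure below is an assumption (the "12 workers per shift (24 in total)" wording implies two shifts); the depreciation figures restate the case's own numbers.

```python
# Sketch of the annual operating-cost comparison between the six
# semiautomated machines and the Thor MM-9, using case figures (EUR).
HOURS_PER_YEAR = 2_000   # assumed hours per worker per year (not in case)

old_cost = (24 * 14.66 * HOURS_PER_YEAR        # 12 operators on each of 2 shifts
            + 2 * 15.70 * HOURS_PER_YEAR       # two maintenance workers
            + 6_000                            # maintenance supplies
            + 15_300)                          # electrical power

new_cost = (2 * 22.72 * HOURS_PER_YEAR         # one skilled operator per shift
            + 120_000                          # contract maintenance
            + 40_000                           # power
            - 30_000)                          # labor-efficiency savings elsewhere

# Depreciation also changes: the EUR1.95M Thor MM-9 is written off over
# eight years, versus EUR253,800 of old book value over six years.
new_dep = 1_950_000 / 8             # EUR243,750 per year
old_dep = (423_000 - 169_200) / 6   # EUR42,300 per year

print(f"old operating cost:  EUR{old_cost:,.0f}")
print(f"new operating cost:  EUR{new_cost:,.0f}")
print(f"pre-tax annual saving: EUR{old_cost - new_cost:,.0f}")
```

Note that the labor saving materializes only if the 24 operators can actually be laid off or redeployed, which, as the case stresses, depends on the union negotiations.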
Certain aspects of the Thor MM-9 purchase decision were difficult to quantify.
First, Bellucci was unsure whether the tough collective-bargaining agreement her
company had with the employees’ union would allow her to lay off the 24 operators of
the semiautomated machines. Reassigning the workers to other jobs might be easier, but
the only positions needing to be filled were unskilled jobs, which paid EUR9.13 an
hour. The extent of any labor savings would depend on negotiations with the union.
Second, Bellucci believed that the Thor MM-9 would result in even higher levels of
product quality and lower scrap rates than the company was now boasting. In light of the
ever-increasing competition, this outcome might prove to be of enormous, but currently
unquantifiable, competitive importance. Finally, the Thor MM-9 had a theoretical
maximum capacity that was 30% higher than that of the six semiautomated machines; but
those machines were operating at only 90% of capacity, and Bellucci was unsure when
added capacity would be needed. There was plenty of uncertainty about the economic
outlook in Europe, and the latest economic news suggested that the economies of Europe
might be headed for a slowdown.
CASE 20 Victoria Chemicals plc (A): The Merseyside Project
Late one afternoon in January 2008, Frank Greystock told Lucy Morris, “No one seems
satisfied with the analysis so far, but the suggested changes could kill the project. If
solid projects like this can’t swim past the corporate piranhas, the company will never
modernize.”
Morris was plant manager of Victoria Chemicals’ Merseyside Works in Liverpool,
England. Her controller, Frank Greystock, was discussing a capital project that Morris
wanted to propose to senior management. The project consisted of a GBP12 million
expenditure to renovate and rationalize the polypropylene production line at the
Merseyside plant in order to make up for deferred maintenance and to exploit
opportunities to achieve increased production efficiency.
Victoria Chemicals was under pressure from investors to improve its financial
performance because of the accumulation of the firm’s common shares by a well-known
corporate raider, Sir David Benjamin. Earnings had fallen to 180 pence per share at the
end of 2007 from around 250 pence per share at the end of 2006. Morris thus believed
that the time was ripe to obtain funding from corporate headquarters for a modernization
program for the Merseyside Works—at least she had believed this until Greystock
presented her with several questions that had only recently surfaced.
Victoria Chemicals and Polypropylene
Victoria Chemicals, a major competitor in the worldwide chemicals industry, was
a leading producer of polypropylene, a polymer used in an extremely wide variety
of products (ranging from medical products to packaging film, carpet fibers, and
automobile components) and known for its strength and malleability.
Polypropylene was essentially priced as a commodity.
The production of polypropylene pellets at Merseyside Works began with
propylene, a refined gas received in tank cars. Propylene was purchased from four
refineries in England that produced it in the course of refining crude oil into gasoline. In
the first stage of the production process, polymerization, the propylene gas was
combined with a diluent (or solvent) in a large pressure vessel. In a catalytic reaction,
the polypropylene precipitated to the bottom of the tank and was then concentrated in a
centrifuge.
The second stage of the production process compounded the basic polypropylene
with stabilizers, modifiers, fillers, and pigments to achieve the desired attributes for a
particular customer. The finished plastic was extruded into pellets for shipment to the
customer.
The Merseyside Works production process was old, semicontinuous at best, and,
therefore, higher in labor content than its competitors’ newer plants. The Merseyside
Works plant was constructed in 1967.
Victoria Chemicals produced polypropylene at Merseyside Works and in Rotterdam,
Holland. The two plants were of identical scale, age, and design. The managers of both
plants reported to James Fawn, executive vice president and manager of the
Intermediate Chemicals Group (ICG) of Victoria Chemicals. The company positioned
itself as a supplier to customers in Europe and the Middle East. The strategic-analysis
staff estimated that, in addition to numerous small producers, seven major competitors
manufactured polypropylene in Victoria Chemicals’ market region. Their plants
operated at various cost levels. Exhibit 20.1 presents a comparison of plant sizes and
indexed costs.
EXHIBIT 20.1 | Comparative Information on the Seven Largest Polypropylene Plants in Europe
The Proposed Capital Program
Morris had assumed responsibility for the Merseyside Works only 12 months
previously, following a rapid rise from the entry position of shift engineer nine years
before. When she assumed responsibility, she undertook a detailed review of the
operations and discovered significant opportunities for improvement in polypropylene
production. Some of those opportunities stemmed from the deferral of maintenance over
the preceding five years. In an effort to enhance the operating results of Merseyside
Works, the previous manager had limited capital expenditures to only the most essential.
Now what previously had been routine and deferrable was becoming essential. Other
opportunities stemmed from correcting the antiquated plant design in ways that would
save energy and improve the process flow: (1) relocating and modernizing tank-car
unloading areas, which would enable the process flow to be streamlined; (2)
refurbishing the polymerization tank to achieve higher pressures and thus greater
throughput; and (3) renovating the compounding plant to increase extrusion throughput
and obtain energy savings.
Morris proposed an expenditure of GBP12 million on this program. The entire
polymerization line would need to be shut down for 45 days, however, and because the
Rotterdam plant was operating near capacity, Merseyside Works’ customers would buy
from competitors. Greystock believed the loss of customers would not be permanent.
The benefits would be a lower energy requirement as well as a 7% greater
manufacturing throughput. In addition, the project was expected to improve
gross margin (before depreciation and energy savings) from 11.5% to 12.5%. The
engineering group at Merseyside Works was highly confident that the efficiencies would
be realized.
Merseyside Works currently produced 250,000 metric tons of polypropylene pellets
a year. Currently, the price of polypropylene averaged GBP675 per ton for Victoria
Chemicals’ product mix. The tax rate required in capital-expenditure analyses was 30%.
Greystock discovered that any plant facilities to be replaced had been completely
depreciated. New assets could be depreciated on an accelerated basis over 15 years,
the expected life of the assets. The increased throughput would necessitate an increase
of work-in-process inventory equal in value to 3.0% of cost of goods. Greystock
included in the first year of his forecast preliminary engineering costs of GBP500,000
spent over the preceding nine months on efficiency and design studies of the renovation.
Finally, the corporate manual stipulated that overhead costs be reflected in project
analyses at the rate of 3.5% times the book value of assets acquired in the project per
Greystock had produced the discounted-cash-flow (DCF) summary given in
Exhibit 20.2. It suggested that the capital program would easily hurdle Victoria
Chemicals’ required return of 10% for engineering projects.
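A quick back-of-envelope check shows the annual gross-margin gain on which Greystock's DCF rests: 7% more throughput and a one-point margin improvement at the case's GBP675-per-ton price. This is only the headline benefit; a full DCF would layer on depreciation, the 3.5% overhead charge, the 3% WIP inventory requirement, and the 30% tax rate, as Exhibit 20.2 does.

```python
# Incremental annual gross profit from the renovation, before
# depreciation, overhead, working capital, and tax (GBP).
tons, price = 250_000, 675
old_gross = tons * price * 0.115          # current 11.5% gross margin
new_gross = tons * 1.07 * price * 0.125   # 7% more volume at 12.5% margin
print(f"incremental gross profit: GBP{new_gross - old_gross:,.0f}")
```

The roughly GBP3 million pre-tax annual gain explains why the project clears the 10% hurdle so easily in Greystock's analysis, and why the fights over tank cars, cannibalization, and the EPC rider are worth having.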
EXHIBIT 20.2 | Greystock’s DCF Analysis of the Merseyside Project (financial values in millions of GBP)
Concerns of the Transport Division
Victoria Chemicals owned the tank cars with which Merseyside Works received
propylene gas from four petroleum refineries in England. The Transport Division, a cost
center, oversaw the movement of all raw, intermediate, and finished materials
throughout the company and was responsible for managing the tank cars.
Because of the project’s increased throughput, the Transport Division would
have to increase its allocation of tank cars to Merseyside Works. Currently, the
Transport Division could make this allocation out of excess capacity, although doing so
would accelerate from 2012 to 2010 the need to purchase new rolling stock to support
the anticipated growth of the firm in other areas. The purchase was estimated to be
GBP2 million in 2010. The rolling stock would have a depreciable life of 10 years, but
with proper maintenance, the cars could operate much longer. The rolling stock could
not be used outside Britain because of differences in track gauge.
A memorandum from the controller of the Transport Division suggested that the cost
of the tank cars should be included in the initial outlay of Merseyside Works’ capital
program. But Greystock disagreed. He told Morris:
The Transport Division isn’t paying one pence of actual cash because of what
we’re doing at Merseyside. In fact, we’re doing the company a favor in using its
excess capacity. Even if an allocation has to be made somewhere, it should go on
the Transport Division’s books. The way we’ve always evaluated projects in this
company has been with the philosophy of “every tub on its own bottom”—every
division has to fend for itself. The Transport Division isn’t part of our own
Intermediate Chemicals Group, so they should carry the allocation of rolling stock.
Accordingly, Greystock had not reflected any charge for the use of excess rolling stock
in his preliminary DCF analysis, given in Exhibit 20.2.
The Transport Division and Intermediate Chemicals Group reported to separate
executive vice presidents, who reported to the chairman and chief executive officer of
the company. The executive vice presidents received an annual incentive bonus pegged
to the performance of their divisions.
Concerns of the ICG Sales and Marketing Department
Greystock’s analysis had led to questions from the director of sales. In a recent meeting,
the director had told Greystock:
Your analysis assumes that we can sell the added output and thus obtain the full
efficiencies from the project, but as you know, the market for polypropylene is
extremely competitive. Right now, the industry is in a downturn and it looks like
an oversupply is in the works. This means that we will probably have to shift
capacity away from Rotterdam toward Merseyside in order to move the added
volume. Is this really a gain for Victoria Chemicals? Why spend money just so one
plant can cannibalize another?
The vice president of marketing was less skeptical. He said that with lower costs at
Merseyside Works, Victoria Chemicals might be able to take business from the plants of
competitors such as Saône-Poulet or Vaysol. In the current severe recession,
competitors would fight hard to keep customers, but sooner or later the market
would revive, and it would be reasonable to assume that any lost business volume
would return at that time.
Greystock had listened to both the director and the vice president and chose to
reflect no charge for a loss of business at Rotterdam in his preliminary analysis of the
Merseyside project. He told Morris:
Cannibalization really isn’t a cash flow; there is no check written in this instance.
Anyway, if the company starts burdening its cost-reduction projects with fictitious
charges like this, we’ll never maintain our cost competitiveness. A cannibalization
charge is rubbish!
Concerns of the Assistant Plant Manager
Griffin Tewitt, the assistant plant manager and Morris’s direct subordinate, proposed an
unusual modification to Greystock’s analysis during a late-afternoon meeting with
Greystock and Morris. Over the past few months, Tewitt had been absorbed with the
development of a proposal to modernize a separate and independent part of the
Merseyside Works, the production line for ethylene-propylene-copolymer rubber
(EPC). This product, a variety of synthetic rubber, had been pioneered by Victoria
Chemicals in the early 1960s and was sold in bulk to European tire manufacturers.
Despite hopes that this oxidation-resistant rubber would dominate the market in
synthetics, EPC remained a relatively small product in the European chemical industry.
Victoria Chemicals, the largest supplier of EPC, produced the entire volume at
Merseyside Works. EPC had been only marginally profitable to Victoria Chemicals
because of the entry by competitors and the development of competing synthetic-rubber
compounds over the past five years.
Tewitt had proposed a renovation of the EPC production line at a cost of GBP1
million. The renovation would give Victoria Chemicals the lowest EPC cost base in the
world and would improve cash flows by GBP25,000 ad infinitum. Even so, at current
prices and volumes, the net present value (NPV) of this project was −GBP750,000.
Tewitt and the EPC product manager had argued strenuously to the company’s executive
committee that the negative NPV ignored strategic advantages from the project and
increases in volume and prices when the recession ended. Nevertheless, the executive
committee had rejected the project, basing its rejection mainly on economic grounds.
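Tewitt's numbers can be verified directly: a perpetual cash-flow improvement is valued as the annual amount divided by the discount rate. Using the 10% engineering-project hurdle rate here is an assumption, but it reproduces the NPV stated in the case.

```python
# Check of the EPC renovation's NPV: GBP25,000 per year in perpetuity,
# discounted at the (assumed) 10% hurdle rate, against a GBP1M outlay.
outlay = 1_000_000
perpetuity_pv = 25_000 / 0.10     # GBP250,000 value of the improvement
npv_epc = -outlay + perpetuity_pv
print(f"EPC NPV: GBP{npv_epc:,.0f}")
```

The perpetuity is worth only a quarter of the outlay, which is why the executive committee's rejection "on economic grounds" is hard to argue with on the stated cash flows alone.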
In a hushed voice, Tewitt said to Morris and Greystock:
Why don’t you include the EPC project as part of the polypropylene line
renovations? The positive NPV of the poly renovations can easily sustain the
negative NPV of the EPC project. This is an extremely important project to the
company, a point that senior management doesn’t seem to get. If we invest now,
we’ll be ready to exploit the market when the recession ends. If we don’t invest
now, you can expect that we will have to exit the business altogether in three
years. Do you look forward to more layoffs? Do you want to manage a shrinking
plant? Recall that our annual bonuses are pegged to the size of this operation. Also
remember that, in the last 20 years, no one from corporate has monitored
renovation projects once the investment decision was made.
Concerns of the Treasury Staff
After a meeting on a different matter, Greystock described his dilemmas to Andrew
Gowan, who worked as an analyst on Victoria Chemicals’ treasury staff. Gowan
scanned Greystock’s analysis and pointed out:
Cash flows and discount rate need to be consistent in their assumptions about
inflation. The 10% hurdle rate you’re using is a nominal target rate of return. The
Treasury staff thinks this impounds a long-term inflation expectation of 3% per
year. Thus Victoria Chemicals' real (that is, zero inflation) target rate of return is about 7% per year.
The conversation was interrupted before Greystock could gain full understanding of
Gowan’s comment. For the time being, Greystock decided to continue to use a discount
rate of 10% because it was the figure promoted in the latest edition of Victoria
Chemicals’ capital-budgeting manual.
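Gowan's point is the Fisher relation: a nominal rate combines the real rate and expected inflation multiplicatively, so the real hurdle rate is not simply 10% minus 3%. A quick sketch:

```python
def real_rate(nominal: float, inflation: float) -> float:
    """Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1 + nominal) / (1 + inflation) - 1

# 10% nominal hurdle rate with a 3% long-term inflation expectation
print(f"Real target rate: {real_rate(0.10, 0.03):.2%}")  # about 6.8%, i.e., roughly 7% per year
```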
Evaluating Capital-Expenditure Proposals at Victoria Chemicals
In submitting a project for senior management’s approval, the project’s initiators had to
identify it as belonging to one of four possible categories: (1) new product or market,
(2) product or market extension, (3) engineering efficiency, or (4) safety or environment.
The first three categories of proposals were subject to a system of four performance
“hurdles,” of which at least three had to be met for the proposal to be considered. The
Merseyside project would be in the engineering-efficiency category.
1. Impact on earnings per share: For engineering-efficiency projects, the contribution to net income from contemplated projects had to be positive. This criterion was calculated as the average annual earnings per share (EPS) contribution of the project over its entire economic life, using the number of outstanding shares at the most recent fiscal year-end (FYE) as the basis for the calculation. (At FYE2007, Victoria Chemicals had 92,891,240 shares outstanding.)
2. Payback: This criterion was defined as the number of years necessary for free cash flow of the project to amortize the initial project outlay completely. For engineering-efficiency projects, the maximum payback period was six years.
3. Discounted cash flow: DCF was defined as the present value of future cash flows of the project (at the hurdle rate of 10% for engineering-efficiency proposals) less the initial investment outlay. This net present value of free cash flows had to be positive.
4. Internal rate of return: IRR was defined as the discount rate at which the present value of future free cash flows just equaled the initial outlay—in other words, the rate at which the NPV was zero. The IRR of engineering-efficiency projects had to be greater than 10%.
Morris wanted to review Greystock's analysis in detail and settle the questions surrounding the tank cars and the potential loss of business volume at Rotterdam. As Greystock's analysis now stood, the Merseyside project met all four investment criteria:
1. Average annual addition to EPS = GBP0.022
2. Payback period = 3.8 years
3. Net present value = GBP10.6 million
4. Internal rate of return = 24.3%
Morris was concerned that further tinkering might seriously weaken the attractiveness of the project.
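The four hurdles can be applied mechanically to any cash-flow forecast. The sketch below uses illustrative placeholder cash flows, not Greystock's actual projections; only the 10% hurdle rate, the six-year payback limit, and the FYE2007 share count come from the case:

```python
SHARES_OUTSTANDING = 92_891_240  # FYE2007 share count from the case

def npv(rate, cash_flows):
    """Present value of cash_flows (year 0 first) at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Discount rate at which NPV is zero, found by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cash_flows):
    """Years until cumulative free cash flow first covers the initial outlay."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

def avg_eps_contribution(annual_net_income):
    """Hurdle 1: average annual EPS contribution over the project's economic life."""
    return sum(annual_net_income) / len(annual_net_income) / SHARES_OUTSTANDING

# Illustrative project: GBP9 million outlay, then GBP3 million of free cash flow for 10 years
flows = [-9e6] + [3e6] * 10
assert payback_years(flows) <= 6   # hurdle 2: payback within six years
assert npv(0.10, flows) > 0        # hurdle 3: positive NPV at the 10% hurdle rate
assert irr(flows) > 0.10           # hurdle 4: IRR above 10%
```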
CASE 21 Victoria Chemicals PLC (B): The Merseyside and Rotterdam Projects
James Fawn, executive vice president of the Intermediate Chemicals Group (ICG) of
Victoria Chemicals, planned to meet with his financial analyst, John Camperdown, to
review two mutually exclusive capital-expenditure proposals. The firm’s capital budget
would be submitted for approval to the board of directors in early February 2008, and
any projects Fawn proposed for the ICG had to be forwarded to the CEO of Victoria
Chemicals soon for his review. Plant managers in Liverpool and Rotterdam had
independently submitted expenditure proposals, each of which would expand the
polypropylene output of their respective plants by 7%, or 17,500 tons per year. Victoria
Chemicals’ strategic-analysis staff argued strenuously that a company-wide increase in
polypropylene output of 35,000 tons made no sense but half that amount did. Thus Fawn
could not accept both projects; he could sponsor only one for approval by the board.
Corporate policy was to evaluate projects based on four criteria: (1) net present
value (NPV) computed at the appropriate cost of capital, (2) internal rate of return
(IRR), (3) payback, and (4) growth in earnings per share. In addition, the board of
directors was receptive to “strategic factors”—considerations that might be difficult to
quantify. The manager of the Rotterdam plant, Elizabeth Eustace, argued vociferously
that her project easily surpassed all the relevant quantitative standards and that it had
important strategic benefits. Indeed, Eustace had interjected those points in two
recent meetings with senior management and at a cocktail reception for the
board of directors. Fawn expected to review the proposal from Lucy Morris, manager
of Merseyside Works, the Liverpool plant, at the meeting with Camperdown, but he
suspected that neither proposal dominated the other on all four criteria. Fawn’s choice
would apparently not be straightforward.
The Proposal from Merseyside, Liverpool
The project for the Merseyside plant entailed enhancing the existing facilities and the
production process. Based on the type of project and the engineering studies, the
potential benefits of the project were quite certain. To date, Morris had limited her
discussions about the project to conversations with Fawn and Camperdown.
Camperdown had raised exploratory questions about the project and had presented
preliminary analyses to managers in marketing and transportation for their comments.
The revised analysis emerging from those discussions would be the focus of Fawn’s
discussion with Camperdown in the forthcoming meeting.
Camperdown had indicated that Morris’s final memo on the project was only three
pages long. Fawn wondered whether this memo would satisfy his remaining questions.
The Rotterdam Project
Elizabeth Eustace’s proposal consisted of a 90-page document replete with detailed
schematics, engineering comments, strategic analyses, and financial projections. The
basic discounted cash flow (DCF) analysis presented in Exhibit 21.1 showed that the project had an NPV of GBP15.5 million and an IRR of 18.0%. Accounting for a worst-case scenario, which assumed erosion of Merseyside's volume equal to the gain in Rotterdam's volume, the NPV was GBP12.45 million.
EXHIBIT 21.1 | Analysis of Rotterdam Project (financial values in GBP millions)
Source: Created by author.

In essence, Eustace's proposal called for the expenditure of GBP10.5 million over three years to convert the plant's polymerization line from batch to continuous-flow technology and to install sophisticated state-of-the-art process controls throughout the polymerization and compounding operations. The heart of the new system would be an analog computer driven by advanced software written by a team of engineering
professors at an institute in Japan. The three-year-old process-control technology had
been installed in several polypropylene production facilities in Japan, and although the
improvements in cost and output had been positive on average, the efficiency gains had
varied considerably across each of the production facilities. Other major producers
were known to be evaluating this system for use in their plants.
Eustace explained that installing the sophisticated new system would not be feasible
without also obtaining a continuous supply of propylene gas. She proposed obtaining
this gas by pipeline from a refinery five kilometers away (rather than by railroad tank
cars sourced from three refineries). Victoria Chemicals had an option to
purchase a pipeline and its right-of-way for GBP3.5 million, which Eustace
had included in her GBP10.5 million estimate for the project; then, for relatively little
cost, the pipeline could be extended to the Rotterdam plant and refinery at the other end.
The option had been purchased several years earlier. A consultant had informed Eustace
that to purchase a right-of-way at current prices and to lay a comparable pipeline would
cost approximately GBP6 million, a value the consultant believed was roughly equal to
what it could be sold for at auction in case the plan didn’t work out. The consultant also
forecasted that the value of the right-of-way would be GBP40 million in 15 years. This
option was set to expire in six months.
Some senior Victoria Chemicals executives firmly believed that if the Rotterdam
project were not undertaken, the option on the right-of-way should be allowed to expire
unexercised. The reasoning was summarized by Jeffrey Palliser, chairman of the
executive committee:
Our business is chemicals, not land speculation. Simply buying the right-of-way
with the intention of reselling it for a profit takes us beyond our expertise. Who
knows when we could sell it, and for how much? How distracting would this little
side venture be for Elizabeth Eustace?
Younger members of senior management were more willing to consider a potential
investment arbitrage on the right-of-way.
Eustace expected to realize the benefit of this investment (i.e., a 7% increase in
output) gradually over time, as the new technology was installed and shaken down and
as the learning-curve effects were realized. She advocated a phased-investment
program (as opposed to all at once) in order to minimize disruption to plant operations
and to allow the new technology to be calibrated and fine-tuned. Admittedly, there was
a chance that the technology would not work as well as hoped, but due to the complexity
of the technology and the extent to which it would permeate the plant, there would be no
going back once the decision had been made to install the new controls. Yet it was
possible that the technology could deliver more efficiencies than estimated in the cash
flows, if the controls reached the potential boasted by the Japanese engineering team.
Fawn recalled that the strategic factors to which Eustace referred had to do with the
obvious cost and output improvements expected from the new system, as well as from
the advantage of being the first major European producer to implement the new
technology. Being the first to implement the technology probably meant a head start in
moving down the learning curve toward reducing costs as the organization
became familiar with the technology. Eustace argued:
The Japanese, and now the Americans, exploit the learning-curve phenomenon
aggressively. Fortunately, they aren’t major players in European polypropylene, at
least for now. This is a once-in-a-generation opportunity for Victoria Chemicals to
leapfrog its competition through the exploitation of new technology.
In an oblique reference to the Merseyside proposal, Eustace went on to say:
There are two alternatives to implementation of the analog process-control
technology. One is a series of myopic enhancements to existing facilities, but this
is nothing more than sticking one’s head in the sand, for it leaves us at the mercy of
our competitors who are making choices for the long term. The other alternative is
to exit the polypropylene business, but this amounts to walking away from the
considerable know-how we’ve accumulated in this business and from what is
basically a valuable activity. Our commitment to analog controls makes it the right
choice at the right time.
Fawn wondered how to take the technology into account in making his decision.
Even if he recommended the Merseyside project over the Rotterdam project, it would
still be possible to add the new controls to Merseyside at some point in the future.
Practically speaking, Fawn believed the controls could be added in 2010, which would
allow sufficient time to complete all the proposed capital improvements before
embarking on the new undertaking. As with the Rotterdam project, it was expected that
the controls would raise Merseyside’s margin by 0.5% a year, to a maximum of 15%.
The controls would not result in an incremental volume gain, however, as Merseyside
would already be operating at its capacity of 267,500 tons. To obtain a supply of
propylene gas at Merseyside, it would be necessary to enter into a 15-year contract with
a local supplier. Although the contract would cost GBP0.4 million a year, it would
obviate the need to build the proposed pipeline for Rotterdam, resulting in an
investment at Merseyside of GBP7.0 million spread over three years.
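The supply contract can be compared with the pipeline purchase on a present-value basis. This is a sketch under an assumed 10% discount rate (the engineering-efficiency hurdle rate; the case does not prescribe a rate for this comparison):

```python
def annuity_pv(payment: float, rate: float, years: int) -> float:
    """Present value of a level payment received at the end of each year."""
    return payment * (1 - (1 + rate) ** -years) / rate

# GBP0.4 million per year for 15 years vs. the GBP3.5 million pipeline option price
contract_pv = annuity_pv(0.4e6, 0.10, 15)
print(f"Contract PV: GBP{contract_pv / 1e6:.2f} million")  # about GBP3.0 million
```

On these assumptions, the contract's present value lands in the same range as the GBP3.5 million pipeline option price, which is consistent with the lower GBP7.0 million Merseyside investment figure.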
Lucy Morris, the plant manager at Merseyside, told James Fawn that she preferred
to “wait and see” before entertaining a technology upgrade at her plant because there
was considerable uncertainty in her mind as to how valuable, if at all, the analog
technology would prove to be. Fawn agreed that the Japanese technology had not been
tested with much of the machinery that was currently being used at Rotterdam and
Merseyside. Moreover, he knew that reported efficiency gains had varied substantially
across the early adopters.
Fawn wanted to give this choice careful thought because the plant managers at
Merseyside and Rotterdam seemed to have so much invested in their own proposals. He
wished that the capital-budgeting criteria would give a straightforward indication of the
relative attractiveness of the two mutually exclusive projects. He wondered by what
rational analytical process he could extricate himself from the ambiguities of the present
measures of investment attractiveness. Moreover, he wished he had a way to evaluate
the primary technological difference between the two proposals: (1) the Rotterdam
project, which firmly committed Victoria Chemicals to the new-process technology, or
(2) the Merseyside project, which retained the flexibility to add the technology in the future.
CASE 22 The Procter & Gamble Company: Investment in Crest Whitestrips Advanced Seal
It was May 2008, and Jackson Christopher, a financial analyst for the Procter & Gamble
Company’s (P&G) North America Oral Care (NAOC) group, hustled along a sunny
downtown Cincinnati street on his way to work. NAOC’s Crest teeth whitening group
was considering the launch of an extension to its Whitestrips product, and the project
had dominated most of his working hours. At least he avoided a long commute by living downtown.
The week before, the group had met to consider the merits of the proposed product,
known as Crest Advanced Seal. Although openly intrigued by the concept, Angela
Roman, the group’s general manager (GM), was reserving judgment until she had a
clearer picture of the idea and risks. She had tasked Christopher with putting together
the economic perspective on Advanced Seal, an effort that had required a lot of work
amalgamating all the different considerations and thinking through the financial
implications. In the process, he had had to manage a lot of different constituencies. In
short, it had been an interesting week, and with the follow-up meeting the next day,
Christopher knew he needed to present some conclusions.
The Procter & Gamble Company
P&G was one of the world’s premier consumer goods companies. Its 2007 total revenue
exceeded $72 billion and came from almost every corner of the globe. P&G’s wide
range of brands focused on beauty, grooming, and household care and delivered a broad
array of products from fragrances to batteries and medication to toothpaste
(Exhibit 22.1).
P&G was an aggressive competitor in its market, seeking to deliver total
shareholder returns in the top one-third of its peer group (Exhibit 22.2). Management
achieved these returns by following a strategy to reach more consumers (by extending
category portfolios vertically into higher and lower value tiers) in more parts of the
world (by expanding geographically into category whitespaces) more completely (by
EXHIBIT 22.1 | Procter & Gamble Brands
improving existing products and extending portfolios into adjacent categories).
NAOC’s portfolio consisted of seven different product lines: toothpaste, manual
toothbrushes, power toothbrushes, oral rinses, dental floss, denture adhesives and
cleansers, and teeth whitening strips. Leveraging the collective benefit of multiple
products enabled P&G to focus on more complete oral health solutions for consumers.
NAOC followed the corporate strategy by, among other things, expanding the global
toothpaste presence under the Oral-B brand and to multiple adjacencies under the 3D
White brand. At the heart of the portfolio, representing more than $5 billion in annual
sales, was the Crest brand.
Crest Whitestrips and the Context for Advanced Seal
Crest Whitestrips, an at-home tooth enamel whitening treatment launched in 2001,
allowed consumers to achieve whitening results that rivaled far more expensive dental
office treatments. Existing whitening toothpastes had worked by polishing surface stains
from the tooth enamel, but they were unable to change the fundamental color of teeth.
EXHIBIT 22.2 | Value of $1 Invested in P&G Stock and the S&P 500 Index, 2001 to 2008
Data source: Yahoo! Finance.
Whitestrips worked through a strip applied temporarily to the teeth, binding the product
to surface enamel and actually whitening the layer of dentin beneath the enamel itself.
The intrinsic whitening results were unique to the category.
On its introduction, Crest Whitestrips saw nearly $300 million in annual sales but
virtually no growth in sales or profits after the first year (Exhibit 22.3). Multiple
attempts at line extensions had failed to significantly improve results, only managing to
breed skepticism in major customers. Competitors that entered the category either left
shortly thereafter or encountered the same stagnant sales as had P&G. (Exhibit 22.4
documents the category history.)
EXHIBIT 22.3 | Crest Whitestrips' Revenue and After-Tax Profit Since 2001 Launch (in millions of dollars)
Note: Data is disguised.
EXHIBIT 22.4 | Whitening Category History
The commercial team believed that, to turn around the business’s lackluster
performance and win back trust and merchandising support, something fundamental had
to change. Advanced Seal, the extension under consideration, was based on a new
technology that prevented the strips from slipping out of position during use. Because
the new product bound with teeth more reliably, the active ingredient was delivered
more effectively, improving both the usage experience and the whitening results, which
were superior to any existing product on the market. Exhibit 22.5 provides the proposed
packaging for the product.
EXHIBIT 22.5 | Crest Whitestrips' Advanced Seal Packaging
Image source: Procter & Gamble Company. Used with permission.
With an extremely strong market share position (Figure 22.1), the Whitestrips team
had to manage any new launch carefully; future success had to be as much a function of
P&L accretion as of increasing competitive share. The business rarely saw
household penetration figures any higher than 3%, so there were plenty of new
consumers to target.
FIGURE 22.1 | Market share of the teeth whitening category, 2008.
Image source: Procter & Gamble Company. Used with permission.
Last Week’s Meeting
The previous week, NAOC members had gathered in a conference room to consider the
proposed launch of Advanced Seal. As the meeting had progressed, the group had
strained to gauge the GM’s reaction to the concept.
“I follow you so far,” said Roman. “I have questions, but I don’t want to derail you,
Christina. Keep going.”
Even among other brand managers, Christina Whitman was known for her energy
and enthusiasm, which was saying something.
“Consumer research has been clear,” Whitman asserted briskly. “The tendency of
Whitestrips to slip off teeth is the number one barrier to repeat purchase and word-of-mouth recommendation. Advanced Seal's new technology will address this concern,
providing a real jolt of energy to the whitening category and a strong sales lift in the
process."

"We see pricing this innovation at the high end of our range, which should drive up
trade in our portfolio and improve market share. The product improvement gives us
creative advertising and positioning opportunities to leverage as well. We definitely
think we should move forward.”
Roman sat back in her chair and exhaled thoughtfully. "What's the downside scenario here, everyone?"
Hector Toro, the account executive, cleared his throat. “I’m worried about whether
we can count on getting the merchandising support we’ll need to get this off to a good
start. For the product to catch on, we’ll need to get out of the gates fast, and a
lot of retailers are still frustrated about the mediocre velocity of our last
line extension. If they don't get behind this, it won't be successful no matter what we do."
Whitman agreed immediately. “To show them we’re committed to pulling consumers
to the oral care aisle for this, we really need to adequately fund marketing. We also need
to allow for strong trade margins to get us display space and offset the high carrying
cost of this inventory. It's a much higher price point than buyers are used to carrying in this category."
Jackson Christopher, the data from hours of study floating in his head, saw an opportunity to bring up some of his concerns. "That may not be as straightforward as it
sounds. Pricing this at a premium is one thing, but can we price it high enough to cover
the costs of the improvements?”
This was the first Roman had heard of this potential issue. “Say more about that. I
agree with Christina in principle, but what are the preliminary economics we’re looking
at here?”
“Oh, we’ll be able to price this up, for sure,” he replied. “We could charge a 25%
premium without having a precipitous drop in volume. The problem is that this product
improvement will drive up our costs by almost 75%. That could easily dilute our
margins. We could end up making less gross profit on this product than on our current
Premium product line. If we’re not careful, the more this product takes off, the worse off
we’ll be.”
“But even so,” Whitman interjected, “we’re confident that we’ll pick up so much
incremental volume that we’ll be net better off anyway.” Whitman knew Christopher’s
concerns were valid but didn’t want them to kill the idea prematurely.
"What do you think, Margaret?" asked Roman, turning to Margaret Tan, a market researcher.
“I think the real answer is probably somewhere in the middle,” Tan replied. “I don’t
think we’ll be able to price this high enough to offset the costs, but we probably will
pick up a lot of new volume. Whether we’ll be net better off depends on bringing in
enough new users to the category to offset profit dilution from the cost structure.”
Everyone was silent as Roman took a few moments to think it over. “Alright then,”
she said. “I’m OK to proceed at this point. I like the idea. We need to be looking for
ways to delight our consumers. This product improvement really is huge for this
consumer; we know that she’s been complaining about Whitestrips slipping off her teeth
for quite some time. But we need to find ways to meet her needs while preserving our
core structural economics.
“If I’m following your logic, Christina, you’re saying we’ll sell enough incremental
units to end up net better off, even with the margin dilution. That can happen sometimes,
but I’ve been doing this long enough to know that’s a risky strategy. That said, we need a
jolt to drive top-line sales on this category. I may be willing to take that risk, but there
must be enough of a top-line opportunity to make it interesting.”
She turned to Christopher. “I’m going to need you to set our baseline here.
There are a lot of moving pieces, and I need you to paint the picture on how
this comes together. Does this pay out for our business? Are we financially better off
launching this product or not, what are the risks, what do we need to be thinking about
as we design this? Work with marketing, sales, manufacturing, and market research to
pull together the overall picture in the next week or so. We’ll get back together and
decide where to go from here.”
Christopher agreed, and the meeting wrapped up.
Establishing a Base Case
Christopher’s initial analysis established the expected price point for retailers at $22
per unit for Advanced Seal, compared to $18 and $13 per unit for P&G’s Premium and
Basic offering, respectively. Christopher had worked with his supply chain leaders to
estimate the cost structure. The new technology would cost $5 per unit more than the current Premium product offering, such that the gross profit for Advanced Seal would be lower than for Premium. Exhibit 22.6 provides the summary assessments
that had coalesced regarding the unit price and cost for the Crest Whitestrips products.
The forecasting models suggested a base case annual forecast of 2 million units for
Advanced Seal. The analysis also suggested that cannibalization of existing Crest
Whitestrips products would be high, on the order of 50% to 60% for Premium units and
15% for Basic units. Such cannibalization rates meant that 65% to 75% of Advanced Seal's 2 million expected units would come straight out of existing P&G sales.
Preliminary discussions around advertising spending indicated an expected launch
budget of $6 million per year. He estimated that the cannibalized Premium and Basic
products already received $4 million per year in advertising support that would no
longer be required after the launch. This meant the group would have to spend an
incremental $2 million in advertising to support the launch. He also needed to include $1 million per year for incremental selling, general, and administrative expenses.

EXHIBIT 22.6 | Gross Profit Comparison
Note: Data is disguised.
Based on the amount of time R&D felt it would take a competitor to match the
product innovation, Christopher expected a project life of four years, over which time
annual unit sales were expected to be relatively constant. For this type of decision, P&G
used an 8% discount rate and a 40% tax rate. Manufacturing partners expected to spend
$4 million in capital expenditures and incur $1.5 million in one-time development
expenses to get the project going. Regarding capital expenditure depreciation, he
conferred with an accounting team, which recommended the five-year accelerated
schedule for tax purposes and the straight-line schedule for reporting purposes.
Engineering indicated that the equipment likely would need to be replaced at the end of
the project life, and they did not expect it to have any residual value.
Christopher also knew that he had to factor in any incremental working capital
required to support the project. For the Whitestrips business, net working capital
turnover typically ran at a rate of between 8 and 10 times. The project would
require that at least this amount be on hand prior to the market launch date. It
was P&G’s policy to model the recovery of any working capital investment at the end of
the project life.
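The base-case assumptions can be pulled into a simple NPV model. The per-unit gross profits below are placeholders (Exhibit 22.6 is disguised), chosen so that Advanced Seal earns less gross profit than Premium; cannibalization uses the midpoints of the forecast ranges, working capital turnover is taken at 9x, and depreciation is simplified to straight-line over the four-year life rather than the five-year accelerated tax schedule the case calls for:

```python
TAX, RATE, LIFE = 0.40, 0.08, 4

# Per-unit figures; gross profits are placeholders since Exhibit 22.6 is disguised
price_as, price_prem, price_basic = 22.0, 18.0, 13.0
gp_as, gp_prem, gp_basic = 10.0, 11.0, 7.0

units_as = 2.0e6
cann_prem, cann_basic = 0.55, 0.15   # midpoint of 50%-60% for Premium; 15% for Basic
lost_prem = cann_prem * units_as
lost_basic = cann_basic * units_as

incr_revenue = units_as * price_as - lost_prem * price_prem - lost_basic * price_basic
incr_gross_profit = units_as * gp_as - lost_prem * gp_prem - lost_basic * gp_basic

incr_advertising = 2.0e6                # $6m launch budget less $4m freed from Premium/Basic
incr_sga = 1.0e6
capex = 4.0e6
dev_cost_after_tax = 1.5e6 * (1 - TAX)  # one-time development expense, expensed up front
working_capital = incr_revenue / 9      # turnover of 8-10x; 9x assumed here

depreciation = capex / LIFE  # simplified straight-line; the case uses 5-yr accelerated for tax
annual_fcf = (incr_gross_profit - incr_advertising - incr_sga
              - depreciation) * (1 - TAX) + depreciation

pv_operations = sum(annual_fcf / (1 + RATE) ** t for t in range(1, LIFE + 1))
pv_wc_recovery = working_capital / (1 + RATE) ** LIFE  # P&G policy: recover WC at end of life
npv = pv_operations + pv_wc_recovery - capex - dev_cost_after_tax - working_capital
print(f"Base-case NPV: ${npv / 1e6:.2f} million")  # roughly $1.4 million on these inputs
```

The result is positive here, but it moves materially with the placeholder gross-profit figures and the cannibalization rates, which is exactly the sensitivity Christopher needed to present.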
Proposal to Drive Revenue
Later that week, as Christopher rubbed his eyes to remove the imprint of a spreadsheet
from his vision, Whitman popped her head into his cube. “I came to see where the steam
was coming from. I guess from your ears.”
Christopher chuckled. “The math isn’t really complicated, but the results all depend
on what you assume. I just need to make sure I think through everything the right way.”
He was getting close to wrapping up his work, but he knew that when Whitman came by
unannounced and excited, it meant her creative wheels were turning and that she was
looking for more advertising dollars.
“I had some great buzz-creation ideas that I think we can use for the launch,” she
said, her voice lowering. “I’m envisioning some digital campaigns that I think could go
viral, and I’m also interested in expanding our initial media plan. We have such low
household penetration numbers that, if we drive a change in launch plans, we could
focus a great deal more on driving trial. According to Margaret, one problem with trial
is that we’re really at the high end of the price range. She thinks a small drop in price
could really accelerate sales.”
“That makes sense, assuming this consumer is as elastic as Margaret says. What
kind of numbers are we talking about?”
“I’m going to need my starting advertising budget to go from $6 million to
$7.5 million in Year 1. I can then go back to $6 million per year after that. Next, we
reduce price by $1 to $21 for Advanced Seal. Margaret thinks those two effects will drive annual unit sales up by 1.25 million, to 3.25 million units per year."
“Sounds impressive. Let me take a look, and I’ll let you know where we land.”
“Thanks! We all know that Roman is looking for bigger revenue dollars from
Whitestrips and my calculations suggest this will certainly deliver big revenue gains for
the group.”
Proposal to Minimize Cannibalization
The next day, Christopher thought he had figured out what he would recommend to
Roman, and he had a good risk profile built for the team to design and sell against. Just
as he was starting to relax, Tan entered his cube.
“This can’t be good,” Christopher said preemptively.
Tan sighed. “Yes and no. I’ve gone back and reworked the volume forecast for
Christina’s initiative. We have the potential for a more severe cannibalization problem
than we originally thought. It’s not certain, but there is greater likelihood that we end up
sourcing more of the incremental volume from our current Premium products.”
“How much of an increase are we talking about here?”
“I expect the price reduction and extra advertising to expand the range of
cannibalization rates on Premium to between 50% and 65%.”
"All right, that might not be so bad. I need to look at the financials to be sure."
"Well, in case it is, we've worked up an alternative strategy," Tan continued. "The
alternative is to pivot to a more conservative position, to minimize cannibalization by
reducing the launch advertising splash and focusing the marketing on untapped
customers. In doing so, we’ll have less of a broad appeal than we thought. More of a
niche. We’d be prioritizing cannibalization over trial. Our thought was to also offset the
gross profit differential by raising price to $23, giving Advanced Seal an $11 gross
profit. It’s clearly not what Christina was hoping for, but it’s a choice that we have.
Essentially, instead of dropping the price, raise it a little.”
Together, they agreed on the final assumptions. The advertising budget would be
reduced by $1 million each year, to $5 million. The sales model predicted that the effect
on Advanced Seal units would be strong, with unit sales declining to just 1 million per
year. The changes would also reduce the cannibalization rate for Premium to a more
certain rate of 45%.
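The three configurations can be compared on annual gross profit net of cannibalization. Because Advanced Seal's unit cost runs $5 above Premium's and the conservative option gives it an $11 gross profit at a $23 price, the implied gross profits are $10 at $22 and $9 at $21, with Premium at $11; the Basic figure below is an assumption, since Exhibit 22.6 is disguised. A sketch:

```python
GP_PREMIUM, GP_BASIC = 11.0, 7.0  # Premium implied by the $5 cost gap; Basic assumed
CANN_BASIC = 0.15                 # Basic cannibalization rate, held constant

def incremental_gross_profit(units, gp_advanced, cann_premium):
    """Annual gross profit added by Advanced Seal, net of cannibalized Premium/Basic sales."""
    return units * (gp_advanced - cann_premium * GP_PREMIUM - CANN_BASIC * GP_BASIC)

scenarios = {
    # name: (units per year, Advanced Seal gross profit per unit, Premium cannibalization)
    "base case ($22)":    (2.00e6, 10.0, 0.55),   # midpoint of 50%-60%
    "revenue push ($21)": (3.25e6,  9.0, 0.575),  # midpoint of 50%-65%
    "conservative ($23)": (1.00e6, 11.0, 0.45),
}
for name, (units, gp, cann) in scenarios.items():
    print(f"{name}: ${incremental_gross_profit(units, gp, cann) / 1e6:.2f} million per year")
```

On these figures the three options land surprisingly close together on gross profit, so the deciding differences are the advertising levels and how certain each cannibalization estimate is.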
The Recommendation
Christopher still needed to figure out how to convert all this data into a realistic P&L
for the initiative and find the baseline net present value. Beyond that, he needed to
determine what the team needed to do to mold this opportunity into a winning
proposition for P&G shareholders. He agreed with Whitman that this was an exciting
technology, but he had to make sure that any decision would give investors something to
smile about.
CASE 23 The Jacobs Division 2010
Richard Soderberg, financial analyst for the Jacobs Division of MacFadden Chemical
Company, was reviewing several complex issues related to possible investment in a
new product for the following year—2011. The product was a specialty coating
material that qualified for investment according to company guidelines. But Jacobs
Division Manager Mark Reynolds was fearful that it might be too risky. While regarding
the project as an attractive opportunity, Soderberg believed that the only practical way
to sell the product in the short run would place it in a weak competitive position over
the long run. He was also concerned that the estimates used in the probability analysis
were little better than educated guesses.
Company Background
MacFadden Chemical Company was one of the larger chemical firms in the world, with annual sales in excess of $10 billion. Its volume had grown steadily at the
rate of 10% per year throughout the 1980s until 1993; sales and earnings had grown
more rapidly. Beginning in 1993, the chemical industry began to experience
overcapacity, particularly in basic materials, which led to price cutting. Also, for firms
to remain competitive, more funds had to be spent in marketing and research. As a
consequence of the industry problems, MacFadden achieved only modest growth of 4%
in sales in the 1990s and experienced an overall decline in profits. Certain shortages
began developing in the economy in 2002, however, and by 2009, sales had risen 60%
and profits over 100%, as a result of price increases and near-capacity operations.
Most observers believed that the “shortage boom” would be only a short respite from
the intensely competitive conditions of the last decade.
Page 286
The 11 operating divisions of MacFadden were organized into three groups. Most
divisions had a number of products centered on one chemical, such as fluoride, sulfur,
or petroleum. The Jacobs Division was an exception.
It was the newest and—with sales of $100 million—the smallest division.
Its products were specialty industrial products with various chemical bases,
such as dyes, adhesives, and finishes, which were sold in relatively small lots to
diverse industrial customers. No single product had sales over $5 million, and many
had sales of only $500,000. There were 150 basic products in the division, each of
which had several minor variations. Jacobs was one of MacFadden’s more rapidly
growing divisions—12% per year prior to 2009—with a 13% return on total net assets.
Capital Budgeting for New Projects
Corporate-wide guidelines were used for analyzing new investment opportunities. In the
current environment, the long-term, risk-free rate was about 6%. At the firm level, the
return criteria were 8% for cost-reduction projects, 12% for expansion of facilities, and
16% for new products or processes. Returns were measured in terms of discounted cash
flows after taxes. Soderberg believed that these rates and methods were typical of those
used throughout the chemical industry.
Reynolds tended, however, to demand higher returns for projects in his division,
even though its earnings-growth stability in the past marked it as one of MacFadden’s
more reliable operations. Reynolds had three reasons for wanting better returns than
corporate required. First, one of the key variables used in appraising management
performance and compensation at MacFadden was the growth of residual income,
although such aspects as market share and profit margins were also considered.
Reynolds did not like the idea of investing in projects that were close to the target rate
of earnings embedded in the residual-income calculation.
Page 287
Second, many new projects had high start-up costs. Even though they might achieve
attractive returns over the long run, such projects hurt earnings performance in the short
run. “Don’t tell me what a project’s discount rate of return is. Tell me whether we’re
going to improve our return on total net assets within three years,” Reynolds would say.
Third, Reynolds was skeptical of estimates. “I don’t know what’s going to happen here
on this project, but I’ll bet we overstate returns by 2% to 5%, on average,” was a
typical comment. He therefore tended to look for at least 4% more than the company
standard before becoming enthusiastic about a project. “You’ve got to be hard-nosed
about taking risk,” he said. “By demanding a decent return for riskier opportunities, we
have a better chance to grow and prosper.”
Soderberg knew that Reynolds’s views were reflected in decisions throughout the
division. Projects that did not have promising returns, according to Reynolds’s
standards, were often dropped or shelved early in the decision process. Soderberg
guessed that, at Jacobs Division, almost as many projects with returns meeting the
company hurdle rates were abandoned as were ultimately approved. In fact, the projects
that were finally submitted to Reynolds were usually so promising that he rarely
rejected them. Capital projects from his division were accepted virtually unchanged,
unless top management happened to be unusually pessimistic about prospects for
business and financing in general.
The Silicone-X Project
A new product was often under study for several years after research had developed a
“test-tube” idea. The product had to be evaluated relative to market needs and
competition. The large number of possible applications of any product complicated this
analysis. At the same time, technological studies were undertaken to examine such
factors as material sources, plant location, manufacturing-process alternatives, and
economies of scale. While a myriad of feasible alternatives existed, only a few could be
actively explored, and they often required outlays of several hundred thousand dollars
before the potential of the project could be ascertained. “For every dollar of new
capital approved, I bet we spend $0.30 on the opportunities,” said Soderberg, “and that
doesn’t count the money we spend on research.”
The project that concerned Soderberg at the moment was called Silicone-X, a
special-purpose coating that added slipperiness to a surface. The coating could be used
on a variety of products to reduce friction, particularly where other lubricants might
imperfectly eliminate friction between moving parts. Its uniqueness lay in its hardness,
adhesiveness to the applied surface, and durability. The product was likely to have a
large number of buyers, but most of them could use only small quantities: Only a few
firms were likely to buy amounts greater than 5,000 pounds per year.
Test-tube batches of Silicone-X had been tested both inside and outside the Jacobs
Division. Comments were universally favorable, although $2.00 per pound seemed to
be the maximum price that would be acceptable. Lower prices were considered unlikely
to produce larger volume. For planning purposes, a price of $1.90 per pound had been selected.
Demand was difficult to estimate because of the variety of possible applications.
The division’s market research group had estimated a first-year demand of 1 to 2
million pounds, with 1.2 million pounds cited as most likely. Soderberg said:
They could spend another year studying it and be more confident, but we wouldn’t
find them more believable. The estimates are educated guesses by smart people.
But they are also pretty wild stabs in the dark. They won’t rule out the possibility
of demand as low as 500,000 pounds, and 2 million pounds is not the ceiling.
Soderberg empathized with the problem facing the market-research group. “They
Page 288
tried to do a systematic job of looking at the most probable applications, but the data
were not good.” The market researchers believed that, once the product became
established, average demand would probably grow at a healthy rate, perhaps 10% per
year. But the industries served were likely to be cyclical with volume requirements
swinging 20% depending on market conditions. The market researchers concluded, “We
think demand should level off after 8 to 10 years, but the odds are very much against
someone developing a cheaper or markedly superior substitute.”
On the other hand, there was no patent protection on Silicone-X, and the
technological know-how involved in the manufacturing process could be duplicated by
others in perhaps as little as 12 months. “This product is essentially a commodity, and
someone is certainly going to get interested in it when sales volume reaches $3
million,” said Soderberg.
The cost estimates looked solid. Soderberg continued, “Basic chemicals, of
course, fluctuate in purchase price, but we have a captive source with stable
manufacturing costs. We can probably negotiate a long-term transfer price with Wilson
[another MacFadden division], although this is not the time to do so.”
Project Analysis
In his preliminary analysis, Soderberg used a discount rate of 20% and a project life of
15 years, because most equipment for the project was likely to wear out and need
replacement during that time frame. He said:
We also work with most likely estimates. Until we get down to the bitter end, there
are too many alternatives to consider, and we can’t afford probabilistic measures
or fancy simulations. A conservative definition of most likely values is good
enough for most of the subsidiary analyses. We’ve probably made over 200
present value calculations using our computer programs just to get to this decision
point, and heaven knows how many quick-and-dirty paybacks.
We’ve made a raft of important decisions that affect the attractiveness of this
project. Some of them are bound to be wrong. I hope not critically so. In any case,
these decisions are behind us. They’re buried so deep in the assumptions, no one
can find them, and top management wouldn’t have time to look at them anyway.
With Silicone-X, Soderberg had narrowed the choice to two alternatives: a labor-intensive,
limited-capacity approach and a capital-intensive method. “The analyses all point in one direction,” he
said, “but I have the feeling it’s going to be the worst one for the long run.”
The labor-intensive method involved an initial plant and equipment outlay of
$900,000. It could produce 1.5 million pounds per year.
According to Soderberg:
Even if the project bombs out, we won’t lose much. The equipment is very
adaptable. We could find uses for about half of it. We could probably sell the
balance for $200,000, and let our tax write-offs cover most of the rest. We should
salvage the working-capital part without any trouble. The start-up costs and losses
are our real risks. We’ll spend $50,000 debugging the process, and we’ll be lucky
to satisfy half the possible demand. But I believe we can get this project on stream
in one year’s time.
Exhibit 23.1 shows Soderberg’s analysis of the labor-intensive alternative. His
calculations showed a small net present value when discounted at 20% and a sizable net
present value at 8%. When the positive present values were compared with the negative
present values, the project looked particularly attractive.
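Soderberg’s discounting arithmetic can be sketched with a simple present-value function. The cash-flow profile below is a hypothetical stand-in for the Exhibit 23.1 projections, which are not reproduced here; only the $900,000 plant outlay, $50,000 debugging cost, 15-year life, and the two discount rates come from the case.

```python
# Minimal NPV sketch for comparing discount rates, in the spirit of
# Soderberg's analysis. The annual cash flow of 210,000 is a hypothetical
# placeholder; Exhibit 23.1 holds the actual projections.

def npv(rate, cash_flows):
    """Discount year-end cash flows; cash_flows[0] is the time-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical labor-intensive profile: $950,000 outlay (plant plus
# debugging), then 15 years of level operating cash flow.
flows = [-950_000] + [210_000] * 15

print(f"NPV at 20%: {npv(0.20, flows):>12,.0f}")
print(f"NPV at  8%: {npv(0.08, flows):>12,.0f}")
```

With these placeholder flows the pattern matches the case: a small positive NPV at 20% and a much larger one at 8%.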
EXHIBIT 23.1 | Analysis of Labor-Intensive Alternative for Silicone-X (dollars in thousands, except
per-unit data)
The capital-intensive method involved a much larger outlay for plant and equipment:
$3.3 million. Manufacturing costs would, however, be reduced by $0.35 per unit and
fixed costs by $100,000, excluding depreciation. The capital-intensive plant was
designed to handle 2.0 million pounds, the lowest volume for which appropriate
equipment could be acquired. Since the equipment was more specialized, only
$400,000 of this machinery could be used in other company activities. The balance
probably had a salvage value of $800,000. It would take two years to get the plant on
line, and the first year’s operating volume was likely to be low—perhaps 700,000
pounds at the most. Debugging costs were estimated to be $100,000.
Source: All exhibits created by case writer.
Page 289
Exhibit 23.2 presents Soderberg’s analysis of the capital-intensive method.
At a 20% discount rate, the capital-intensive project had a large negative
present value and thus appeared much worse than the labor-intensive alternative. But at
an 8% discount rate, it looked significantly better than the labor-intensive alternative.
EXHIBIT 23.2 | Analysis of Capital-Intensive Alternative for Silicone-X (dollars in thousands, except
per-unit data)
Problems in the Analysis
Several things concerned Soderberg about the analysis. Reynolds would only look at the
total return. Thus, the capital-intensive project would not be acceptable. Yet, on the
basis of the breakeven analysis, the capital-intensive alternative seemed the safest way
to start. It needed sales of just 369,333 pounds to break even, while the labor-intensive
method required 540,000 pounds (Exhibit 23.3).
Soderberg was concerned that future competition might result in price-cutting. If the
price per pound fell by $0.20, the labor-intensive method would not break even unless
900,000 pounds were sold. Competitors could, once the market was established, build a
capital-intensive plant that would put them in a good position to cut prices by $0.20 or
more. In short, there was a risk, given the labor-intensive solution, that Silicone-X might
not remain competitive. The better the demand proved to be, the more serious this risk
would become. Of course, once the market was established, Jacobs could build a
capital-intensive facility, but almost none of the labor-intensive equipment would be
useful in such a new plant. The new plant would still cost $3.3 million, and Jacobs
would have to write off losses on the labor-intensive facility.
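The breakeven comparison follows directly from fixed costs and contribution margin. The per-pound cost figures below are inferred assumptions, chosen so that the calculation reproduces the 540,000- and 900,000-pound labor-intensive breakevens cited in the case; the actual cost structure is in Exhibit 23.3.

```python
# Breakeven volume = fixed costs / (price - variable cost per pound).
# The labor-intensive cost figures are inferred assumptions that reproduce
# the 540,000- and 900,000-pound breakevens cited in the case.

def breakeven_pounds(fixed_costs, price, variable_cost):
    return fixed_costs / (price - variable_cost)

price = 1.90            # planning price per pound
variable_cost = 1.40    # assumed variable cost per pound, labor-intensive
fixed_costs = 270_000   # assumed annual fixed costs, labor-intensive

base = breakeven_pounds(fixed_costs, price, variable_cost)
after_cut = breakeven_pounds(fixed_costs, price - 0.20, variable_cost)

print(f"Breakeven at $1.90: {base:,.0f} pounds")
print(f"Breakeven at $1.70: {after_cut:,.0f} pounds")
```

The same function applied with the capital-intensive figures from Exhibit 23.3 (lower variable cost, higher fixed costs including depreciation) yields the 369,333-pound breakeven.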
The labor-intensive facility would be difficult to expand economically. It would
cost $125,000 for each 200,000 pounds of additional capacity, and expansion was
practical only in 200,000-pound increments. In contrast, an additional 100,000
pounds of capacity in the capital-intensive unit could be added for $75,000.
EXHIBIT 23.3 | Breakeven Analysis for Silicone-X
Page 290
The need to expand, however, would depend on sales. If demand remained low, the
project would probably return a higher rate under the labor-intensive method. If demand
developed, the capital-intensive method would clearly be superior. This analysis led
Soderberg to believe that his breakeven calculations were somehow wrong.
Pricing strategy was another important element in the analysis. At $1.90 per pound,
Jacobs could be inviting competition. Competitors would be satisfied with a low rate of
return, perhaps 12%, in an established market. At a price lower than $1.90, Jacobs
might discourage competition. Even the labor-intensive alternative would not provide a
rate of return of 20% at any lower price. It began to appear to Soderberg that using a
high discount rate was forcing the company to make a riskier decision than would a
lower rate; it was also increasing the chance of realizing a lower rate of return than had
been forecast.
Soderberg was not sure how to incorporate pricing into his analysis. He knew he
could determine what level of demand would be necessary to encourage a competitor
(one expecting a 50% share and needing a 12% return on a capital-intensive
investment) to enter the market at a price of $1.70 or $1.90, but this analysis did not
seem conclusive.
Finally, Soderberg was concerned about the volatility of demand estimates on which
he had based the analysis. He reviewed some analysts’ reports and found some
information on firms that were in businesses similar to Silicone-X. Based on those
firms’ stock market returns he estimated that the volatility of returns for this line of
business was around 0.35.
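A volatility figure like Soderberg’s 0.35 corresponds to the annualized standard deviation of returns for the comparable firms. It might be estimated along the following lines; the monthly price series here is entirely hypothetical, used only to show the mechanics.

```python
# Annualized return volatility from a (hypothetical) monthly price series,
# as Soderberg might have estimated it from comparable firms' stock returns.
import math

prices = [40.0, 42.5, 39.8, 41.2, 44.0, 42.1, 45.3, 43.9,
          47.2, 45.0, 48.6, 46.8, 50.1]   # 13 month-end prices (hypothetical)

# Monthly log returns
rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

mean = sum(rets) / len(rets)
var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)  # sample variance
annualized_vol = math.sqrt(var) * math.sqrt(12)             # scale to one year

print(f"Annualized volatility: {annualized_vol:.2f}")
```

In practice the estimate would average across several comparable firms and a longer return history.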
Soderberg’s job was to analyze the alternatives fully and to recommend one
of them to Reynolds. On the simplest analysis, the labor-intensive approach
seemed best. Even at 20%, its present value was positive. That analysis, however, did
not take other factors into consideration.
Page 293
CASE 24 University of Virginia Health System: The Long-Term Acute Care Hospital Project
On the morning of March 2, 2006, Larry Fitzgerald knew he had to complete all the
last-minute details for the board meeting the following day. Fitzgerald, the vice president
for business development and finance for the University of Virginia Health System (U.Va.
Health System), was eager to see the board’s reaction to his proposal for a new
long-term acute care (LTAC) hospital. His excitement was somewhat tempered by the fact
that the board had rejected the LTAC hospital concept when Fitzgerald had first joined the U.Va.
Health System in 1999. Since that time, however, the regulations regarding LTAC
facilities had changed, which gave Fitzgerald reason to give the project another chance.
The bottom line was that Fitzgerald thought that a LTAC hospital would improve patient
care and, at the same time, bring more money into the U.Va. Health System.
As he looked at the memo on his desk from his analyst Karen Mulroney regarding
the LTAC facility, Fitzgerald began to consider what guidance he could give her that
would lead to the best possible proposal to present to the hospital’s board of directors.
The U.Va. Health System
The University of Virginia (U.Va.) opened its first hospital in 1901, with a tripartite
mission of service, education, and research. At its inception, the hospital had only 25
beds and 3 operating rooms, but by 2005, it had expanded to more than 570 beds and 24
operating rooms, with 28,000 admissions and 65,000 surgeries per year. This first
hospital was the only Level 1 trauma center in the area and provided care for
Charlottesville residents as well as patients from across the state of Virginia
and the Southeast.1
For each patient admitted, the hospital was reimbursed a predetermined amount by a
private or public insurance company. For an open-heart surgery, for example, the
hospital typically received $25,000 regardless of how many days a patient stayed in the
hospital or which medications or interventions the patient needed during that time. But
the cost to the hospital varied considerably based on length of stay and level of care
received, which gave the hospital the incentive to help the patient recover and be
discharged as quickly as possible.
Numerous studies showed that it was also in the patient’s best interest to have a
short stay in the hospital; longer stays put patients at risk for infections, morbidity, and
mortality because there were more infectious diseases in hospitals than in patients’
homes or other facilities. Lengthier hospital stays also compromised patient morale,
which, in turn, was counterproductive to healing.
Like many hospital systems, the U.Va. Health System faced capacity issues due to its
inadequate number of patient beds. The sooner it was able to discharge a patient, the sooner its staff
could start caring for another; therefore, efficient patient turnover was beneficial to both
patients and U.Va.
Before coming to the U.Va. Health System, Fitzgerald had been the CFO of
American Medical International, a hospital ownership company that later became
known as Tenet. His experience in the for-profit sector had convinced him that LTAC
facilities brought value to a hospital system. Even though the idea of LTAC hospitals
was relatively new in the nonprofit sector, Fitzgerald had pitched the idea for opening
one when he first arrived at the U.Va. Health System in 1999. At that time, however, the
regulatory system required a LTAC facility to be built within the original hospital
structure. The project was rejected by the board partly because of anticipated disputes
from medical service units within the hospital that would be asked to forfeit some beds
to make room for the LTAC hospital. But in 2006, Fitzgerald still saw the advantages of
Page 295
having an LTAC facility and was certain he could justify building one within the U.Va.
Health System.
Fitzgerald knew it was critical to gain approval for adding an LTAC facility at the
following day’s board meeting, because the Centers for Medicare & Medicaid Services
(CMS) had recently decided that, because LTAC hospitals were making so much money,
they were partly responsible for driving up health care costs. Reacting to this finding,
the CMS had decided to put a moratorium on the establishment of new LTAC facilities
beginning January 2007. For Fitzgerald, this meant that it was now or never to make his
case for establishing an LTAC as part of the U.Va. Health System.
The Advantages of LTAC Hospitals
LTAC hospitals were designed to serve patients who required hospital stays of 25
days or more and at least some acute care during that time. LTACs especially benefited
patients who had been diagnosed with infectious diseases, needed to be weaned off
ventilators, required pulmonary or wound care, or had critical-care issues. It
was often elderly patients who required these complex treatments, which were difficult
to perform in a normal hospital setting.
LTAC hospitals were financially attractive to medical centers, because having one
increased the amount of money available for patient care. Insurance companies
reimbursed hospitals a set amount for each patient in their facilities based on the
patient’s diagnosis, regardless of the time involved in the patient’s treatment and hospital
stay. Yet if the patient was transferred to a LTAC facility, the hospital could bill
insurance for the patient’s stay in the hospital as well as for time spent in the LTAC. The
LTAC facility also reduced patient care costs as the average daily hospital stay per
patient cost more than $3,000 compared to only $1,500 per day for an LTAC.
Another advantage of an LTAC facility was that it helped address the capacity issues
that the U.Va. Health System and most other hospital systems faced. The average patient
stay was five days in the hospital, compared with an average stay of 25 days in an LTAC
facility. By adding an LTAC facility, therefore, a hospital gained an additional 25 bed
days for each patient transferred to the LTAC hospital, and could take five more
admissions for each patient transferred to an LTAC facility.
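The capacity and cost arithmetic stated in the case can be sketched directly; all the inputs below are taken from the case text.

```python
# Bed-day arithmetic from the case: a patient transferred to the LTAC would
# otherwise occupy a hospital bed for 25 days; the average hospital stay is
# 5 days, so each transfer frees capacity for 5 new admissions.

ltac_stay_days = 25        # average stay of an LTAC-appropriate patient
average_hospital_stay = 5  # average stay of a regular hospital patient

bed_days_freed = ltac_stay_days
extra_admissions = bed_days_freed // average_hospital_stay

# Daily cost comparison given in the case
hospital_cost_per_day = 3_000
ltac_cost_per_day = 1_500
daily_savings = hospital_cost_per_day - ltac_cost_per_day

print(f"Extra admissions per transfer: {extra_admissions}")
print(f"Cost saving per patient day:  ${daily_savings:,}")
```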
A stay in an LTAC facility had a number of advantages from the patient’s perspective
as well. The typical hospital setting was loud, the food could quickly become boring,
and patients usually had to share rooms. Because the LTAC facility was essentially an
extended-stay hospital, each patient had a private room, and the extended stay also
helped a patient become more familiar with the caregivers. Fitzgerald remembered how,
at one LTAC facility he had helped set up, a patient who was an avid bird watcher
missed not seeing birds outside his window. To fix the problem, the staff climbed the
tree outside his room and set up a bird feeder to allow him to enjoy his favorite pastime.
This experience was not feasible within a regular hospital setting that often suffered
from overcrowding of patients, understaffing, and an impersonal atmosphere. By
contrast, patients were generally delighted with the atmosphere of an LTAC hospital
with its attractive facilities, single rooms, fewer beds, and general lack of
overcrowding. Higher patient morale meant a better rate of recovery and a lower rate of
infection than in a typical hospital.
The U.Va. Health System comprised a large primary care network, a large hospital
center, a community hospital in nearby Culpeper, a home health agency, a rehabilitation
hospital, several nursing homes, an imaging center, and a physical therapy network. The
LTAC facility would be another important part of the U.Va. Health System’s network of
care. Having all their medical care provided by U.Va. was advantageous for patients
Page 296
because it facilitated better communication between physicians through its electronic
medical-records system.
Capital Investments at U.Va.
The U.Va. Health System’s mission was to provide the highest quality health care
service to the surrounding community while reinvesting in teaching and research. Unlike
the for-profit hospitals that ultimately had to earn a return for shareholders, nonprofits
such as the U.Va. Health System had to strike a balance across their various objectives. A
typical for-profit hospital required a pretax profit margin of 15% to justify a capital
investment, whereas a nonprofit could require a lower margin and still meet its
objective of providing excellent clinical care.
During Fitzgerald’s tenure, the U.Va. Health System had maintained an average net
profit margin of 4.9%. The board of directors considered a margin of 3.0% to be the
minimum needed to sustain the system. In order to be able to grow and develop the
system, however, the board wanted a 5.0% profit margin as the minimum for new
projects. The board reinvested any profits beyond the 5.0% level in the School of
Medicine to support the U.Va. Health System’s teaching and research missions.
When an investment proposal was brought forward, the board generally considered
three distinct sources of funding: cash, debt, and leasing. When analyzing a project, a
primary consideration for the board was to maintain an AA bond rating for the hospital.
This was the highest rating a hospital could receive, given the business risk inherent in hospital operations.
Maintaining the credit rating kept borrowing costs low and allowed the hospital to
effectively compete for debt dollars in the future. On the other hand, the desire for an
AA rating limited the total amount of debt the hospital could carry. Based on discussions
with several banks about the LTAC project, Fitzgerald was confident that he could
obtain the $15 million loan needed and that the added debt on the balance sheet would
not jeopardize the U.Va. Health System’s AA bond rating.
LTAC Project Analysis
Larry Fitzgerald looked at the memo and financial projections from his analyst
(Exhibits 24.1 and 24.2) and realized that much work needed to be done before the
board meeting the next day. But before he began to prepare his answers for Mulroney, he
notified his assistant that she should expect a late addition to the paperwork for the
board by early the next morning.
EXHIBIT 24.1 | Memo from Karen Mulroney

EXHIBIT 24.2 | Karen Mulroney’s LTAC Hospital Financial Projections
Source: Created by case writer.
EXHIBIT 24.3 | Financial Data of For-Profit Health Care Companies
Data source: Value Line, December 2005.
EXHIBIT 24.4 | U.S. Treasury and Corporate Bond Yields for March 2, 2006
*Data source: (accessed March 2006).
**Data source: Bloomberg, “Fair Market Curve Analysis,” 10-Year Corporate Bonds, March 2, 2006.
Fitzgerald was pleased that Mulroney had gathered working capital data and
financial data from the for-profit hospital sector. But he was disappointed to see so
many omissions in her projections on the eve of the board meeting. Fitzgerald was
convinced that the LTAC facility would be profitable for the U.Va. Health System, but to
get board approval, he would need to present an analysis that justified such a large
undertaking. Because of the size and risk of the project, the LTAC hospital would need
to have a profit margin well above the 5.0% level, and if it was to be debt-financed, he
would need to show an adequate coverage of the interest expense. Finally, he would
have to be ready to defend each of the assumptions used to create the financial
projections, because financial acumen varied significantly across the members of the board.
Page 311
5 Management of the Firm’s Equity: Dividends
and Repurchases
Page 303
CASE 25 Star River Electronics Ltd.
On July 5, 2015, her first day as CEO of Star River Electronics Ltd., Adeline Koh
confronted a host of management problems. One week earlier, Star River’s president
and CEO had suddenly resigned to accept a CEO position with another firm. Koh had
been appointed to fill the position—starting immediately. Several items in her in-box
that first day were financial in nature, either requiring a financial decision or with
outcomes that would have major financial implications for the firm. That evening, Koh
asked to meet with her assistant, Andy Chin, to begin addressing the most prominent issues.
Star River Electronics and the Optical-Disc-
Manufacturing Industry
Star River Electronics had been founded as a joint venture between Starlight
Electronics Ltd., United Kingdom, and an Asian venture-capital firm, New Era Partners.
Based in Singapore, Star River had a single business mission: to manufacture
high-quality optical discs as a supplier to movie studios and video game producers.
When originally founded, Star River gained recognition for its production of
compact discs (CDs), which were primarily used in the music recording industry and as
data storage for personal computers. As technological advances in disc storage and the
movie and video game markets began to grow, Star River switched most of its
production capacity to manufacturing DVD and Blu-ray discs and became one of the
leading suppliers in the optical-disc-manufacturing industry.
Storage media had proven to be a challenging industry for manufacturers. The
advent of the CD was the beginning of the optical storage media industry, which used
Page 304
laser light to read data, rather than reading data from an electromagnetic tape, such as a
cassette tape. In the mid-1990s the CD replaced cassette tapes and became the standard
media for music. CDs were also widely used for data storage in personal computers.
What followed was rapid growth in the demand for and production of CDs, which led to
dramatic cost savings for users and tightening margins for manufacturers.
Manufacturers struggled to keep pace with changing formats and quality
enhancements that required substantial capital investments. As prices fell, many of the
smaller producers failed or were acquired by larger, more cost-efficient competitors.
While CDs continued to be used by the music industry, the movie and video game
industry required a much higher data density, which resulted in the development of the
DVD (digital versatile disc). A DVD held 4.7 gigabytes (GB) of data compared to a CD
with a capacity of 0.7 GB. As the entertainment industry evolved toward high-definition
video, the Blu-ray format emerged as the standard video format because it offered up to
50 GB of capacity.
Star River Electronics was one of the few CD manufacturers that had been able to
survive the many shakeouts created by the technological innovations in the industry. The
challenge in 2015 for all disc manufacturers was the movement of music and video
entertainment to online data streaming. Despite this challenge, however, Star River’s
volume sales had grown at a robust rate over the past two years. Sales to North
America had suffered, but sales to emerging-market countries had more than
compensated. Unit prices had declined because of price competition and the growing
popularity of streaming. Many industry experts were predicting declining demand and
further compression in margins in the CD and DVD segments, but stable-to-rising
demand for Blu-ray discs over the next few years. Star River management believed that
with its continued investment in production efficiency, the company was well positioned
to grow its Blu-ray revenues enough to offset the continuing declines in its DVD and CD
revenues over the next three to five years.
Financial Questions Facing Adeline Koh
That evening, Koh met with Andy Chin, a promising new associate whom she had
brought along from New Era Partners. Koh’s brief discussion with Chin went as follows:
KOH: Back at New Era, we looked at Star River as one of our most promising
venture-capital investments. Now it seems that such optimism may not be
warranted—at least until we get a solid understanding of the firm’s past
performance and its forecast performance. Did you have any success on this?
CHIN: Yes, the bookkeeper gave me these: the historical income statements
(Exhibit 25.1) and balance sheets (Exhibit 25.2) for the last four years. The
accounting system here is still pretty primitive. However, I checked a number of
the accounts, and they look orderly. So I suspect that we can work with these
figures. From these statements, I calculated a set of diagnostic ratios
(Exhibit 25.3).
EXHIBIT 25.1 | Historical Income Statements for Fiscal Year Ended June 30 (in SGD thousands)
The expected corporate tax rate was 24.5%.
Data source: Author estimates.
EXHIBIT 25.2 | Historical Balance Sheets for Fiscal Year Ended June 30 (in SGD thousands)
Short-term debt was borrowed from City Bank at an interest rate equal to Singaporean prime lending rate
+1.5%. Current prime lending rate was 5.35%. The benchmark 10-year Singapore treasury bond currently
yielded 2.30%.
Two components made up the company’s long-term debt. One was a SGD10 million loan that had been
issued privately in 2010 to New Era Partners and to Star River Electronics Ltd., UK. This debt was subordinate to
any bank debt outstanding. The second component was a SGD8.2 million public bond issuance on July 1, 2014,
with a five-year maturity and a coupon of 5.75% paid semiannually. The bond had recently traded at a price of
Data source: Monetary Authority of Singapore and author estimates.
EXHIBIT 25.3 | Ratio Analyses of Historical Financial Statements Fiscal Year Ended June 30
Data source: Author calculations.
Page 305
KOH: I see you have been busy. Unfortunately, I can’t study these right now. I need
you to review the historical performance of Star River for me, and to give me any
positive or negative insights that you think are significant.
CHIN: When do you need this?
KOH: At 7:00 A.M. tomorrow. I want to call on our banker tomorrow morning and
get an extension on Star River’s loan.
CHIN: The banker, Mr. Tan, said that Star River was “growing beyond its
financial capabilities.” What does that mean?
KOH: It probably means that he doesn’t think we can repay the loan within a
reasonable period. I would like you to build a simple financial forecast of our
performance for the next two years (ignore seasonal effects), and show me what
our debt requirements will be at the fiscal years ending 2016 and 2017. I think it is
reasonable to expect that Star River’s sales will grow at 4% each year. Also, you
should assume capital expenditures of SGD54.6 million for DVD and Blu-ray
manufacturing equipment, spread out over the next two years and depreciated over
seven years. Use whatever other assumptions seem appropriate to you, based on
your analysis of historical results. For this forecast, you should assume that any
external funding is in the form of bank debt.
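Chin's assignment can be sketched as a simple percent-of-sales forecast in which any cash shortfall is funded with bank debt (the "plug"). Only the 4% growth rate, the SGD54.6 million of capital expenditures, and the seven-year depreciation come from Koh's instructions; every base-year figure below is a hypothetical placeholder for the actual Exhibit 25.1 and 25.2 data.

```python
GROWTH = 0.04                # Koh's sales-growth assumption
ANNUAL_CAPEX = 54.6e6 / 2    # SGD 54.6 million spread evenly over FY2016-FY2017
DEP_LIFE = 7                 # new equipment depreciated straight-line over 7 years

# Hypothetical FY2015 base-year figures (placeholders for Exhibits 25.1-25.2):
sales = 106.0e6              # base-year sales, SGD
net_margin = 0.02            # assumed net profit margin
nwc_to_sales = 0.45          # assumed net working capital as a fraction of sales
existing_dep = 8.0e6         # assumed annual depreciation on existing assets
debt = 65.0e6                # assumed existing short- plus long-term debt

nwc = nwc_to_sales * sales
new_assets_in_service = 0.0
for year in (2016, 2017):
    sales *= 1 + GROWTH
    net_income = net_margin * sales                    # no dividends assumed
    new_assets_in_service += ANNUAL_CAPEX
    depreciation = existing_dep + new_assets_in_service / DEP_LIFE
    delta_nwc = nwc_to_sales * sales - nwc
    nwc += delta_nwc
    # Funding need = investment outflows minus operating cash flow
    borrowing = ANNUAL_CAPEX + delta_nwc - (net_income + depreciation)
    debt += borrowing
    print(f"FY{year}: sales SGD{sales/1e6:.1f}m, "
          f"new borrowing SGD{borrowing/1e6:.1f}m, total debt SGD{debt/1e6:.1f}m")
```

Under these placeholder inputs the model borrows in both years, which is the pattern Mr. Tan's "growing beyond its financial capabilities" remark suggests; the real exhibit figures would of course give different magnitudes.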
CHIN: But what if the forecasts show that Star River cannot repay the loan?
KOH: Then we’ll have to go back to Star River’s owners, New Era Partners and
Star River Electronics United Kingdom, for an injection of equity. Of course, New
Era Partners would rather not invest more funds unless we can show that the
returns on such an investment would be very attractive and/or that the survival of
the company depends on it. Thus, my third request is for you to examine what
returns on book assets and book equity Star River will offer in the next two years
and to identify the “key-driver” assumptions of those returns. Finally, let me have
your recommendations regarding operating and financial changes I should make
based on the historical analysis and the forecasts.
CHIN: The plant manager revised his request for a new packaging machine, which
would add SGD1.82 million to the 2016 capital expenditures budget. He believes
that these are the right numbers to make the choice between investing now or
waiting three years to buy the new packaging equipment (see the plant manager’s
memorandum in Exhibit 25.4). The new equipment can save significantly on labor
costs and will enhance the packaging options we can offer our customers.
However, adding SGD1.82 million to the capex budget may not be the best use of
our cash now. My hunch is that our preference between investing now versus
waiting three years will hinge on the discount rate.
EXHIBIT 25.4 | Lim’s Memo regarding New Packaging Equipment
KOH: [laughing] The joke in business school was that the discount rate was always
CHIN: That’s not what my business school taught me! New Era always uses a 40%
discount rate to value equity investments in risky start-up companies. But Star
River is well established now and shouldn’t require such a high-risk premium. I
managed to pull together some data on other Singaporean electronics companies
with which to estimate the required rate of return on equity (see Exhibit 25.5).
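With comparable-company data in hand, one standard (though not the only) way to estimate Star River's cost of equity is to unlever each comparable's beta, average the results, relever at Star River's own debt-to-equity ratio, and apply the CAPM. The risk-free rate, market premium, and tax rate below come from the case; the betas and leverage ratios are hypothetical stand-ins for the Exhibit 25.5 figures.

```python
RF = 0.023     # 10-year Singapore treasury yield (Exhibit 25.2 note)
MRP = 0.06     # equity market risk premium (Exhibit 25.5 note)
TAX = 0.245    # expected corporate tax rate (Exhibit 25.1 note)

# Hypothetical (equity beta, debt/equity) pairs standing in for Exhibit 25.5:
comparables = [(1.40, 0.30), (1.80, 0.70), (1.25, 0.10)]

# Unlever each comparable: beta_u = beta_e / (1 + (1 - t) * D/E)
unlevered = [b / (1 + (1 - TAX) * de) for b, de in comparables]
avg_unlevered = sum(unlevered) / len(unlevered)

STAR_RIVER_DE = 1.5    # assumed; would come from Exhibit 25.2 market values
beta_e = avg_unlevered * (1 + (1 - TAX) * STAR_RIVER_DE)
cost_of_equity = RF + beta_e * MRP
print(f"relevered beta = {beta_e:.2f}, cost of equity = {cost_of_equity:.1%}")
```

New Era's 40% venture hurdle rate would then be replaced by this much lower required return for an established firm.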
Descriptions of Companies
Sing Studios, Inc.
This company was founded 60 years ago. Its major business activities had been
production of original-artist recordings, management and production of rock-and-roll
road tours, and personal management of artists. It entered the CD-production
market in the 1980s, and only recently branched out into the manufacture of DVDs.
Wintronics, Inc.
This company was a spin-off from a large technology-holding corporation in 2001.
Although the company was a leader in the production of optical media, it had
recently suffered a decline in sales. Infighting among the principal owners had fed
concerns about the firm's prospects.

EXHIBIT 25.5 | Data on Comparable Companies
Note: NMF means not a meaningful figure. This arises when a company's earnings or projected earnings are negative.
Singapore's equity market risk premium could be assumed to be close to the global equity market premium of 6
percent, given Singapore's high rate of integration into global markets.
Data source: Author estimates.
STOR-Max Corp.
This company, founded only two years ago, had emerged as a very aggressive
competitor in the area of DVD and Blu-ray production. It was Star River's major
competitor, and its sales were at about the same level as Star River's.
Digital Media Corp.
This company had recently been an innovator in the production of Blu-ray discs.
Although optical-media manufacturing was not a majority of its business (film
production and digital animation were its main focus), the company was a
significant supplier to several major movie studios and was projected to become a
major competitor within the next three years.
Wymax, Inc.
This company was an early pioneer in the CD and DVD industries. Recently,
however, it had begun to invest in software programming and had been moving
away from disc production as its main focus of business.
KOH: Fine. Please estimate Star River’s weighted average cost of capital and
assess the packaging-machine investment. I would like the results of your analysis
tomorrow morning at 7:00.
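Koh's last two requests connect: a weighted average cost of capital, then the packaging-machine timing decision discounted at that rate. In the sketch below, only the tax rate and the bank borrowing rate (prime 5.35% + 1.5%) come from the case; the capital-structure weights, cost of equity, deferred machine price, and labor savings are illustrative assumptions, not Exhibit 25.4 figures.

```python
TAX = 0.245
cost_debt = 0.0535 + 0.015   # bank borrowing at prime + 1.5% (case figure)
cost_equity = 0.12           # assumed, e.g. from a CAPM estimate
wd, we = 0.55, 0.45          # assumed market-value weights of debt and equity

wacc = wd * cost_debt * (1 - TAX) + we * cost_equity

# Stylized buy-now-versus-wait comparison in the spirit of Exhibit 25.4:
price_now = 1.82e6           # machine cost if bought now (case figure)
price_later = 2.10e6         # assumed cost if the purchase is deferred three years
annual_savings = 0.25e6      # assumed labor savings enjoyed only if bought now
pv_now = -price_now + sum(annual_savings / (1 + wacc) ** t for t in (1, 2, 3))
pv_wait = -price_later / (1 + wacc) ** 3
print(f"WACC = {wacc:.1%}; PV(buy now) = {pv_now:,.0f}; PV(wait) = {pv_wait:,.0f}")
```

As Chin suspects, the preference can hinge on the discount rate: a higher rate makes the deferred outlay cheaper in present-value terms while shrinking the value of the near-term savings, so the ranking can flip.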
PART 6 Management of the Corporate Capital Structure
CASE 26 Rockboro Machine Tools Corporation
On September 15, 2015, Sara Larson, chief financial officer (CFO) of Rockboro
Machine Tools Corporation (Rockboro), paced the floor of her Minnesota office. She
needed to submit a recommendation to Rockboro’s board of directors regarding the
company’s dividend policy, which had been the subject of an ongoing debate among the
firm’s senior managers. Larson knew that the board was optimistic about Rockboro’s
future, but there was a lingering uncertainty regarding the company’s competitive
position. Like many companies following the “great recession” of 2008 and 2009,
Rockboro had succeeded in recovering revenues back to prerecession levels. Unlike
most other companies, however, Rockboro had not been able to recover its profit
margins, and without a much-improved cost structure, it would be difficult for Rockboro
to compete with the rising foreign competition, primarily from Asia. The
board's optimism was fueled by signs that the two recent
restructurings would likely return Rockboro to competitive profit margins and allow the
company to compete for its share of the global computer-aided design and
manufacturing (CAD/CAM) market.
There were two issues that complicated Larson’s dividend policy recommendation.
First, she had to consider that over the past four years Rockboro shareholders had
seen no capital gain on their investment (i.e., the current stock price of
$15.25 was exactly the same as it had been on September 15, 2011). The only return
shareholders had received was dividends, which amounted to an average annual return
of 2.9% and compared poorly to an annual return of 12.9% earned by the average stock
over the same period. The second complication was that the 2008 recession had
prompted a number of companies to repurchase shares either in lieu of or in addition to
paying a dividend. A share repurchase was considered a method for management and
the board to signal confidence in their company and was usually greeted with a stock
price increase when announced. Rockboro had repurchased $15.8 million of
shares in 2009, but had not used share buybacks since then. Larson recognized,
therefore, that her recommendation needed to include whether to use company funds to
buy back stock, pay dividends, do both, or do neither.
Background on the Dividend Question
Prior to the recession of 2008, Rockboro had enjoyed years of consistent earnings and
predictable dividend growth. As the financial crisis was unfolding, Rockboro’s board
decided to maintain a steady dividend and to postpone any dividend increases until
Rockboro's future became more certain. That policy had proven expensive: earnings
recovered much more slowly than hoped, and the dividend payout ratio rose above
50% for the years 2009 through 2011. To address the profit-margin issue, management
implemented two extensive restructuring programs, both of which were accompanied by
net losses. Dividends were maintained at $0.64/share until the second restructuring in
2014, when dividends were reduced by half for the year. For the first two quarters of
2015, the board declared no dividend. But in a special letter to shareholders, the board
committed itself to resuming payment of the dividend “as soon as possible—ideally,
sometime in 2015.”
In a related matter, senior management considered embarking on a campaign of
corporate-image advertising, together with changing the name of the corporation to
“Rockboro Advanced Systems International, Inc.” Management believed that the name
change would help improve the investment community’s perception of the company.
Overall, management’s view was that Rockboro was a resurgent company that
demonstrated great potential for growth and profitability. The restructurings had
revitalized the company's operating divisions. In addition, a newly developed software
product promised to move the company beyond its machine-tool business into licensing
of its state-of-the-art design software, which provided significant efficiencies for users,
was being well received in the market, and was expected to render many competitors'
products obsolete. Many within the company viewed 2015 as the dawning
of a new era, which, in spite of the company’s recent performance, would turn
Rockboro into a growth stock.
Out of this combination of a troubled past and a bright future arose Larson’s
dilemma. Did the market view Rockboro as a company on the wane, a blue-chip stock,
or a potential growth stock? How, if at all, could Rockboro affect that perception?
Would a change of name help to positively frame investors’ views of the firm? Did the
company’s investors expect capital growth or steady dividends? Would a stock buyback
affect investors’ perceptions of Rockboro in any way? And, if those questions could be
answered, what were the implications for Rockboro’s future dividend policy?
The Company
Rockboro was founded in 1923 in Concord, New Hampshire, by two mechanical
engineers, James Rockman and David Pittsboro. The two men had gone to school
together and were disenchanted with their prospects as mechanics at a farm-equipment manufacturer.
In its early years, Rockboro had designed and manufactured a number of
machinery parts, including metal presses, dies, and molds. In the 1940s, the
company’s large manufacturing plant produced armored-vehicle and tank parts and
miscellaneous equipment for the war effort, including riveters and welders. After the
war, the company concentrated on the production of industrial presses and molds, for
plastics as well as metals. By 1975, the company had developed a reputation as an
innovative producer of industrial machinery and machine tools.
In the early 1980s, Rockboro entered the new field of computer-aided design and
computer-aided manufacturing (CAD/CAM). Working with a small software company, it
developed a line of presses that could manufacture metal parts by responding to
computer commands. Rockboro merged the software company into its operations and,
over the next several years, perfected the CAM equipment. At the same time, it
developed a superior line of CAD software and equipment that allowed an engineer to
design a part to exacting specifications on a computer. The design could then be entered
into the company’s CAM equipment, and the parts could be manufactured without the
use of blueprints or human interference. By the end of 2014, CAD/CAM equipment and
software were responsible for about 45% of sales; presses, dies, and molds made up
40% of sales; and miscellaneous machine tools were 15% of sales.
Most press-and-mold companies were small local or regional firms with a limited
clientele. For that reason, Rockboro stood out as a true industry leader. Within the
CAD/CAM industry, however, a number of larger firms, including Autodesk, Inc.,
Cadence Design, and Synopsys, Inc., competed for dominance of the growing market.
Throughout the 1990s and into the first decade of the 2000s, Rockboro helped set
the standard for CAD/CAM, but the aggressive entry of large foreign firms into
CAD/CAM had dampened sales. Technological advances and significant investments
had fueled the entry of highly specialized, state-of-the-art CAD/CAM firms. By 2009,
Rockboro had fallen behind its competition in the development of user-friendly
software and the integration of design and manufacturing. As a result, revenues had
barely recovered beyond the prerecession high of $1.07 billion in 2008, reaching $1.13
billion in 2014, and profit margins were being compressed because the company was
having difficulty containing costs.
To combat the weak profit margins, Rockboro took a two-pronged approach. First, a
much larger share of the research-and-development budget was devoted to CAD/CAM,
in an effort to reestablish Rockboro’s leadership in the field. Second, the company
underwent two massive restructurings. In 2012, it sold three unprofitable business lines
and two plants, eliminated five leased facilities, and reduced personnel. Restructuring
costs totaled $98 million. Then, in 2014, the company began a second round of
restructuring by refocusing its sales and marketing approach and adopting administrative
procedures that allowed for a further reduction in staff and facilities. The total cost of
the operational restructuring in 2014 was $134 million.
The company’s recent financial statements (Exhibits 26.1 and 26.2) revealed that
although the restructurings produced losses totaling $303 million, the projected results
for 2015 suggested that the restructurings and the increased emphasis on new product
development had launched a turnaround. Not only was the company becoming leaner,
but also the investment in research and development had led to a breakthrough in
Rockboro’s CAD/CAM software that management believed would redefine the
industry. Known as the Artificial Intelligence Workforce (AIW), the system
was an array of advanced control hardware, software, and applications that
continuously distributed and coordinated information throughout a plant. Essentially,
AIW allowed an engineer to design a part on CAD software and input the data into
CAM equipment that controlled the mixing of chemicals or the molding of parts from
any number of different materials on different machines. The system could also
assemble and can, box, or shrink-wrap the finished product. As part of the licensing
agreements for the software, Rockboro engineers provided consulting to specifically
adapt the software to each client’s needs. Thus regardless of its complexity, a product
could be designed, manufactured, and packaged solely by computer. Most importantly,
however, Rockboro’s software used simulations to test new product designs prior to
production. This capability was enhanced by the software's ability to improve the
design based on statistical inferences drawn from Rockboro's large proprietary database.
EXHIBIT 26.1 | Consolidated Income Statements (dollars in thousands, except per-share data)
Note: The dividends in 2015 assume a payout ratio of 40%.
Source: Author estimates.

EXHIBIT 26.2 | Consolidated Balance Sheets (dollars in thousands)
Note: Projections assume a dividend-payout ratio of 40%.
Source: Author estimates.

Rockboro had developed AIW applications for the chemicals industry and for the
oil- and gas-refining industries in 2014 and, by the next year, it would complete
applications for the trucking, automobile-parts, and airline industries. By October 2014,
when the first AIW system was shipped, Rockboro had orders totaling $115 million. By
year-end 2014, the backlog had grown to $150 million. The future for the product
looked bright. Several securities analysts were optimistic about the product's impact on
the company. The following comments paraphrase their thoughts:
The Artificial Intelligence Workforce system has compelling advantages over competing
entries, which will enable Rockboro to increase its share of a market that, ignoring
periodic growth spurts, will expand at a real annual rate of about 5% over the next
several years.
Rockboro’s engineering team is producing the AIW applications at an impressive
rate, which will help restore margins to levels not seen in years.
The important question now is how quickly Rockboro will be able to sell licenses
in volume. Start-up costs, which were a significant factor in last year’s deficits, have
continued to penalize earnings. Our estimates assume that adoption rates will proceed
smoothly from now on and that AIW will have gained significant market share by year-end.
Rockboro’s management expected domestic revenues from the Artificial Intelligence
Workforce series to total $135 million in 2015 and $225 million in 2016. Thereafter,
growth in sales would depend on the development of more system applications and the
creation of system improvements and add-on features. International sales through
Rockboro’s existing offices in Frankfurt, London, Milan, and Paris and new offices in
Hong Kong, Shanghai, Seoul, Manila, and Tokyo were expected to help meet foreign
competition head on and to provide additional revenues of $225 million by as early as
2017. Currently, international sales accounted for approximately 15% of total corporate revenues.
Two factors that could affect sales were of some concern to management. First,
although Rockboro had successfully patented several of the processes used by the AIW
system, management had received hints through industry observers that two strong
competitors were developing comparable systems and would probably introduce them
within the next 12 months. Second, sales of molds, presses, machine tools, and
CAD/CAM equipment and software were highly cyclical, and current
predictions about the strength of the United States and other major economies
were not encouraging. As shown in Exhibit 26.3, real GDP (gross domestic product)
growth was expected to reach 2.9% by 2016, and industrial production, which had
improved significantly for 2014 to 4.2% growth, was projected to decline in 2015
before recovering to 3.6% by 2016. Despite the lukewarm macroeconomic environment,
Rockboro’s management remained optimistic about the company’s prospects because of
the successful introduction of the AIW series.
Corporate Goals
A number of corporate objectives had grown out of the restructurings and recent
technological advances. First and foremost, management wanted and expected revenues
to grow at an average annual compound rate of 15%. With the improved cost structure,
profit growth was expected to exceed top-line growth. A great deal of corporate
planning had been devoted to the growth goal over the past three years and, indeed,
second-quarter financial data suggested that Rockboro would achieve revenues of about
$1.3 billion in 2015. If Rockboro achieved a 15% compound rate of revenue growth
through 2021, the company would reach $3.0 billion in sales and $196 million in
net income.

EXHIBIT 26.3 | Economic Indicators and Projections (all numbers are percentages)
Data source: "Value Line Investment Survey," August 2015.
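The revenue goal is straightforward compound-growth arithmetic; a quick check, using the roughly $1.3 billion 2015 base stated above and the 6.5% net margin the forecast later in the case assumes:

```python
# $1.3 billion in 2015 revenue compounding at 15% for six years to 2021:
sales_2015 = 1.3e9
sales_2021 = sales_2015 * 1.15 ** 6
print(f"2021 sales: ${sales_2021 / 1e9:.2f} billion")      # ~$3.0 billion

# At the 6.5% net margin projected for 2020-2021, net income lands close to
# the $196 million figure cited in the case:
print(f"2021 net income: ${0.065 * sales_2021 / 1e6:.0f} million")
```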
In order to achieve their growth objective, Rockboro management proposed a
strategy relying on three key points. First, the mix of production would shift
substantially. CAD/CAM, with emphasis on the AIW system, would account for three-quarters
of sales, while the company's traditional presses and molds would account for
the remainder. Second, the company would expand aggressively in the global markets,
where it hoped to obtain half of its sales and profits by 2021. This expansion would be
achieved through opening new field sales offices around the world, including Hong
Kong, Shanghai, Seoul, Manila, and Tokyo. Third, the company would expand through
joint ventures and acquisitions of small software companies, which would provide half
of the new products through 2021; in-house research would provide the other half.
The company had had an aversion to debt since its inception. Management believed
that a small amount of debt, primarily to meet working-capital needs, had its place, but
anything beyond a 40% debt-to-equity ratio was, in the oft-quoted words of Rockboro
cofounder David Pittsboro, “unthinkable, indicative of sloppy management, and flirting
with trouble.” Senior management was aware that equity was typically more costly than
debt, but took great satisfaction in the company “doing it on its own.” Rockboro’s
highest debt-to-capital ratio in the past 25 years (28%) had occurred in 2014 and was
still the subject of conversations among senior managers.
Although 11 members of the Rockman and the Pittsboro families owned 13% of the
company’s stock and three were on the board of directors, management placed the
interests of the outside shareholders first (Exhibit 26.4). Stephen Rockman, board chair
and grandson of the cofounder, sought to maximize growth in the market value of the
company’s stock over time. At 61, Rockman was actively involved in all aspects of the
company’s growth. He dealt fluently with a range of technical details of Rockboro’s
products and was especially interested in finding ways to improve the company’s
domestic market share. His retirement was no more than four years away, and
he wanted to leave a legacy of corporate financial strength and technological
achievement. The Artificial Intelligence Workforce, a project that he had taken under his
wing four years earlier, was finally beginning to bear fruit. Rockman now wanted to
ensure that the firm would also soon be able to pay a dividend to its shareholders.
Rockman took particular pride in selecting and developing promising young
managers. Sara Larson had a bachelor’s degree in electrical engineering and had been a
systems analyst for Motorola before attending graduate school. She had been hired in
2005, fresh out of a well-known MBA program. By 2014, she had risen to the position
of CFO.
Dividend Policy
Before 2009, Rockboro’s earnings and dividends per share had grown at a relatively
steady pace (Exhibit 26.5).

EXHIBIT 26.4 | Comparative Stockholder Data, 2004 and 2014 (in thousands of shares)
Note: The investor-relations department identified these categories from company records. The type of institutional
investor was identified from promotional materials stating the investment goals of the institutions. The type of individual
investor was identified from a survey of subsamples of investors.
Source: Author estimates.

Following the recession, cost-control problems became apparent because earnings
were not able to rebound to prerecession levels. The board
maintained dividends at $0.64 per year until 2014 when the restructuring expenses led
to the largest per-share earnings loss in the firm’s history. To conserve cash, the board
voted to pare back dividends by 50% to $0.32 a share—the lowest dividend since
1998. Paying any dividend with such high losses effectively meant that Rockboro had to
borrow to pay the dividend. In response to the financial pressure, the directors elected
to not declare a dividend for the first two quarters of 2015. In a special letter to
shareholders, however, the directors declared their intention to continue the annual
payout later in 2015.
EXHIBIT 26.5 | Per-Share Financial and Stock Data
nmf = not a meaningful figure.
Adjusted for a 3-for-2 stock split in January 1995 and a 50% stock dividend in June 2007.
EPS = earnings per share; CPS = cash earnings per share; DPS = dividend per share.
Source: Author estimates.

In August 2015, Larson was considering three possible dividend policies:

Zero-dividend payout: A zero payout could be justified in light of the firm's strategic
emphasis on advanced technologies and CAD/CAM, which demanded huge cash
requirements to succeed. The proponents of this policy argued that it would signal that
the firm now belonged in a class of high-growth and high-technology firms. Some
securities analysts wondered whether the market still considered Rockboro a
traditional electrical-equipment manufacturer or a more technologically advanced
CAD/CAM company. The latter category would imply that the market expected strong
capital appreciation, but perhaps little in the way of dividends. Others cited
Rockboro’s recent performance problems. One questioned the “wisdom of ignoring
the financial statements in favor of acting like a blue chip.” Was a high dividend in the
long-term interests of the company and its stockholders, or would the strategy backfire
and make investors skittish?
40% dividend payout or a quarterly dividend of around $0.10 a share: This option
would restore the firm to an implied annual dividend payment of $0.40 a share, higher
than 2014’s dividend of $0.32, but still less than the $0.64 dividend paid in 2013.
Proponents of this policy argued that such an announcement was justified by expected
increases in orders and sales. Rockboro’s investment banker suggested that the stock
market would reward a strong dividend that would bring the firm’s payout back in line
with the 40% average within the electrical-industrial-equipment industry. Some
directors agreed and argued that it was important to send a strong signal to
shareholders, and that a large dividend (on the order of a 40% payout) would
suggest that the company had conquered its problems and that its directors were
confident of its future earnings. Finally, some older directors opined that a growth rate
in the range of 10% to 20% should accompany a dividend payout of between 30% and
50%, but not all supported the idea of borrowing to fuel the growth and support that
level of dividend.
Larson recalled a recently published study reporting that firms had increased their
payout ratios to an average of 38% for Q2 2015, from a low of 27% in Q1 2011.
Also, the trend since the recession was for more companies to pay dividends. For the
S&P 500, about 360 companies paid dividends in Q1 2010 compared to 418 in Q2
2015. Viewed in that light, perhaps the market would expect Rockboro to follow the
crowd and would react negatively if Rockboro did not reinstitute a positive dividend payout.
Residual-dividend payout: A few members of the finance department argued that
Rockboro should pay dividends only after it had funded all the projects that offered
positive net present values (NPV). Their view was that investors paid managers to
deploy their funds at returns better than they could otherwise achieve, and that, by
definition, such investments would yield positive NPVs. By deploying funds into those
projects and returning otherwise unused funds to investors in the form of dividends,
the firm would build trust with investors and be rewarded through higher valuation.
Another argument in support of that view was that the particular dividend policy
was “irrelevant” in a growing firm: any dividend paid today would be offset by
dilution at some future date by the issuance of shares needed to make up for the
dividend. This argument reflected the theory of dividends in a perfect market
advanced by two finance professors, Merton Miller and Franco Modigliani. To Sara
Larson, the main disadvantage of this policy was that dividend payments would be
unpredictable. In some years, dividends could even be cut to zero, possibly imposing
negative pressure on the firm’s share price. Larson was all too aware of Rockboro’s
own share-price collapse following its dividend cut. She recalled a study by another
finance professor, John Lintner, which found that firms’ dividend payments tended to
be “sticky” upward—that is, dividends would rise over time and rarely fall, and that
mature, slower-growth firms paid higher dividends, while high-growth firms paid
lower dividends.
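Lintner's stickiness finding is commonly summarized by a partial-adjustment model, in which a firm closes only a fraction of the gap between last year's dividend and a target payout each year. A minimal sketch with assumed parameters (not Rockboro figures):

```python
def lintner_dividend(prev_div, eps, target_payout=0.40, adjustment_speed=0.3):
    """Lintner partial adjustment: D_t = D_{t-1} + speed * (payout * EPS_t - D_{t-1})."""
    return prev_div + adjustment_speed * (target_payout * eps - prev_div)

# If EPS jumps from $1.00 to $2.00 and stays there, the dividend drifts only
# gradually toward the $0.80 target rather than jumping at once:
div = 0.40
for _ in range(3):
    div = lintner_dividend(div, 2.00)
    print(f"{div:.3f}")     # prints 0.520, then 0.604, then 0.663
```

The small adjustment speed is what makes dividends "sticky": they rise slowly toward the target and, symmetrically, managers resist cutting them when earnings fall.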
In response to the internal debate, Larson’s staff pulled together comparative
information on companies in three industries—CAD/CAM, machine tools, and
electrical-industrial equipment—and a sample of high- and low-payout companies
(Exhibits 26.6 and 26.7). To test the feasibility of a 40% dividend-payout rate,
Larson developed a projected sources-and-uses-of-cash statement
(Exhibit 26.8). She took an optimistic approach by assuming that the company would
grow at a 15% compound rate, that margins would improve steadily, and that the firm
would pay a dividend of 40% of earnings every year. In particular, the forecast assumed
that the firm’s net margin would gradually improve from 4.0% in 2015 to 6.5% in 2020
and 2021. The firm’s operating executives believed that this increase in profitability
was consistent with economies of scale and the higher margins associated with the
Artificial Intelligence Workforce series.
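The core of Larson's projection can be sketched directly from the stated assumptions (15% growth, a net margin ramping from 4.0% to 6.5%, a 40% payout). The working-capital, capex, and borrowing lines of Exhibit 26.8 are omitted, so this shows only the earnings and dividend path; the intermediate margin steps are an assumed smooth ramp between the 4.0% and 6.5% endpoints stated in the case.

```python
GROWTH = 0.15
PAYOUT = 0.40
margins = {2015: 0.040, 2016: 0.045, 2017: 0.050,   # assumed smooth ramp
           2018: 0.055, 2019: 0.060, 2020: 0.065, 2021: 0.065}

sales = 1.3e9 / (1 + GROWTH)   # back out 2014 so 2015 lands at ~$1.3 billion
for year, margin in margins.items():
    sales *= 1 + GROWTH
    net_income = margin * sales
    dividends = PAYOUT * net_income
    retained = net_income - dividends
    print(f"{year}: sales ${sales / 1e9:.2f}B, net income ${net_income / 1e6:.0f}M, "
          f"dividends ${dividends / 1e6:.0f}M, retained ${retained / 1e6:.0f}M")
```

Whether the 40% payout is feasible then turns on whether retained earnings plus acceptable borrowing can cover the investment needs that this sketch leaves out.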
EXHIBIT 26.6 | Comparative Industry Data
nmf = not a meaningful figure.
Rockboro cash flow growth calculations use an adjusted cash flow for 2014 that omits the restructuring costs.
Based on book values.
Data source: “Value Line Investment Survey,” February 2016.
EXHIBIT 26.7 | Selected Healthy Companies with High- and Zero-Dividend Payouts
A master limited partnership (MLP) paid no corporate taxes. All income taxes were paid by shareholders on their share
of taxable earnings.
Data source: "Value Line Investment Survey," August 2015.

EXHIBIT 26.8 | Projected Sources-and-Uses Statement Assuming a 40% Payout Ratio (dollars in thousands)
This analysis ignores the effects of borrowing on interest expense.
Source: Author estimates.

Image Advertising and Name Change
As part of a general review of the firm's standing in the financial markets, Rockboro's
director of investor relations, Maureen Williams, had concluded that investors
misperceived the firm's prospects and that the firm's current name was more consistent
with its historical product mix and markets than with those projected for the future.
Williams commissioned surveys of readers of financial magazines, which revealed a
relatively low awareness of Rockboro and its business. Surveys of stockbrokers
revealed a higher awareness of the firm, but a low or mediocre outlook on Rockboro's
likely returns to shareholders and its growth prospects. Williams retained a consulting
firm that recommended a program of corporate-image advertising targeted toward
guiding the opinions of institutional and individual investors. The objective was to
enhance the firm’s visibility and image. Through focus groups, the image consultants
identified a new name that appeared to suggest the firm’s promising new strategy:
Rockboro Advanced Systems International, Inc. Williams estimated that the imageadvertising
campaign and name change would cost approximately $15 million.
Stephen Rockman was mildly skeptical. He said, “Do you mean to raise our stock
price by ‘marketing’ our shares? This is a novel approach. Can you sell claims on a
company the way Procter & Gamble markets soap?” The consultants could give no
empirical evidence that stock prices responded positively to corporate-image
campaigns or name changes, though they did offer some favorable anecdotes.
Larson was in a difficult position. Board members and management disagreed on the
very nature of Rockboro’s future. Some managers saw the company as entering a new
stage of rapid growth and thought that a large (or, in the minds of some, any) dividend
would be inappropriate. Others thought that it was important to make a strong public
gesture showing that management believed that Rockboro had turned the corner and was
about to return to the levels of growth and profitability seen prior to the last five to six
years. This action could only be accomplished through a dividend. Then there was the
confounding question about the stock buyback. Should Rockboro use its funds to repurchase shares instead of paying out a dividend? As Larson wrestled with the
different points of view, she wondered whether Rockboro’s management might be
representative of the company’s shareholders. Did the majority of public shareholders own stock for the same reason, or were their reasons just as diverse as those of management?
Page 329
In this Internet age, the consumer is using music content more than ever before—
whether that’s playlisting, podcasting, personalizing, sharing, downloading or just
simply enjoying it. The digital revolution has caused a complete change to the
culture, operations, and attitude of music companies everywhere. It hasn’t been easy,
and we must certainly continue to fight piracy in all its forms. But there can be no
doubt that with even greater commitment to innovation and a true focus on the
consumer, digital distribution is becoming the best thing that ever happened to the
music business and the music fan.
—Eric Nicoli, CEO, EMI Group
In early spring of 2007, Martin Stewart drove through the darkened streets of
Kensington in West London. As chief financial officer (CFO) for global music giant
EMI, Stewart already knew most of the news that would break at the company’s April
18 earnings announcement. Annual underlying revenue for the company was down 16%
to GBP 1.8 billion (British pounds). Earnings per share (EPS) had also dropped from
10.9 pence (p) in 2006 to −36.3p in FY2007 (fiscal year). Those disappointing numbers
were roughly in line with the guidance Stewart had given investors in February. The
performance reflected the global decline in music industry revenues, as well as the
extraordinary cost of the restructuring program EMI was pursuing to realign its
investment priorities and focus its resources to achieve the best returns in the future.
The earnings announcement would include an announcement of the dividend amount,
which had not yet been determined. The board would meet soon to review EMI’s annual
results, and Stewart was to recommend an appropriate final dividend for the fiscal year.
On an annual basis, EMI had consistently paid an 8p-per-share dividend to
ordinary shareholders since 2002 (Exhibit 27.1). Now in light of EMI’s recent
performance, Stewart questioned whether EMI should continue to maintain what would
represent a combined GBP 63-million annual dividend payment. Although omitting the
dividend would preserve cash, Stewart appreciated the negative effect the decision
might have on EMI’s share price, which was currently at 227p. Stewart recognized that
EMI faced considerable threat of a takeover. Although its board had recently been able
to successfully reject an unsolicited 260p-per-share merger offer from U.S. rival
Warner Music, there remained considerable outside interest in taking over EMI. It
seemed that boosting EMI’s share price was imperative if EMI was to maintain its independence.
With a storied history that included such names as the Beatles, the Beach Boys, Pink
Floyd, and Duran Duran, it was not difficult to understand why EMI considered its
current and historical catalog of songs and recordings among the best in the world. EMI,
Warner Music Group, Sony BMG Music Entertainment, and Universal Music Group,
collectively known as “the majors,” dominated the music industry in the early 21st century and accounted for more than two-thirds of the world’s recorded music and publishing sales.
EXHIBIT 27.1 | Financial and Stock Data per Share (in pence)
Stock price data is for the fiscal year period. For example, 2007 data is from April 1, 2006 to March 31, 2007. Stock price data was available for 2001 only from May 15, 2000 to March 31, 2001.
Sources of data: Company Web site and Yahoo! Finance.
publishing sales. Exhibit 27.2 contains a list of the global top-10 albums with their
respective record labels for the last four years.
EXHIBIT 27.2 | Top-10 Albums for 2003 to 2006 (physical sales only)

Recorded music and music publishing were the two main revenue drivers for the
music industry. EMI divided its organization into two corresponding divisions. EMI
Music, the recorded-music side, sought out artists it believed would be long-term
commercial recording successes. Each EMI record label marketed its artist’s recordings
to the public and sold the releases through a variety of retail outlets. EMI’s extensive
music catalog consisted of more than 3 million songs. Recorded-music division sales
came from both new and old recordings with existing catalog albums constituting 30%
to 35% of the division’s unit sales. Exhibit 27.3 contains a list of EMI’s most successful
recording artists in FY2007.
Source of data: International Federation of Phonographic Industry (IFPI) Web site.
EXHIBIT 27.3 | EMI Top Recording and Publishing Successes in Fiscal Year 2007
All sales figures are for the 12 months ended March 31, 2007. Unit sales include digital albums and 1 digital track album
Page 331
EMI Music Publishing focused not on recordings but on the songs themselves.
Generally, there were three categories of publishing-rights ownership in the music
industry: the lyric’s author, the music’s composer, and the publisher who acquired the
right to exploit the song. These publishing-rights owners were entitled to royalties
whenever and however their music was used. Music publishers categorized their
revenue streams as mechanical royalties (sales of recorded music), performance
royalties (performances of a song on TV, radio, at a live concert, or in other public
venues such as bars), and synchronization royalties (use of a song in audiovisual works
such as advertisements or computer games). EMI included a fourth category of royalties
labeled “other,” which included sales of sheet music and, increasingly, mobile ring
tones and ring backs. Similar to the recorded-music division, the music-publishing
division identified songwriters with commercial potential and signed them to
long-term contracts. The division then assisted the songwriters in marketing
their works to record companies and other media firms. EMI’s current publishing
catalog encompassed more than 1 million musical compositions. Exhibit 27.3 includes
a list of EMI’s most-successful songwriters in FY2007. EMI’s publishing business
generated one-fourth of the total group revenue. Revenue in the publishing business was
stable, and operating profits were positive.
In addition to seeking out and signing flourishing recording artists and songwriters
to long-term agreements, both EMI divisions also expanded and enhanced their
individual catalogs and artist rosters by strategic transactions. Two key acquisitions for
EMI’s recorded-music division were the 1955 acquisition of a leading American record
label, Capitol Records, and the 1992 acquisition of Virgin Music Group, then the largest
independent record label. Together the transactions added such key recording stars as Frank Sinatra, Nat King Cole, Janet Jackson, and the Rolling Stones. The music-publishing division similarly targeted existing publishing assets with large, proven commercial potential, such as the purchase in various stages of Motown founder Berry Gordy’s music catalog in 1997, 2003, and 2004.
Source of data: Company annual report.
Since the company’s founding in 1897, EMI’s model had been that of “constantly
seeking to expand their catalog, with the hits of today forming the classics of
tomorrow.” Both divisions pursued the goal of having the top-selling artists and
songwriters and the deepest, most-recognized catalog assets. EMI welcomed
technological innovations, which often drove increased music sales as consumers
updated their music collections with the latest music medium (e.g., replacing an LP or
cassette with the same recording on compact disc). But the latest technology, digital
audio on the Internet, was different and revolutionary. Digital audio on the Internet
demanded rethinking the business model of all the majors, including EMI.
Digital Audio and the Music Industry
Digital audio had been around since the advent of the compact disc (CD) in the early
1980s, but the 1990s combination of digital audio, Internet, and MP3 file format brought
the music industry to a new crossroads. The MP3 format had nearly the same sound
quality as CDs, but its small file size allowed it to be easily downloaded from the
Internet, stored on a computer hard drive, and transferred to a digital audio player,
generally referred to as an MP3 player.
Peer-to-peer file-sharing Internet services, most notably Napster, emerged in the late
1990s. First available in mid-1999, Napster facilitated the exchange of music files. The
use of Napster’s file-sharing program exploded, and Napster claimed 20 million users
by July 2000. Napster’s swift growth did not go unnoticed by the music industry. While
the Recording Industry Association of America (RIAA) was eventually successful in using the court system to force Napster to remove copyrighted material, it did not stop peer-to-peer file sharing. New services were quickly developed to replace Napster. The International Federation of the Phonographic Industry (IFPI), an organization representing the recording industry worldwide, estimated that almost 20 billion songs were downloaded illegally in 2005.
Page 332
EMI was an early presence on the Internet in 1993. In 1999, EMI artist David
Bowie’s album, hours …, was the first album by a major recording artist to be released
for download from the Internet. None of the record labels were prepared, however, for
how quickly peer-to-peer file sharing would change the dynamics of the music industry
and become a seemingly permanent thorn in the music industry’s side. In the wake of
Napster’s demise, the music labels, including EMI, attempted various subscription
services, but most failed for such reasons as cost, CD-burning restrictions, and
incompatibility with available MP3 players. Only in the spring of 2003, when Apple
launched its user-friendly Web site, iTunes Music Store, did legitimate digital-audio
sales really take off in the United States, the world’s largest music market. Apple began
to expand iTunes globally in 2004 and sold its one-billionth download in February
2006. According to the IFPI, there were 500 legitimate on-line music services in more
than 40 countries by the beginning of 2007, with $2 billion in digital music sales in 2006.
Despite the rise of legally downloaded music, the global music market continued to
shrink due to the rapid decline in physical sales. Nielsen SoundScan noted that total
album units sold (excluding digital-track equivalents) declined almost 25% from 2000
to 2006. IFPI optimistically predicted that digital sales would compensate for the
decrease in physical sales in 2006, yet in early 2007, IFPI admitted that this “holy grail”
had not yet occurred, with 2006 overall music sales estimated to have declined by 3%.
IFPI now hoped that digital sales would offset the decline in physical sales in 2007.
Page 333
Credit Suisse’s Global Music Industry Forecasts incorporated this view with a
relatively flat music market in 2007 and minor growth of 1.1% to 1.5% in 2008 and
2009. The Credit Suisse analyst also noted that the music industry’s operating margins
were expected to rise as digital sales became more significant and related production
and distribution costs declined. Lehman Brothers was more conservative, assuming a
flat market for the next few years and commenting that the continued weakness in early
2007 implied that the “market could remain tough for the next couple of years.”
Many in the industry feared that consumers’ ability to unbundle their music
purchases—to purchase two or three favorite songs from an album on-line versus the
entire album at a physical retail store—would put negative pressure on music sales for
the foreseeable future. A Bear Stearns research report noted:
While music consumption, in terms of listening time, is increasing as the iPod and
other portable devices have become mass-market products, the industry has still
not found a way of monetizing this consumption. Instead, growing piracy and the
unbundling of the album, combined with the growing power of big
retailers in the physical and iTunes in the digital worlds, have left the
industry in a funk. There is no immediate solution that we are aware of on the
horizon and in our view, visibility on sales remains poor.
Recent Developments at EMI
The last few years had been incredibly difficult, particularly within EMI’s recorded-music division, where revenues had declined 27% from GBP 2,282 million in 2001 to
GBP 1,660 million in 2006. (Exhibits 27.4 and 27.5 show EMI’s financial statements
through FY2007.) Fortunately, downloadable digital audio did not have a similar
ruinous effect on the publishing division. EMI’s publishing sales were a small buffer for
the company’s performance and hovered in a tight range of GBP 420 million to GBP 391
million during that period. CEO Eric Nicoli’s address at the July 2006 annual general
meeting indicated good things were in store for EMI in both the short term and the long
term. Nicoli stressed EMI’s exciting upcoming release schedules, growth in digital
sales, and success with restructuring plans.
EXHIBIT 27.4 | Consolidated Income Statements (in millions GBP, except per-share data)
Underlying EBITDA is group profit from operations before depreciation, operating exceptional items and amortization.
Underlying profit before taxes (PBT) is before exceptional items and amortization.
Net borrowings is the sum of long-term and short-term borrowings including finance leases less cash, cash
equivalents, and liquid funds investments.
Interest cover is underlying EBITDA (before exceptional items) divided by finance charges (excluding nonstandard
Dividend cover is underlying diluted earnings per ordinary share divided by dividend declared per ordinary share.
EMI noted the company targeted an ongoing dividend cover of 2.0× in its 2004 annual report.
Sources of data: Company annual reports and Web site.
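The coverage measures defined in the notes above reduce to simple ratios. The sketch below applies those definitions to illustrative inputs; the finance-charge and per-share-earnings figures used here are hypothetical, not EMI’s reported values:

```python
# Coverage ratios as defined in the exhibit notes.
# Input figures are illustrative/hypothetical, not EMI's reported values.

def interest_cover(underlying_ebitda, finance_charges):
    """Underlying EBITDA (before exceptional items) / finance charges."""
    return underlying_ebitda / finance_charges

def dividend_cover(underlying_diluted_eps, dividend_per_share):
    """Underlying diluted EPS / dividend declared per ordinary share."""
    return underlying_diluted_eps / dividend_per_share

print(interest_cover(174.0, 58.0))  # hypothetical finance charges -> 3.0x
print(dividend_cover(16.0, 8.0))    # hypothetical EPS -> 2.0x, EMI's stated target
```

A dividend cover below 1.0 means the dividend declared exceeds underlying earnings, which is the situation the board faced in FY2007.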
EXHIBIT 27.5 | Consolidated Balance Sheets (in millions GBP)
EMI’s digital sales were growing and represented an increasingly large percentage
of total revenues. In 2004, EMI generated group digital revenues of GBP 15 million,
which represented just less than 1% of total group revenues. By 2006, EMI had grown
the digital revenue to GBP 112 million, which represented 5.4% of total group
revenues. The expected 2007 digital sales for EMI were close to 10% of group revenues.
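The percentages quoted for EMI’s digital sales can be cross-checked with simple arithmetic. The sketch below uses only the figures stated in the case:

```python
# Cross-checking EMI's digital-revenue figures as quoted in the case:
# GBP 112 million at 5.4% of group revenue implies group revenue of
# roughly GBP 2.1 billion, consistent with the reported divisional sales.

digital_2004 = 15.0    # GBP millions (just under 1% of group revenue)
digital_2006 = 112.0   # GBP millions (5.4% of group revenue)
share_2006 = 0.054

implied_group_2006 = digital_2006 / share_2006
print(f"implied 2006 group revenue: GBP {implied_group_2006:,.0f}m")  # ~2,074

growth = digital_2006 / digital_2004 - 1
print(f"digital growth, 2004 to 2006: {growth:.0%}")
```

The implied group revenue of about GBP 2.1 billion lines up with the recorded-music and publishing revenues reported elsewhere in the case.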
Given the positive expectations for its 2007 fiscal year, financial analysts had
expected EMI’s recorded-music division to see positive sales growth during the year.
EMI’s surprising negative earnings guidance on January 12 quickly changed its outlook.
EMI disclosed that the music industry and EMI’s second half of the year releases had
underperformed its expectations. While the publishing division was on track to achieve
its goals, EMI’s recorded-music division revenues were now expected to decline 6% to
10% from one year ago. The market and investor community reacted swiftly to the news.
With trading volume nearly 10 times the previous day’s volume, EMI’s market capitalization ended the day down more than 7%.
EMI further shocked the investment community with another profit warning just one
month later. On February 14, the company announced that the recorded-music division’s
FY2007 revenues would actually decrease by about 15% year-over-year. EMI based its
new dismal forecast on worsening market conditions in North America, where
SoundScan had calculated that the physical music market had declined 20% in 2007.
The investment community punished EMI more severely after this second surprise profit
warning, and EMI’s stock price dropped another 12%.
Sources of data: Company annual reports and Web site.
Page 334
British newspaper The Daily Telegraph reported shareholders were increasingly disgruntled with performance
surprises. One shareholder allegedly said, “I think [Nicoli]’s a dead duck. [EMI] is now
very vulnerable to a [takeover] bid, and Nicoli is not in any position to defend anything.
I think the finance director [Martin Stewart] has also been tainted because it suggests
they did not get to the bottom of the numbers.” EMI analyst Redwan Ahmed of Oriel
Securities also decried EMI management’s recent news: “It’s disastrous . . . they give
themselves a big 6% to 10% range and a month later say it’s 15%. They have
lost all credibility. I also think the dividend is going to get slashed to about
5p.” Exhibit 27.6 contains information on EMI’s shareholder profile.
As its fiscal year came to a close, EMI’s internal reports indicated that its February
14 forecast was close to the mark. The recorded-music division’s revenue was down,
EXHIBIT 27.6 | Analysis of Ordinary Shareholdings on May 18, 2006
Substantial shareholders are defined as owning 3% or more of the ordinary shares and/or 3% or more of the voting rights of ordinary shares.
Source of data: Company annual reports.
and profits were negative. The publishing-division revenue was essentially flat, and its
division’s margin improved as a result of a smaller cost base. The company expected
underlying group earnings before interest, taxes, depreciation, and amortization
(EBITDA), before exceptional items, to be GBP 174 million, which exceeded analysts’
estimates. Digital revenue had grown by 59% and would represent 10% of revenue.
EMI management planned to make a joint announcement with Apple in the next few days
that it was going to be the first major music company to offer its digital catalog free
from digital-rights management and with improved sound quality. The new format
would sell at a 30% premium. EMI management expected this move would drive
increased digital sales.
Management was pleased with the progress of the restructuring program announced with the January profit warning. The plan was being implemented more quickly than expected and, accordingly, more cost savings would be realized in FY2008. The program was
going to cost closer to GBP 125 million, as opposed to the GBP 150 million previously
announced. Upon completion, the program was expected to remove GBP 110 million
from EMI’s annual cost base, with the majority of savings coming from the recorded-music division. The plan reduced layers in the management structure and encouraged the
recorded-music and publishing divisions to work more closely together for revenue and
cost synergies. One headline-worthy change in the reorganization was the surprise
removal of the recorded-music division head, Alain Levy, and Nicoli taking direct
responsibility for the division.
The Dividend Decision
Since the board had already declared an interim dividend of 2p per share in November
2006, the question was whether to maintain the past payout level by recommending that
an additional 6p final EMI dividend be paid. Considering EMI’s struggling financial
situation, there was good reason to question the wisdom of paying a dividend.
Exhibit 27.7 provides a forecast of the cash flow effects of maintaining the dividend,
based on market-based forecasts of performance. Omitting the dividend, however, was
likely to send a message that management had lost confidence, potentially accelerating
the ongoing stock price decline—the last thing EMI needed to do. (Exhibit 27.9
depicts trends in the EMI share price from May 2000 to May 2006.)
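The cash at stake can be sized from figures given in the case: an 8p annual dividend that totals roughly GBP 63 million implies about 787.5 million shares outstanding, so the remaining 6p final dividend alone would consume about GBP 47 million. A short sketch of that arithmetic:

```python
# Sizing the dividend decision from figures quoted in the case.
# Working in pence keeps the arithmetic exact.

annual_dividend_pence = 8               # historical 8p-per-share annual dividend
interim_paid_pence = 2                  # interim dividend declared November 2006
total_annual_dividend_gbp = 63_000_000  # combined annual payment, per the case

# Shares implied by the quoted totals (8p per share -> GBP 63m).
implied_shares = total_annual_dividend_gbp * 100 / annual_dividend_pence
print(f"implied shares outstanding: {implied_shares / 1e6:.1f}m")  # 787.5m

# Cash required for the 6p final dividend under consideration.
final_pence = annual_dividend_pence - interim_paid_pence
cash_for_final_gbp = final_pence * implied_shares / 100
print(f"cash for 6p final dividend: GBP {cash_for_final_gbp / 1e6:.2f}m")  # GBP 47.25m
```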
EXHIBIT 27.7 | EMI Projected Sources-and-Uses Statement Assuming Annual 8.0p Dividend Is Maintained (in GBP; fiscal year end March 31)
The dividend use in 2007 reflects the 8.0p dividend declared in total for the fiscal year 2006, which was actually paid in
the fiscal year 2007. The impact of the board’s decision would be in the 2008 fiscal year.
2008 and 2009 forecasts are from ABN AMRO Equity Research and case writer’s estimates. Lehman Brothers
forecasted net profit of GBP (110) million and GBP 81 million for 2008 and 2009, respectively.
Sources of data: Company annual reports and Web site; Bridie Barrett, Justin Diddams, and Paul Gooden, ABN AMRO
Bank NV, “EMI, A Special Situation,” February 16, 2007; Richard Jones and Tamsin Garrity, Lehman Brothers Equity
Research, “EMI Group,” February 15, 2007.
EXHIBIT 27.8 | Excerpt from Fischer Black’s “The Dividend Puzzle”
Fischer Black, “The Dividend Puzzle,” Journal of Portfolio Management 1 (Winter 1976).
EXHIBIT 27.9 | EMI Share Price Performance
Page 335
Many believed that music industry economics were on the verge of turning the corner. A decision to maintain the historical 8p dividend would emphasize
management’s expectation of business improvement despite the disappointing recent
financial news. Forecasts for global economic growth continued to be strong (Exhibit 27.10), and payouts to shareholders through dividends and repurchases were on the upswing among media peers (Exhibit 27.11).
Source: Company annual reports and Web site
EXHIBIT 27.10 | Global Economic Indicators and Projections
Source of data: Société Générale Economic Research, “Global Economic Outlook,” March 14, 2007.
EXHIBIT 27.11 | Comparative Global Media Data
As Stewart navigated his way home, the radio played another hit from a well-known
EMI artist. Despite the current difficulties, Stewart was convinced there was still a lot
going for EMI.
Bertelsmann is a private German company.
Viacom split into two companies, Viacom and CBS Corporation, on December 31, 2005.
Warner Music completed its initial public offering (IPO) in May 2005.
Dividends-paid and share-repurchases data is sourced from the individual company’s cash flow statement.
Average dividend yield calculated as dividends declared per share for a year divided by the average annual price of the
stock in the same year.
Payout ratio calculated as the sum of all cash dividends declared but not necessarily yet paid for a company’s fiscal
year, divided by net profit for that year.
Sources of data: Value Line Investment Survey and company Web sites.
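The yield and payout definitions in the notes above can be sketched directly; the inputs below are hypothetical and do not correspond to any company in the exhibit:

```python
# Average dividend yield and payout ratio as defined in the exhibit notes.
# All input figures are hypothetical.

def average_dividend_yield(dividends_declared_per_share, average_annual_price):
    """Dividends declared per share / average annual share price."""
    return dividends_declared_per_share / average_annual_price

def payout_ratio(total_dividends_declared, net_profit):
    """Cash dividends declared for the fiscal year / net profit for that year."""
    return total_dividends_declared / net_profit

print(average_dividend_yield(8.0, 250.0))  # 8p on a 250p average price -> 0.032
print(payout_ratio(63.0, 90.0))            # GBP 63m on GBP 90m profit -> 0.7
```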
Page 347
CASE 28 AutoZone, Inc.
On February 1, 2012, Mark Johnson, portfolio manager at Johnson & Associates, an
asset management company, was in the process of reviewing his largest holdings, which
included AutoZone, an aftermarket auto-parts retailer. AutoZone shareholders had
enjoyed strong price appreciation since 1997, with an average annual return of 11.5%
(Exhibit 28.1). The stock price stood at $348, but Johnson was concerned about the
recent news that Edward Lampert, AutoZone’s main shareholder, was rapidly
liquidating his stake in the company.
EXHIBIT 28.1 | Edward Lampert’s Position in AutoZone
Since 1998, AutoZone shareholders had received distributions of the company’s cash flows in the form of share repurchases. When a company repurchased its own shares, it enhanced earnings per share by reducing the number of shares outstanding, and it also reduced the book value of shareholders’ equity (see AutoZone financial statements in Exhibits 28.2, 28.3, 28.4, and 28.5). Johnson felt that Lampert was likely a driving force behind AutoZone’s repurchase strategy because the repurchases had started around the time Lampert acquired his stake and accelerated as he built up his position.
Data source: Bloomberg.
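The mechanics described above are simple: net income spread over fewer shares lifts EPS, while the cash paid out is charged against book equity. A sketch with hypothetical figures, not AutoZone’s actual results:

```python
# EPS effect of a share repurchase, with hypothetical figures.
# Net income is unchanged; only the share count shrinks.

net_income = 900.0       # $ millions (hypothetical)
shares_before = 45.0     # millions of shares (hypothetical)
shares_repurchased = 5.0

eps_before = net_income / shares_before
eps_after = net_income / (shares_before - shares_repurchased)
print(f"EPS before: ${eps_before:.2f}")  # $20.00
print(f"EPS after:  ${eps_after:.2f}")   # $22.50

# The repurchase cost is deducted from shareholders' equity, which is how
# sustained buybacks can drive book equity negative, as at AutoZone.
```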
Now that Lampert was reducing his stake, however, Johnson wondered if AutoZone
would continue to repurchase shares or if the company would change its strategy and
use its cash flows for initiating a cash dividend or reinvesting the cash in the company
to grow its core business. In addition, given its large debt burden (Exhibit 28.6),
AutoZone could choose to repay debt to improve its credit rating and increase its
financial flexibility.
EXHIBIT 28.2 | AutoZone Income Statement (August FY, in thousands of dollars, except ratios and
per-share data)
Data source: AutoZone annual reports.
EXHIBIT 28.3 | AutoZone Balance Sheet (August FY, in thousands of dollars)
Data source: AutoZone annual reports.
EXHIBIT 28.4 | AutoZone Statement of Cash Flows (August FY, in thousands of dollars)
Data source: AutoZone annual reports.
EXHIBIT 28.5 | AutoZone 2011 Statement of Stockholders’ Equity (dollars in thousands)
Data source: AutoZone annual reports.
EXHIBIT 28.6 | AutoZone Capital Structure and Coverage Ratio
Page 348
With AutoZone potentially changing its strategy for the use of its cash flows,
Johnson needed to assess the impact of the change on the company’s stock price and then
decide whether he should alter his position on the stock.
The Auto Parts Business
Aftermarket auto-parts sales were split into Do-It-Yourself (DIY) and Do-It-For-Me
(DIFM) segments. In the DIY segment, automobile parts were sold directly to vehicle
owners who wanted to fix or improve their vehicles on their own. In the DIFM
segment, automobile repair shops provided the parts for vehicles left in their
care for repair. DIY customers were serviced primarily through local retail storefronts
where they could speak with a knowledgeable sales associate who located the
necessary part. Because of their expertise in repairing vehicles, DIFM service
providers generally did not require storefront access or the expertise of a sales
associate. DIFM customers, however, were concerned with pricing, product availability, and efficient product delivery.
Note: Coverage ratio is defined as EBITDA divided by interest expense.
Data source: AutoZone annual reports.
Sales in both segments were strongly related to the number of miles a vehicle had
been driven. For the DIY segment, the number of older cars needing repair was also a strong predictor of auto-parts sales. As the age of a car increased, more repairs were required, and the owners of older cars were more likely to repair these vehicles themselves (Exhibit 28.7).
The number of miles a car was driven was affected by several economic
fundamentals, the most important of which was the cost of gasoline. The number of
older cars on the road increased during those times when fewer consumers bought new
cars. New car purchases were subject to the same general economic trends applicable
to most durable goods. As a result, in periods of strong economic growth and low
unemployment, new car sales increased. Conversely, when the economy struggled and
unemployment was high, fewer new cars were purchased, and older cars were kept on
the road longer, requiring more frequent repairs.
Overall, when the economy was doing well, gas prices and new car sales both
increased, decreasing the number of older cars on the road and also the amount of additional mileage accumulated. When the economy did poorly, gas prices and new car sales were more likely to be depressed, increasing the utilization of older cars and adding to their mileage. Because of these dynamics, auto-parts sales, especially in the DIY segment, were somewhat counter-cyclical.
EXHIBIT 28.7 | Miles Driven and Average Vehicle Age
Data sources: U.S. Department of Transportation (miles driven) and Polk Research (vehicle age).
Page 349
The auto-parts business consisted of a large number of small, local operations as
well as a few large, national retailers, such as AutoZone, O’Reilly Auto Parts, Advance
Auto Parts, and Pep Boys. The national chains had sophisticated supply-chain
operations to ensure that an appropriate level of inventory was maintained at each store
while managing the tradeoff between minimizing inventory stock outs and maximizing
the number of stock-keeping units (SKUs). This gave the large, national retailers an
advantage because customers were more likely to find the parts they wanted at one of
these stores. Counterbalancing the inventory advantage, however, was the expertise of
sales associates, which allowed the smaller, local stores to enhance the customer
service experience in DIY sales.
Recent Trends
In 2008, the U.S. economy had gone through the worst recession since the Great
Depression, and the recovery that followed had been unusually slow. As a result, the
auto-parts retail business enjoyed strong top-line growth. The future path of the U.S.
economy was still highly uncertain as was the potential for a disconnect between GDP
growth and gas price increases and between gas prices and miles driven. Furthermore,
as auto-parts retailers operated with high-gross margins and significant fixed costs,
profits varied widely with the level of sales, making the near-term earnings in
the auto-parts retail segment particularly difficult to predict.
The auto-parts retail business experienced more competition as national retailers
continued to expand their operations. Most of their expansion was at the expense of
local retailers, but competition between major national retailers was heating up. If the
economy strengthened and the auto-parts retail business was negatively affected by the
replacement of older cars with new ones, competition between large, national retailers
could make a bad situation worse.
Linked to high levels of industry competition and the expansion of the major
retailers was the possibility that growth would eventually hit a wall if the market
became oversaturated with auto-parts stores. Despite this concern, by 2012, AutoZone
management had stated that it was not seeing any signs of oversaturation, implying that
expansion opportunities still remained.
The industry was also seeing an increase in sales via online channels as consumers
enjoyed the flexibility of purchasing online and either picking up an order at the most
convenient location or having it delivered to their doorstep. Given the high operating
leverage provided by selling through online channels, especially given the preexisting
supply chains that already were built for storefront operations, as well as the growth in
this channel, the national retail chains continued to invest in their online solutions and
looked at that channel for future earnings growth.
Finally, another trend was the expansion of the large, U.S. auto-parts retailers into
adjacent foreign markets, such as Mexico, Canada, and Puerto Rico. Thus far, the national retail companies had been successful with this strategy, but their ability to continue to succeed and prosper in these markets, as well as in new, attractive locations such as Brazil, had yet to be proven.
AutoZone’s first store opened in 1979, under the name of Auto Shack in Forrest City,
Arkansas. In 1987, the name was changed to AutoZone, and the company implemented
the first electronic auto-parts catalog for the retail industry. Then in 1991, after four years of steady growth, AutoZone went public and was listed on the New York Stock
Exchange under the ticker symbol AZO.
By 2012, AutoZone had become the leading retailer of automotive replacement parts
and accessories in the United States, with more than 65,000 employees and 4,813 stores
located in every state in the contiguous United States, Puerto Rico, and Mexico.
AutoZone also distributed parts to commercial repair shops. In addition, a small but
growing portion of AutoZone sales came through its online channel.
From the beginning, AutoZone had invested heavily in expanding its retail footprint
via both organic and inorganic growth. It had also developed a sophisticated hub-and-feeder inventory system that kept the inventories of individual stores low as well as reduced the likelihood of stock outs. The expansion of its retail footprint had driven top-line revenue growth. AutoZone’s success in developing category-leading
distribution capabilities had resulted in both the highest operating margin for
its industry and strong customer service backed by the ability of its distribution network
to supply stores with nearly all of the AutoZone products on a same-day basis
(Exhibit 28.8).
EXHIBIT 28.8 | Merchandise Listing (as of October 17, 2011)
AutoZone’s management focused on after-tax return on invested capital (ROIC) as
the primary way to measure value creation for the company’s capital providers. As a
result, while AutoZone management invested in opportunities that led to top-line
revenue growth and increased margins, it also focused on capital stewardship. The result
was aggressively managed working capital at the store level, achieved through the
efficient use of inventory as well as attractive terms from suppliers.
Starting in 1998, AutoZone had returned capital to its equity investors through share
repurchases. Although share-repurchase programs were common among U.S.
companies, the typical result was a modest impact on shares outstanding. AutoZone’s
consistent use of share repurchases, however, had resulted in a significant reduction of
both the shares outstanding and the equity capital. In particular, shares outstanding had
dropped 39% from 2007 to 2011, and shareholders’ equity had been reduced to a
negative $1.2 billion in 2011 (data source: AutoZone annual report). The repurchases had
been funded by strong operating cash flows and by debt issuance. The net result was that
AutoZone’s invested capital had
remained fairly constant since 2007, which, combined with increased earnings, created
attractive ROIC levels (Exhibit 28.9).
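The ROIC definition in the note to Exhibit 28.9 translates into a short calculation. A minimal sketch in Python, using purely illustrative figures rather than AutoZone’s actual results:

```python
def roic(net_income, interest_exp, rent_exp, tax_rate,
         avg_debt, avg_equity, avg_cap_leases):
    """After-tax ROIC per the Exhibit 28.9 note: NOPAT divided by invested
    capital, with rent capitalized at six times rent expense."""
    nopat = net_income + (interest_exp + rent_exp) * (1 - tax_rate)
    invested_capital = avg_debt + avg_equity + 6 * rent_exp + avg_cap_leases
    return nopat / invested_capital

# Hypothetical inputs, in $ millions; avg_equity can be negative,
# as it was for AutoZone after its sustained buybacks.
r = roic(net_income=850, interest_exp=170, rent_exp=180, tax_rate=0.36,
         avg_debt=3_350, avg_equity=-1_100, avg_cap_leases=90)
```

Note that because buybacks shrink equity while earnings grow, the denominator stays roughly flat and the ratio rises, which is exactly the pattern Exhibit 28.9 displays.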
Operating Cash Flow Options
While AutoZone had historically repurchased shares with operating cash flow, Mark
Johnson felt that Edward Lampert’s reduced stake in the company could prompt
management to abandon repurchases and use the cash flows for other purposes. For
example, AutoZone could distribute cash flows through cash dividends, reinvest the
cash flows back into the core business, or use the funds to acquire stores. The company
could also invest further in its operational capabilities to stay on the leading edge of the
retail auto-parts industry. Finally, given a negative book-equity position and a
continually growing debt load, AutoZone might consider using its cash flows to pay
down debt to increase its future financial flexibility.
EXHIBIT 28.9 | Share Repurchases and ROIC 1996–2011
Note: ROIC is calculated as the sum of net income and tax-adjusted interest and rent expenses divided by the sum of
average debt, average equity, six times rent expense (to approximate capitalizing rent), and average capital lease obligations.
Data source: AutoZone annual reports.
Page 351
Dividends versus Share Repurchases
Assuming that AutoZone decided to distribute some of its operating cash flows to
shareholders, the company had the choice of distributing the cash through dividends,
share repurchases, or some combination of the two. Dividends were seen as a way to
provide cash to existing shareholders, whereas only those shareholders who happened
to be selling their shares would receive cash from a share-repurchase program. On the
other hand, dividends were taxed at the shareholder level in the year received, whereas
if a share-repurchase program succeeded in increasing the share price, the nonselling
shareholders could defer paying taxes until they sold the stock.
Dividends were also generally considered to be “sticky,” meaning that the
market expected a company to either keep its dividend steady or raise it each
year. Because of this mindset, the implementation of a dividend or an increase of the
dividend was usually interpreted by the market as a positive signal of the firm’s ability
to earn enough to continue paying the dividend far into the future. Conversely, any
decrease in the dividend was normally viewed by the market as a very negative signal.
Therefore, the stock price tended to change according to the dividend news released by
the firm, which would be favorable for AutoZone shareholders so long as management
was able to continue or increase the dividend each year.
Share repurchases were not viewed as sticky by the market because the amount of
the repurchase often varied each year. The variance in the shares purchased might be
caused by economic headwinds or tailwinds or differences in the quantity and size of
investment opportunities that management believed would create shareholder value.
Also, share repurchases were seen by some as a way to signal management’s belief that
the stock was undervalued and thus represented a good investment for the company.
Some companies chose to return shareholder capital through both dividends and
share repurchases. In most of these cases, the company provided a stable but relatively
small cash dividend and then repurchased shares at varying levels according to the
circumstances each year. This approach gave shareholders the stability of a sticky
dividend along with the price support of share repurchases.
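The per-share mechanics behind this choice can be sketched under simplified, friction-free assumptions (no taxes or signaling effects; all figures hypothetical): a dividend delivers cash to every holder but drops the share price by the dividend, while a repurchase at the market price leaves the price unchanged and shrinks the share count.

```python
# Stylized comparison of a $100M payout via dividend vs. repurchase.
equity_value = 1_000.0   # $ millions (hypothetical)
shares = 40.0            # millions of shares (hypothetical)
payout = 100.0           # $ millions to be distributed
price = equity_value / shares            # pre-payout price: 25.0

# Cash dividend: every holder gets cash; price falls by the dividend.
div_per_share = payout / shares          # 2.50
price_ex_div = price - div_per_share     # 22.50

# Repurchase at the market price: only sellers get cash; the share
# count falls and the price of the remaining shares is unchanged.
shares_bought = payout / price           # 4.0 million shares retired
price_post_buyback = (equity_value - payout) / (shares - shares_bought)  # 25.0
```

Under these idealized conditions the two routes leave total shareholder wealth identical; the differences the case discusses (tax timing, stickiness, signaling) are what break the tie in practice.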
Organic Growth
AutoZone could consider using its operating cash flow to increase the number of new
stores it opened each year. Although the retail auto-parts industry was competitive and
relatively mature, AutoZone’s CEO had recently indicated that he did not see
oversaturation of retail auto-parts stores in any of the company’s markets. Therefore,
AutoZone could seize the opportunity to expand more rapidly and perhaps preempt
competition from gaining a foothold in those markets.
Rapid expansion came with a number of risks. First, Johnson was not sure that
AutoZone had the managerial capacity to expand that swiftly. The company’s growth in
recent years had been substantial as were the returns on investment, but it was not
apparent if further growth would necessarily continue to create value. In addition,
Johnson reasoned that the best retail locations were already covered and that remaining
areas would have lower profitability. This could be exacerbated if AutoZone expanded
into areas that were less well served by its distribution network.
Johnson thought that there were some very attractive overseas investment
opportunities as evidenced by successful store openings in Mexico and Puerto Rico.
AutoZone’s 2011 annual report indicated work was underway to expand into Brazil
over the next several years. The company could increase its global presence by
aggressively opening multiple stores in Brazil and other international locations. Hasty
expansion into foreign markets, however, brought with it not only the risks of rapid store
expansion but also the difficulties inherent in transferring and translating the
domestically successful supply model.
Growth by Acquisition
Page 352
Johnson noted that in 1998 AutoZone had acquired over 800 stores from competitors
and reasoned that another way to swiftly increase revenues would be for AutoZone to
acquire other auto-parts retail stores. While this strategy would require some
postmerger integration investment, such stores would be productive much more quickly
than greenfield stores and shorten the return time on AutoZone’s investment. This was an
interesting strategy, but Johnson also knew that industry consolidation (Exhibit 28.10)
had removed most of the viable takeover targets from the market; therefore, it was
unclear whether a merger of two of the large players would be allowed by the U.S.
Department of Justice.
Debt Retirement
A final consideration was whether AutoZone might use part or all of its operating cash
flows to retire some of the debt that the company had accumulated over the years. Much
of the debt had been used to fund the share repurchases, but with a negative book-equity
position and such a large debt position, Johnson wondered whether it was prudent to
continue adding debt to the balance sheet. If AutoZone ran into trouble, it could struggle
under the strain of making the interest payments and rolling over maturing debt. At some
point, it was conceivable that AutoZone could lose its investment-grade credit rating,
which would only make future debt financing more difficult to secure and more expensive.
EXHIBIT 28.10 | Aftermarket Auto Parts Industry Structure
Note: The top 10 companies (stores) as of August 2010: AutoZone (4,728), O’Reilly Auto Parts (3,657), Advance Auto
Parts (3,627), General Parts/CARQUEST (1,500), Genuine Parts/NAPA (1,035), Pep Boys (630), Fisher Auto Parts (406),
Uni-Select (273), Replacement Parts (155), and Auto-Wares Group (128).
Data sources: AAIA Factbook and SEC filings.
The Decision
Johnson had to decide what to do with his AutoZone investment. He was impressed
with the company’s history of strong shareholder returns and its leading position in the
industry. Still, he wondered whether Lampert’s reduced influence and the potential for less
favorable economic trends for auto-parts retailers created enough uncertainty to justify
selling some or all of his position in the stock. As an analyst, Johnson’s first
consideration in valuing a company was to determine how well management
was using the operating cash flow to maximize value for shareholders. Based on the
ROIC (Exhibit 28.9), AutoZone was earning high returns on the capital invested in the
company, which was undoubtedly the primary driver of stock returns. The extent to
which share repurchases had contributed to the stock’s performance, however, was less clear.
How would the market react to the news that AutoZone was reducing or eliminating
its share repurchases after years of consistently following that strategy? Did the market
view AutoZone’s share repurchases as a cash dividend or was it indifferent about
whether cash flows were distributed by repurchasing shares or paying a cash dividend?
In any case, Johnson wondered if any move away from repurchasing shares after so
many years might cause the stock price to fall, regardless of how the cash flows were
ultimately spent. Or would AutoZone’s stock price continue to appreciate as it had in the
past so long as it continued to produce strong cash flows?
Page 363
CASE 29 An Introduction to Debt Policy and Value
Many factors determine how much debt a firm takes on. Chief among them ought to be
the effect of the debt on the value of the firm. Does borrowing create value? If so, for
whom? If not, then why do so many executives concern themselves with leverage?
If leverage affects value, then it should cause changes in either the discount rate of
the firm (that is, its weighted-average cost of capital) or the cash flows of the firm.
1. Please fill in the following:
Why does the value of assets change? Where, specifically, do those
changes occur?
Page 364
2. In finance, as in accounting, the two sides of the balance sheet must be equal. In the
previous problem, we valued the asset side of the balance sheet. To value the other
side, we must value the debt and the equity, and then add them together.
As the firm levers up, how does the increase in value get apportioned
between the creditors and the shareholders?
Page 365
3. In the preceding problem, we divided the value of all the assets between two classes
of investors: creditors and shareholders. This process tells us where the change in
value is going, but it sheds little light on where the change is coming from. Let’s
divide the free cash flows of the firm into pure business flows and cash flows
resulting from financing effects. Now, an axiom in finance is that you should discount
cash flows at a rate consistent with the risk of those cash flows. Pure business flows
should be discounted at the unlevered cost of equity (i.e., the cost of capital for the
unlevered firm). Financing flows should be discounted at the rate of return required by
the providers of debt.
Page 366
The first three problems illustrate one of the most important theories in finance.
This theory, developed by two professors, Franco Modigliani and Merton Miller,
revolutionized the way we think about capital-structure policies.
The M&M theory says:
4. What remains to be seen, however, is whether shareholders are better or worse off
with more leverage. Problem 2 does not tell us because there we computed total value
of equity, and shareholders care about value per share. Ordinarily, total value will be
a good proxy for what is happening to the price per share, but in the case of a
relevering firm, that may not be true. Implicitly, we assumed that, as our firm in
problems 1–3 levered up, it was repurchasing stock on the open market (you will note
that EBIT did not change, so management was clearly not investing the proceeds from
the loans into cash-generating assets). We held EBIT constant so that we could see
clearly the effect of financial changes without getting them mixed up in the effects of
investments. The point is that, as the firm borrows and repurchases shares, the total
value of equity may decline, but the price per share may rise.
Now, solving for the price per share may seem impossible because we are dealing
with two unknowns—share price and the change in the number of shares:
But by rewriting the equation, we can put it in a form that can be solved:
Referring to the results of problem 2, let’s assume that all the new debt is equal to
the cash paid to repurchase shares. Please complete the following table:
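The rewritten equation itself is not reproduced above, but the standard resolution of the two-unknowns problem can be sketched as follows: since the cash paid out equals the price times the shares repurchased, the post-recapitalization equity value plus the cash paid must equal the price times the original share count. The figures below are hypothetical, not the case’s problem-2 numbers:

```python
def price_per_share(equity_value_after, cash_paid, shares_before):
    """From E_after = P * (n0 - dn) and cash = P * dn, adding the two
    equations gives E_after + cash = P * n0, which has one unknown."""
    return (equity_value_after + cash_paid) / shares_before

# Hypothetical inputs: $ millions and millions of shares
p = price_per_share(equity_value_after=6_700, cash_paid=2_500,
                    shares_before=1_000)
shares_repurchased = 2_500 / p  # dn is recovered once P is known
```

With the price pinned down this way, both the new share count and the per-share value follow immediately, which is what the table in the problem asks you to complete.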
5. In this set of problems, is leverage good for shareholders? Why? Is
levering/unlevering the firm something that shareholders can do for
themselves? In what sense should shareholders pay a premium for shares of levered firms?
Page 367
6. From a macroeconomic point of view, is society better off if firms use more than zero
debt (up to some prudent limit)?
7. As a way of illustrating the usefulness of the M&M theory and consolidating your
grasp of the mechanics, consider the following case and complete the worksheet. On
March 3, 1988, Beazer PLC (a British construction company) and Shearson Lehman
Hutton, Inc. (an investment-banking firm) commenced a hostile tender offer to
purchase all the outstanding stock of Koppers Company, Inc., a producer of
construction materials, chemicals, and building products. Originally, the raiders
offered $45 a share; subsequently, the offer was raised to $56 and then finally to $61 a
share. The Koppers board asserted that the offers were inadequate and its management
was reviewing the possibility of a major recapitalization.
To test the valuation effects of the recapitalization alternative, assume that
Koppers could borrow a maximum of $1,738,095,000 at a pretax cost of debt of
10.5% and that the aggregate amount of debt will remain constant in perpetuity. Thus,
Koppers will take on additional debt of $1,565,686,000 (that is, $1,738,095,000
minus $172,409,000). Also assume that the proceeds of the loan would be paid as an
extraordinary dividend to shareholders. Exhibit 29.1 presents Koppers’ book- and
market-value balance sheets, assuming the capital structure before recapitalization.
Please complete the worksheet for the recapitalization alternative.
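Under the M&M framework with a constant, perpetual debt level, the value added by the recapitalization is the present value of the interest tax shields, which collapses to the tax rate times the debt. A sketch using the case’s debt figures and an assumed tax rate for illustration only (the actual rate comes from the case exhibit):

```python
# Perpetual-debt tax shield: the annual shield tax_rate * r_d * D is a
# perpetuity discounted at r_d, so its PV simplifies to tax_rate * D.
new_debt = 1_565_686   # $ thousands, additional debt from the case
r_d = 0.105            # pretax cost of debt from the case
tax_rate = 0.34        # ASSUMED for illustration; not given in this excerpt

annual_shield = tax_rate * r_d * new_debt   # yearly tax saving
pv_tax_shield = annual_shield / r_d         # = tax_rate * new_debt
```

The worksheet's levered value is then the unlevered value plus `pv_tax_shield`, split between creditors and shareholders as in problem 2.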
EXHIBIT 29.1 | Koppers Company, Inc. (values in thousands)

Page 369
M&M Pizza
Twenty-nine-year-old Moe Miller had recently been appointed managing director at
M&M Pizza, a premium pizza producer in the small country of Francostan. As a third-generation
director of M&M Pizza, Miller was anxious to make his mark on the
company with which he had grown up. The business was operating well, with full
penetration of the Francostani market, but Miller felt that the financial policies of the
company were overly conservative. Despite generating strong and steady profitability of
about F$125 million per year in recent years, M&M Pizza’s stock price had been
flat for years, at about F$25 per share.
His new office, Miller discovered, had an unobstructed view of the nearby marble
quarry. How wonderfully irrelevant, he thought to himself as he turned to the financial
analysis on his desk. With borrowing costs running at only 4%, he felt confident that
recapitalizing the balance sheet would create sustained value for M&M owners. His
plan called for issuing F$500 million in new company debt and using the proceeds to
repurchase F$500 million in company shares. The plan would leave assets, profits, and
operations of the business unchanged but allow M&M to borrow at the relatively low
prevailing market yields on debt and increase dividends per share. Committed to raising
the share price, Miller felt it was time to slice up the company’s capital structure a little differently.
The Mediterranean island nation of Francostan had a long tradition of political and
economic stability. The country had been under the benevolent rule of a single family for
generations.
Page 370
The national economy maintained few ties with neighboring countries, and
trade was almost nonexistent. The population was stable, with approximately 12 million
prosperous, well-educated inhabitants. The country was known for its exceptional IT
and regulation infrastructure; citizens had unrivaled access to business and economic
information. Economic policies in the country supported stability. Price
inflation for the national currency, the Franco dollar, had been near zero for
some time and was expected to remain so for the foreseeable future. Short- and long-term
interest rates for government and business debt were steady at 4%. Occasionally,
the economy experienced short periods of economic expansion and contraction.
The country’s population was known for its high ethical standards. Business
promises and financial obligations were considered fully binding. To support the
country’s practices, the government maintained no bankruptcy law, and all contractual
obligations were fully and completely enforced. To encourage economic development,
the government did not tax business income. Instead, government tax revenue was levied
through personal income taxes. There was a law under consideration to alter the tax
policy by introducing a 20% corporate income tax. To maintain business investment
incentives under the plan, interest payments would be tax deductible.
The Recapitalization Decision
Miller’s proposed recapitalization involved raising F$500 million in cash by issuing
new debt at the prevailing 4% borrowing rate and using the cash to repurchase company
shares. Miller was confident that shareholders would be better off. Not only would
they receive F$500 million in cash, but Miller expected that the share price would rise.
M&M maintained a dividend policy of returning all company profits to equity holders in
the form of dividends. Although total dividends would decline under the new plan,
Miller anticipated that the reduction in the number of shares would allow for a net
increase in the dividends paid per remaining share outstanding. With a desire to set the
tone of his leadership at M&M, Miller wanted to implement the initiative immediately.
The accounting office had provided a set of pro forma M&M financial statements for the
coming year (Exhibit 30.1).
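Miller’s per-share arithmetic can be sketched directly. The share count below is an assumption for illustration; Exhibit 30.1 supplies the actual figures:

```python
# Hypothetical share count; all other inputs come from the case text.
shares = 50.0          # millions of shares (ASSUMED for illustration)
price = 25.0           # F$ per share
profit = 125.0         # F$ millions, all paid out as dividends

dps_before = profit / shares                    # 2.50 per share

new_debt, r_d = 500.0, 0.04                     # F$500M borrowed at 4%
shares_repurchased = new_debt / price           # 20.0 million retired
profit_after = profit - r_d * new_debt          # 105.0 after interest
dps_after = profit_after / (shares - shares_repurchased)  # 3.50 per share
```

With these assumed numbers, total dividends fall (F$125M to F$105M) while dividends per remaining share rise, which is precisely the effect Miller anticipated.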
Based on a rudimentary knowledge of corporate finance, Miller estimated the
current cost of equity (and WACC) for M&M with the current no-debt policy at 8%
based on a market risk premium of 5% and a company beta of 0.8. Miller appreciated
that, because equity holders bore the business risk, they deserved to receive a higher
return. Nonetheless, from a simple comparison of the 8% cost of equity with the 4%
cost of debt, equity appeared to be an expensive source of funds. To Miller, substituting
debt for equity was a superior financial policy because it gave the company cheaper
capital. With other business inputs, the company was aggressive in sourcing quality
materials and labor at the lowest available cost. Shouldn’t M&M do the same for its capital?
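What Miller’s comparison overlooks is what levering does to the cost of equity. A sketch under the M&M no-tax assumptions (fitting for Francostan, which does not tax business income), with a hypothetical post-recapitalization mix of debt and equity:

```python
# CAPM inputs from the case: 4% rate, beta 0.8, 5% market risk premium
r_f, beta, mrp = 0.04, 0.8, 0.05
r_a = r_f + beta * mrp          # unlevered cost of equity = 8%

def wacc(d, e, r_d, r_a):
    """M&M Proposition II (no taxes): levering raises the cost of equity
    just enough to keep the overall cost of capital at r_a."""
    r_e = r_a + (r_a - r_d) * d / e   # levered cost of equity
    return (d * r_d + e * r_e) / (d + e)

# Hypothetical post-recap structure: F$500M debt, F$750M equity
w = wacc(d=500.0, e=750.0, r_d=0.04, r_a=r_a)   # remains 8%
```

Substituting 4% debt for 8% equity leaves the blended cost of capital unchanged in this frictionless setting, because the remaining equity becomes riskier and more expensive.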
EXHIBIT 30.1 | Pro Forma Financial Statement (in millions of Franco dollars, except per-share data)
Source: Created by case writer.

Page 373
Structuring Corporate Financial Policy:
Diagnosis of Problems and Evaluation of Strategies
This note outlines a diagnostic and prescriptive way of thinking about corporate
financial policy. Successful diagnosis and prescription depend heavily on thoughtful
creativity and careful judgment, so the note presents no cookie-cutter solutions. Rather,
it discusses the elements of good process and offers three basic stages in that process:
Description: The ability to describe a firm’s financial policies (which have been
chosen either explicitly or by default) is an essential foundation of diagnosis and
prescription. Part I of this note defines “financial structure” and discusses the design
elements about which a senior financial officer must make choices. This section illustrates
the complexity of a firm’s financial policies.
Diagnosis: One develops a financial policy relative to the world around the firm,
represented by three “benchmark” perspectives. You compare the financial policy for
your firm to the benchmarks and look for opportunities for improvement. Part II of this
note is an overview of three benchmarks by which you can diagnose problems and
opportunities: (1) the expectations of investors, (2) the policies and behavior of
competitors, and (3) the internal goals and motivations of corporate management itself.
Other perspectives may also exist. Parts III, IV, and V discuss in detail the estimation
and application of the three benchmarks. These sections emphasize artful homework and
economy of effort by focusing on key considerations, questions, and information. The
goal is to derive insights unique to each benchmark, rather than to churn data endlessly.
Page 374
Prescription: Action recommendations should spring from the insights gained in
description and diagnosis. Rarely, however, do unique solutions or ideas exist; rather,
the typical chief financial officer (CFO) must have a view about competing suggestions.
Part VI addresses the task of comparing competing proposals. Part VII presents the
Part I: Identifying Corporate Financial Policy: The
Elements of Its Design
You can observe a lot just by watching.
—Yogi Berra
The first task for financial advisers and decision makers is to understand the firm’s
current financial policy. Doing so is a necessary foundation for diagnosing problems
and prescribing remedies. This section presents an approach for identifying the firm’s
financial policy, based on a careful analysis of the tactics by which that policy is implemented.
The Concept of Corporate Financial Policy
The notion that firms have a distinct financial policy is startling to some analysts and
executives. Occasionally, a chief financial officer will say, “All I do is get the best deal
I can whenever we need funds.” Almost no CFO would admit otherwise. In all
probability, however, the firm has a more substantive policy than the CFO admits to.
Even a management style of myopia or opportunism is, after all, a policy.
Some executives will argue that calling financing a “policy” is too fancy. They say
that financing is reactive: it happens after all investment and operational decisions have
been made. How can reaction be a policy? At other times, one hears an executive say,
“Our financial policy is simple.” Attempts to characterize a financial structure as
reactive or simplistic overlook the considerable richness of choice that confronts the
financial manager.
Finally, some analysts make the mistake of “one-size-fits-all” thinking; that is, they
assume that financial policy is mainly driven by the economics of a certain industry and
they overlook the firm-specific nature of financial policy. Firms in the same, well-defined
industry can have very different financial policies. The reason is that financial
policy is a matter of managerial choice.
Page 375
“Corporate financial policy” is a set of broad guidelines or a preferred style to
guide the raising of capital and the distribution of value. Policies should be set to
support the mission and strategy of the firm. As the environment changes, policies
should adapt.
The analyst of financial policy must come to terms with its ambiguity. Policies are
guidelines; they are imprecise. Policies are products of managerial choice rather than
the dictates of an economic model. Policies change over time. Nevertheless, the
framework in this note can help the analyst define a firm’s corporate financial policy
with enough focus to identify potential problems, prescribe remedies, and make recommendations.
The Elements of Financial Policy
Every financial structure reveals underlying financial policies through the following
seven elements of financial-structure design:
1. Mix of classes of capital (such as debt versus equity, or common stock versus
retained earnings): How heavily does the firm rely on different classes of
capital? Is the reliance on debt reasonable in light of the risks the firm faces and
the nature of its industry and technology? Mix may be analyzed through
capitalization ratios, debt-service coverage ratios, and the firm’s sources-and-uses-of-funds
statement (where the analyst should look for the origins of the new additions to
capital in the recent past). Many firms exhibit a pecking order of financing: they seek
to fulfill their funding needs through the retention of profits, then through debt, and,
finally, through the issuance of new shares. Does the firm observe a particular
pecking order in its acquisition of new capital?
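The mix measures named above are simple ratios. A minimal sketch with illustrative figures:

```python
def capitalization_ratio(debt, equity):
    """Debt as a share of total capital: one standard measure of mix."""
    return debt / (debt + equity)

def interest_coverage(ebit, interest_exp):
    """Times-interest-earned: EBIT over interest expense, a basic
    debt-service coverage measure."""
    return ebit / interest_exp

# Illustrative (hypothetical) figures, $ millions
cap = capitalization_ratio(debt=400, equity=600)     # 0.40, or 40% debt
tie = interest_coverage(ebit=180, interest_exp=30)   # 6.0x coverage
```

Comparing these ratios over time and against peers is how the analyst judges whether the firm’s reliance on debt is reasonable for its risks.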
2. Maturity structure of the firm’s capital: To describe the choices made about the
maturity of outstanding securities is to be able to infer the judgments the firm made
about its priorities—for example, future financing requirements and opportunities or
relative preference for refinancing risk versus reinvestment risk. A risk-neutral
position with respect to maturity would be where the life of the firm’s assets equals
the life of the firm’s liabilities. Most firms accept an inequality in one direction or the
other. This might be due to ignorance or to sophistication: managers might have a
strong internal “view” about their ability to reinvest or refinance. Ultimately, we want
managers to maximize value, not minimize risk. The absence of a perfect maturity
hedge might reflect managers’ better-informed bets about the future of the firm and
markets. Measuring the maturity structure of the firm’s capital can yield insights into
the bets that the firm’s managers are apparently making. The standard measures of
maturity are term to maturity, average life, and duration. Are the lives of the firm’s
assets and liabilities roughly matched? If not, what gamble is the firm taking (i.e.,
is it showing an appetite for refunding risk or interest-rate risk)?
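Of the three maturity measures named above, duration is the least obvious to compute. A sketch of Macaulay duration for an annual-pay instrument (the example bond is illustrative):

```python
def macaulay_duration(cash_flows, r):
    """Macaulay duration of a cash-flow stream received at t = 1, 2, ...:
    the PV-weighted average time to receipt, one standard measure of the
    effective maturity of an asset or liability."""
    pv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, 1))
    weighted = sum(t * cf / (1 + r) ** t
                   for t, cf in enumerate(cash_flows, 1))
    return weighted / pv

# Illustrative: a 3-year, 6% annual-coupon bond priced at a 6% yield
d = macaulay_duration([6, 6, 106], 0.06)   # a bit under 3 years
```

Computing durations on both sides of the balance sheet shows whether asset and liability lives are roughly matched, and thus which refunding or reinvestment bet the firm is taking.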
Page 376
3. Basis of the firm’s coupon and dividend payments: In simplest terms, basis addresses
the firm’s preference for fixed or floating rates of payment and is a useful tool in
fathoming management’s judgment regarding the future course of interest rates.
Interest-rate derivatives provide the financial officer with choices conditioned by
caps, floors, and other structured options. Understanding management’s basis choices
can reveal some of the fundamental bets management is placing, even when it has
decided to “do nothing.” What is the firm’s relative preference for fixed or floating
interest rates? Are the firm’s operating returns fixed or floating?
4. Currency addresses the global aspect of a firm’s financial opportunities: These
opportunities are expressed in two ways: (a) management of the firm’s exposure to
foreign exchange-rate fluctuations, and (b) the exploitation of unusual financing
possibilities in global capital markets. Exchange-rate exposure arises when a firm
earns income (or pays expenses) in a variety of currencies. Whether and how a firm
hedges this exposure can reveal the “bets” that management is making regarding the
future movement of exchange rates and the future currency mix of the firm’s cash
flows. The financial-policy analyst should look for foreign-denominated securities in
the firm’s capital and for swap, option, futures, and forward contracts—all of which
can be used to manage the firm’s foreign-exchange exposure. The other way that
currency matters to the financial-policy analyst is as an indication of the management’s
willingness to source its capital “offshore.” This is an indication of sophistication and
of having a view about the parity of exchange rates with security returns around the
world. In a perfectly integrated global capital market, the theory of interest rate parity
would posit the futility of finding bargain financing offshore. But global capital
markets are not perfectly integrated, and interest rate parity rarely holds true
everywhere. Experience suggests that financing bargains may exist temporarily.
Offshore financing may suggest an interest in finding and exploiting such bargains. Is
the currency denomination of the firm’s capital consistent with the currency
denomination of the firm’s operating cash flows? Do the balance sheet footnotes
show evidence of foreign-exchange hedging? Also, is the company, in effect,
sourcing capital on a global basis or is it focusing narrowly on the domestic capital market?
5. Exotica: Every firm faces a spectrum of financing alternatives, ranging from plain-vanilla
bonds and stocks to hybrids and one-of-a-kind, highly tailored securities. This
element considers management’s relative preference for financial innovation. Where a
firm positions itself on this spectrum can shed light on management’s openness to new
ideas, intellectual originality and, possibly, opportunistic tendencies. As a general
matter, option-linked securities often appear in corporate finance where there is some
disagreement between issuers and investors about a firm’s prospects.
Page 377
For instance,
managers of high-growth firms will foresee rapid expansion and vaulting stock prices.
Bond investors, not having the benefit of inside information, might see only high risk
—issuing a convertible bond might be a way to allow the bond investors to capitalize
the risk and to enjoy the creation of value through growth in return for accepting a
lower current yield. Also, the circumstances under which exotic securities were
issued are often fascinating episodes in a company’s history. Based on past
financings, what is the firm’s appetite for issuing exotic securities? Why have the
firm’s exotic securities been tailored as they are?
6. External control: Any management team probably prefers little outside control. One
must recognize that, in any financial structure, management has made choices about
subtle control trade-offs, including who might exercise control (for example,
creditors, existing shareholders, new shareholders, or a raider) and the control trigger
(for example, default on a loan covenant, passing a preferred stock dividend, or a
shareholder vote). How management structures control triggers (for example, the
tightness of loan covenants) or forestalls discipline (perhaps through the adoption of
poison pills and other takeover defenses) can reveal insights into management’s fears
and expectations. Clues about external control choices may be found in credit
covenants, collateral pledges, the terms of preferred shares, the profile of the firm’s
equity holders, the voting rights of common stock, corporate bylaws, and antitakeover
defenses. In what ways has management defended against or yielded to external control?
7. Distribution: This element seeks to determine any patterns in (a) the way the firm markets its
securities (i.e., acquires capital), and (b) the way the firm delivers value to its
investors (i.e., returns capital). Regarding marketing, insights emerge from knowing
where a firm’s securities are listed for trading, how often the shares are sold, and who
advises the sale of securities (the adviser that a firm attracts is one indication of its
sophistication). Regarding the delivery of value, the two generic strategies involve
dividends or capital gains. Some companies will pay low or no dividends and force
their shareholders to take returns in the form of capital gains. Other companies will
pay material dividends, even borrowing to do so. Still others will repurchase shares,
split shares, and declare extraordinary dividends. Managers’ choices about delivering
value yield clues about management’s beliefs regarding investors and the company’s
ability to satisfy investors’ needs. How have managers chosen to deliver value to
shareholders, and with whose assistance have they issued securities?
A Comparative Illustration
The value of looking at a firm’s financial structure through these seven design elements
is that the insights they provide can become a basis for developing a broad, detailed
picture of the firm’s financial policies. Also, the seven elements become an
organizational framework for the wealth of financial information on publicly owned
companies.
Consider the examples of FedEx Corporation (FedEx) and United Parcel Service,
Inc. (UPS), both leading firms in the express-delivery industry. Sources such as Factset,
Yahoo! Finance, and the Value Line Investment Survey distill information from annual
reports and regulatory filings and permit the analyst to draw conclusions about the seven
elements of each firm’s financial policy. Drawing on the latest financial results as of
2016, analysts could glean the insights about the policies of FedEx and UPS from
Table 31.1.
TABLE 31.1 | Financial Policies for FedEx Corporation and United Parcel Service, Inc.
Source: Created by author.
Page 378
As Table 31.1 shows, standard information available on public companies yields important contrasts in their financial policies. Note that the insights are informed guesses: neither of those firms explicitly describes its financial policies. Nonetheless, with practice and good information, the validity of the guesses can be high.
FedEx and UPS present different policy profiles. FedEx relies somewhat more on debt financing, with a longer maturity, greater commitment to operating leases, and a more aggressive program of returning cash to shareholders through dividends and share repurchases. UPS is somewhat more conservative (as reflected in
its higher debt rating): a higher times-interest-earned ratio, a more balanced maturity
structure, more reliance on capital leases and less on operating leases, larger return to
shareholders through dividend payments, and a distinctive classified common equity
structure that gives strong control rights to the holders of the “A” shares. The
UPS “A” shares are held “primarily by UPS employees and retirees, as well as
trusts and descendants of the Company’s founders.”6
Part II: General Framework for Diagnosing Financial-Policy Opportunities and Problems
Having parsed the choices embedded in the firm’s financial structure, one must ask,
“Were these the right choices?” What is “right” is a matter of the context and the
clientele to which management must respond. A firm has many potential claimants. The
discussion that follows will focus on the perspectives of competitors, investors, and
senior corporate managers.
1. Does the financial policy create value?
From the standpoint of investors, the best financial structure will (a) maximize
shareholder wealth, (b) maximize the value of the entire firm (i.e., the market value of
assets), and (c) minimize the firm’s weighted-average cost of capital (WACC). When
those conditions occur, the firm makes the best trade-offs among the choices on each
of the seven dimensions of financial policy. This analysis is all within the context of
the market conditions.
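As a purely arithmetic illustration of condition (c), the WACC comparison can be sketched as follows. All rates, weights, and the tax rate below are invented for illustration; in practice, the cost of equity itself rises with leverage, so the inputs would come from the valuation analysis described later in this note.

```python
# Illustrative only: hypothetical inputs, not drawn from any case in this book.
def wacc(debt_ratio: float, cost_debt: float, cost_equity: float, tax_rate: float) -> float:
    """Weighted-average cost of capital, with a tax shield on the debt component."""
    equity_ratio = 1.0 - debt_ratio
    return debt_ratio * cost_debt * (1.0 - tax_rate) + equity_ratio * cost_equity

# Compare two candidate structures (market-value weights assumed).
low_leverage = wacc(debt_ratio=0.20, cost_debt=0.05, cost_equity=0.10, tax_rate=0.35)
high_leverage = wacc(debt_ratio=0.60, cost_debt=0.07, cost_equity=0.14, tax_rate=0.35)

print(f"WACC at 20% debt: {low_leverage:.4f}")   # 0.20*0.05*0.65 + 0.80*0.10 = 0.0865
print(f"WACC at 60% debt: {high_leverage:.4f}")  # 0.60*0.07*0.65 + 0.40*0.14 = 0.0833
```

Under these invented inputs the more levered structure has the lower WACC, but the comparison flips easily as the assumed costs of debt and equity rise with leverage.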
2. Does the financial policy create a competitive advantage?
Competitors should matter in the design of corporate financial policy. Financial
structure can enhance or constrain competitive advantage mainly by opening or
foreclosing avenues of competitive response over time. Thus, a manager should
critically assess the strategic options created or destroyed by a particular financial
structure. Also, assuming that they are reasonably well managed, competitors’
financial structures are probably an indicator of good financial policy in a particular
industry. Thus a manager should want to know how his or her firm’s financial structure
compares with the peer group. In short, this line of thinking seeks to evaluate the relative position of the firm in its competitive environment on the basis of financial structure.
3. Does the financial policy sustain senior management’s vision?
The internal perspective tests the appropriateness of a capital structure from the standpoint of the expectations and capacities of the corporate organization itself. The analyst begins with an assessment of corporate strategy and the resulting stream of cash requirements and resources anticipated in the future. The realism of the plan should be tested against expected macroeconomic variations, as well as against possible but unexpected financial strains. A good financial structure meets the classic maxim of corporate finance, “Don’t run out of cash”: in other words, the ideal financial structure adequately funds the growth goals and dividend payouts of the firm without severely diluting the firm’s current equity owners. The concept of self-sustainable growth provides a straightforward test of this ideal.
Page 380
The next three sections will discuss these perspectives in more detail. All three perspectives are unlikely to offer a completely congruent assessment of financial structure. The investor’s view looks at the economic consequences of a financial structure; the competitor’s view considers strategic consequences; the internal view addresses the firm’s survival and ambitions. The three views ask entirely different questions. An analyst should not be surprised when the answers diverge.
Rather like estimating the height of a distant mountain through the haze, the analyst develops a concept of the best financial structure by a process of triangulation. Triangulation involves weighing the importance of each of the perspectives as each one complements the other rather than as it substitutes for the other, identifying points of consistency, and making artful judgments where the perspectives diverge.
The goal of this analysis should be to articulate concretely the design of the firm’s financial structure, preferably in terms of the seven elements discussed in Part I. This exercise entails developing notes, comments, and calculations for every one of the cells of this analytical grid:
Page 381
No chart can completely anticipate the difficulties, quirks, and exceptions that the analyst will undoubtedly encounter. What matters most, however, is the way of thinking about the financial-structure design problem that encourages both critical thinking and organized, efficient digestion of information.
Figure 31.1 summarizes the approach presented in this section. Good financial-structure analysis develops three complementary perspectives on financial structure, and then blends those perspectives into a prescription.
FIGURE 31.1 | Overview of Financial-Structure Analysis.
Source: Created by author.
Page 382
Part III: Analyzing Financial Policy from the Investors’ Perspective
In finance theory, the investors’ expectations should influence all managerial decisions.
This theory follows the legal doctrine that firms should be managed in the interests of
their owners. It also recognizes the economic idea that if investors’ needs are satisfied
after all other claims on the firm are settled, then the firm must be healthy. The investors’
view also confronts the reality of capital market discipline. The best defense against a
hostile takeover (or another type of intrusion) is a high stock price. In recent years, the
threat of capital market discipline has done more than any academic theory to rivet management’s attention to value creation.
Academic theory, however, is extremely useful in identifying value-creating
strategies. Economic value is held to be the present value of expected future cash flows
discounted at a rate consistent with the risk of those cash flows. Considerable care must
be given to the estimation of cash flows and discount rates (a review of discounted cash
flow [DCF] valuation is beyond the scope of this note). Theory suggests that leverage
can create value through the benefits of debt tax shields and can destroy value through
the costs of financial distress. The balance of those costs and benefits depends upon
specific capital market conditions, which are conveyed by the debt and equity costs that
capital providers impose on the firm. Academic theory’s bottom line is as follows:
An efficient (i.e., value-optimizing) financial structure is one that simultaneously minimizes the weighted-average cost of capital and maximizes the share price and value of the enterprise.
The investors’ perspective is a rigorous approach to evaluating financial
structures: valuation analysis of the firm and its common stock under existing and alternative financial structures. The best structure will be one that creates the most value.
The phrase alternative financial structures is necessarily ambiguous, but should be
interpreted to include a wide range of alternatives, including leveraged buyouts,
leveraged recapitalizations, spin-offs, carve-outs, and even liquidations. However
radical the latter alternatives may seem, the analyst must understand that investment
bankers and corporate raiders routinely consider those alternatives. To anticipate the
thinking of those agents of change, the analyst must replicate their homework.
Careful analysis does not rest with a final number, but rather considers a range of estimates and market indicators:
Cost of debt: The analysis focuses on yields to maturity and the spreads of those yields over the Treasury yield curve. Floating rates are always effective rates of interest.
Cost of equity: The assessment uses as many approaches as possible, including the
capital asset pricing model, the dividend discount model, the financial leverage
equation, the earnings/price model, and any other avenues that seem appropriate.
Although it is fallible, the capital asset pricing model has the most rigor.
Debt/equity mix: The relative proportions of types of capital in the capital structure
are important factors in computing the weighted-average cost of capital. All capital
should be estimated on a market value basis.
Price/earnings ratio, market/book ratio, earnings before interest and taxes (EBIT)
multiple: Comparing those values to the average levels of the entire capital market or
to an industry group can provide an alternative check on the valuation of the firm.
Bond rating: The creditors’ view of the firm is important. S&P and Moody’s publish
average financial ratios for bond-rating groups. Even for a firm with no publicly rated
debt outstanding, a simple ratio analysis can reveal a firm’s likely rating category and its current cost of debt.
Page 383
Ownership: The relative mix of individual and institutional owners and the presence
of block holders with potentially hostile intentions can help shed light on the current
pricing of a firm’s securities.
Short position: A large, short-sale position on the firm’s stock can indicate that some
traders believe a decline in share price is imminent.
To conclude, the first rule of financial-policy analysis is: Think like an investor.
The investors’ view assesses the value of a firm’s shares under alternative financial
structures and the existence of any strongly positive or negative perceptions in the
capital markets about the firm’s securities.
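Two of the cost-of-equity approaches named above can be sketched numerically. Every input below (risk-free rate, beta, market premium, dividend, price, growth rate) is hypothetical; the point is only how the two estimates triangulate.

```python
# Hypothetical inputs: each approach gives one estimate of the cost of equity.
def capm_cost_of_equity(risk_free: float, beta: float, market_premium: float) -> float:
    """Capital asset pricing model: Ke = Rf + beta * (Rm - Rf)."""
    return risk_free + beta * market_premium

def ddm_cost_of_equity(next_dividend: float, price: float, growth: float) -> float:
    """Constant-growth dividend discount model: Ke = D1/P0 + g."""
    return next_dividend / price + growth

capm = capm_cost_of_equity(risk_free=0.04, beta=1.2, market_premium=0.06)  # 0.04 + 1.2*0.06
ddm = ddm_cost_of_equity(next_dividend=2.00, price=40.0, growth=0.05)      # 2/40 + 0.05

print(f"CAPM estimate: {capm:.3f}, DDM estimate: {ddm:.3f}")
```

Divergent estimates are normal; as the note suggests, the analyst uses as many approaches as possible and weighs them, giving the CAPM the most rigor.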
Part IV: Analyzing Financial Policy from a Competitive Perspective
The competitive perspective matters to senior executives for two important reasons.
First, it gives an indication about (1) standard practice in the industry, and (2) the
strategic position of the firm relative to the competition. Second, it implies rightly that
finance can be a strategic competitive instrument.
The competitive perspective may be the hardest of the three benchmarks to assess.
There are few clear signposts in industry dynamics, and, as most industries become
increasingly global, the comparisons become even more difficult to make. Despite the
difficulty of this analysis, however, senior executives typically give an inordinate
amount of attention to it. The well-versed analyst must be able to assess the ability of
the current policy (and its alternatives) to maintain or improve its competitive position.
This analysis does not proceed scientifically, but rather evolves iteratively toward
an accurate assessment of the situation. The steps might be defined as follows:
1. Define the universe of competitors.
2. Spread the data and financial ratios on the firm and its competitors in comparative form.
3. Identify similarities and, more importantly, differences. Probe into anomalies. Question the data and the peer sample.
4. Add needed information, such as a foreign competitor, another ratio, historical normalization, etc.
5. Discuss or clarify the information with the CFO or an industry expert.
As the information grows, the questions will become more probing. What is the historical growth pattern? Why did the XYZ company suddenly increase its leverage or keep a large cash balance? Did the acquisition of a new line actually provide access to new markets? Are the changes in debt mix and maturity or in the dividend policy related to the new products and markets?
Page 384
Economy of effort demands that the analyst begin with a few ratios and data that can
be easily obtained (from annual reports and other sources). If a company is in several
industries and does not have pure competitors, choose group-divisional competitors
and, to the extent possible, use segment information to devise ratios that will be valid (which is to say, operating income to sales rather than an after-tax equivalent). Do not
forget information that may be outside the financial statements and may be critical to
competitive survival, such as geographic diversification, research and development
expenditures, and union activity. For some industries, other key ratios are
available through trade groups, such as same-store sales and capacity analyses.
Whatever the inadequacy of the data, the comparisons will provide direction for
subsequent analysis.
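The “spreading” step can be sketched as a small script that lays a few ratios side by side for the peer group. The firm names and figures below are invented for illustration.

```python
# Hypothetical peer-group financials (in millions), spread into comparative ratios.
peers = {
    "YourCo": {"debt": 400.0, "equity": 600.0, "ebit": 150.0, "interest": 28.0},
    "PeerA":  {"debt": 200.0, "equity": 800.0, "ebit": 180.0, "interest": 14.0},
    "PeerB":  {"debt": 650.0, "equity": 350.0, "ebit": 120.0, "interest": 45.0},
}

def spread(financials):
    """Compute a comparative table of debt-to-capital and times-interest-earned."""
    rows = {}
    for name, f in financials.items():
        rows[name] = {
            "debt_to_capital": f["debt"] / (f["debt"] + f["equity"]),
            "times_interest_earned": f["ebit"] / f["interest"],
        }
    return rows

for name, ratios in spread(peers).items():
    print(f"{name:8s} D/C={ratios['debt_to_capital']:.2f} "
          f"TIE={ratios['times_interest_earned']:.1f}")
```

Anomalies in the resulting table (a peer with far higher leverage, say) become the probing questions of steps 3 through 5.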
The ratios and data to be used will depend on the course of analysis. An analyst
could start with the following general types of measures with which to compare a
competitor group:
1. Size: sales, market value, number of employees or countries, market share
2. Asset productivity: return on assets (ROA), return on invested capital, market to book
3. Shareholder wealth: price/earnings (P/E), return on market value
4. Predictability: beta, historical trends
5. Growth: 1- to 10-year compound growth of sales, profits, assets, and market value of equity
6. Financial flexibility: debt-to-capital, debt ratings, cash flow coverage, estimates of the cost of capital
7. Other significant industry issues: unfunded pension liabilities, postretirement medical benefit obligations, environmental liabilities, capacity, research and development expense to sales, percentage of insider control, etc.
One of the key issues to resolve in analyzing the comparative data is whether all the peer-group members display the same results and trends. Inevitably, they will not, which raises the question, why not? Trends in asset productivity and globalization have affected the competitors differently and elicited an assortment of strategic responses. These phenomena should stimulate further research.
The analyst should augment personal research efforts with the work of industry analysts. Securities analysts, consultants, academicians, and journalists—both through their written work and via telephone conversations—can provide valuable insights based on their extensive, personal contacts in the industry.
Analyzing competitors develops insights into the range of financial structures in the industry and the appropriateness of your firm’s structure in comparison. Developing those insights is more a matter of qualitative judgment than of letting the numbers speak for themselves. For instance:
1. Suppose your firm is a highly leveraged computer manufacturer with an uneven record
of financial performance. Should it unlever? You discover that the peer group of
computer manufacturers is substantially equity financed, owing largely to the rapid
rate of technological innovation and the predation of a few large players in the
industry. The strategic rationale for low leverage is to survive the business and short
product lifecycles. Yes, it might be good to unlever.
Page 385
2. Suppose your firm is an airline that finances its equipment purchases with flotations of
commercial paper. The average life of the firm’s liabilities is 4 years, while the
average life of the firm’s assets is 15 years. Should the airline refinance its
debt using securities with longer maturity? You discover that the peer group of airlines
finances its assets with leases, equipment-trust certificates, and project-finance deals
that almost exactly match the economic lives of assets and liabilities. The strategic
rationale for lengthening the maturity structure of liabilities is to hedge against yield-curve changes that might adversely affect your firm’s ability to refinance, yet still leave its peer competitors relatively unaffected.
3. Here is a trickier example. Your firm is the last nationwide supermarket chain that is
publicly held. All other major supermarket chains have gone private in leveraged
buyouts. Should your firm lever up through a leveraged share repurchase? Competitor
analysis reveals that other firms are struggling to meet debt service payments on
already thin margins and that a major shift in customer patronage may be under way.
You conclude that price competition in selected markets would trigger realignment in
market shares in your firm’s favor, because the competitors have little pricing
flexibility. In that case, adjusting to the industry-average leverage would not be appropriate.
Part V: Diagnosing Financial Policy from an Internal Perspective
Internal analysis is the third major screen of a firm’s financial structure. It accounts for
the expected cash requirements and resources of a firm, and tests the consistency of a
firm’s financial structure with the profitability, growth, and dividend goals of the firm.
The classic tools of internal analysis are the forecast cash flow, financial statements,
and sources-and-uses of funds statements. The standard banker’s credit analysis is
consistent with this approach.
The essence of this approach is a concern for (1) the preservation of the firm’s
financial flexibility, (2) the sustainability of the firm’s financial policies, and (3) the
feasibility of the firm’s strategic goals. For example, the firm’s long-term goals may call
for a doubling of sales in five years. The business plan for achieving that goal may call
for the construction of a greenfield plant in year one, and then regional distribution
systems in years two and three. Substantial working capital investments will be
necessary in years two through five. How this growth is to be financed has huge
implications for your firm’s financial structure today. Typically, an analyst addresses
this problem by forecasting the financial performance of the firm, experimenting with
different financing sequences and choosing the best one, then determining the structure
that makes the best foundation for that financing sequence. This analysis implies the
need to maintain future financial flexibility.
Financial Flexibility
Financial flexibility is easily measured as the excess cash and unused debt capacity on
which the firm might call. In addition, there may be other reserves, such as unused land
or excess stocks of raw materials, that could be liquidated. All reserves that could be
mobilized should be reflected in an analysis of financial flexibility. Illustrating with the narrower definition (cash and unused debt capacity), one can measure financial flexibility as follows:
1. Select a target minimum debt rating that is acceptable to the firm. Many CFOs will have a target minimum in mind, such as the BBB/Baa rating.
2. Determine the book value debt/equity mix consistent with the minimum rating. Standard & Poor’s, for instance, publishes average financial ratios, including debt/equity, that are associated with each debt-rating category.
3. Determine the book value of debt consistent with the debt/equity ratio from step 2. This gives the amount of debt that would be outstanding if the firm moved to the minimum acceptable bond rating.
4. Estimate financial flexibility using the following formula:
Financial flexibility = Excess cash + (debt at minimum rating − current debt outstanding)
Page 386
The amount estimated by this formula indicates the financial reserves on which the firm can call to exploit unusual or surprising opportunities (for example, the chance to acquire a competitor) or to defend against unusual threats (for example, a price war, sudden product obsolescence, or a labor strike).
Page 387
Self-Sustainable Growth
A shorthand test for sustainability and internal consistency is the self-sustainable growth model. This model is based on one key assumption: over the forecast period, the firm sells no new shares of stock (this assumption is entirely consistent with the actual behavior of firms over the long run). As long as the firm does not change its mix of debt and equity, the self-sustainable model implies that assets can grow only as fast as equity grows. Thus, the issue of sustainability is significantly determined by the firm’s return on equity (ROE) and dividend payout ratio (DPO):
Self-sustainable growth rate of assets = ROE × (1 − DPO)
The test of feasibility of any long-term plan involves comparing the growth rate
implied by this formula and the targeted growth rate dictated by management’s plan. If
the targeted growth rate equals the implied rate, then the firm’s financial policies are in
balance. If the implied rate exceeds the targeted rate, the firm will gradually become
more liquid, creating an asset deployment opportunity. If the targeted rate exceeds the
implied rate, the firm must raise more capital by selling stock, levering up, or
reducing the dividend payout.
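The two internal tests described in this part can be sketched in a few lines. The ROE, payout, target growth rate, and dollar figures below are hypothetical.

```python
# Hypothetical inputs for the two internal-analysis tests.
def self_sustainable_growth(roe: float, payout: float) -> float:
    """Self-sustainable growth rate of assets = ROE x (1 - DPO)."""
    return roe * (1.0 - payout)

def financial_flexibility(excess_cash: float, debt_at_min_rating: float,
                          current_debt: float) -> float:
    """Excess cash plus unused debt capacity down to the minimum acceptable rating."""
    return excess_cash + (debt_at_min_rating - current_debt)

g_implied = self_sustainable_growth(roe=0.15, payout=0.40)  # 0.15 * 0.60 = 0.09
g_target = 0.12  # management's planned asset growth rate

if g_target > g_implied:
    print("Target exceeds implied rate: sell stock, lever up, or cut the payout.")
elif g_target < g_implied:
    print("Implied rate exceeds target: the firm will gradually become more liquid.")

print(f"Flexibility: {financial_flexibility(50.0, 400.0, 250.0):.0f}")  # 50 + 150 = 200
```

Here a 12% growth target against a 9% implied rate flags the funding gap the text describes.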
Management policies can be modeled finely by recognizing that ROE can be decomposed into various factors using two classic formulas:
DuPont system of ratios: ROE = P/S × S/A × A/E
P/S = profit divided by sales, or net margin; a measure of profitability
S/A = sales divided by assets; a measure of asset productivity
A/E = assets divided by equity; a measure of financial leverage
Financial-leverage equation: ROE = ROTC + [(ROTC − Kd) × (D/E)]
ROTC = return on total capital
Kd = cost of debt
D/E = debt divided by equity; a measure of leverage
Inserting either of those formulas into the equation for the self-sustainable growth rate gives a richer model of the drivers of self-sustainability. One sees, in particular, the importance of internal operations. The self-sustainable growth model can be expanded to reflect explicitly measures of a firm’s operating and financial policies.
The self-sustainable growth model tests the internal consistency of a firm’s operating and financial policies. This model, however, provides no guarantee that a strategy will maximize value. Value creation does not begin with growth targets; growth
per se does not necessarily lead to value creation, as the growth-by-acquisition
strategies of the 1960s and ’70s abundantly illustrated. Also, the adoption of growth
targets may foreclose other, more profitable strategies. Those targets may invite
managers to undertake investments yielding less than the cost of capital. Meeting sales
or asset growth targets can destroy value. Thus, any sustainable growth analysis must be
augmented by questions about the value-creation potential of a given set of corporate
policies. These questions include: (1) What are the magnitude and duration of
investment returns as compared with the firm’s cost of capital? and (2) With what
alternative set of policies is the firm’s share price maximized? With questions such as
those, the investor orientation discussed in Part III is turned inward to double-check the
appropriateness of any inferences drawn from financial forecasts of the sources-and-uses of funds statements and from the analysis of the self-sustainable growth model.
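The two ROE decompositions given earlier can be checked numerically. The figures below are invented, and ROTC is taken here as an after-tax return on total capital (an assumption about the definition, made so the identity closes exactly).

```python
# Hypothetical financials: verify that both decompositions reproduce ROE.
profit, sales, assets, equity = 60.0, 1000.0, 800.0, 400.0
debt = assets - equity
kd = 0.05  # assumed after-tax cost of debt

roe_direct = profit / equity                                     # 60/400 = 0.15

# DuPont system: ROE = P/S x S/A x A/E
roe_dupont = (profit / sales) * (sales / assets) * (assets / equity)

# Financial-leverage equation: ROE = ROTC + (ROTC - Kd) x (D/E),
# with ROTC = (profit + after-tax interest) / total capital.
rotc = (profit + kd * debt) / (debt + equity)
roe_leverage = rotc + (rotc - kd) * (debt / equity)

print(roe_direct, roe_dupont, roe_leverage)  # all three equal 0.15
```

The leverage form makes visible the spread (ROTC − Kd) that leverage amplifies: when ROTC exceeds the cost of debt, more D/E raises ROE, and vice versa.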
Page 388
Part VI: What Is Best?
Any financial structure evaluated against the perspectives of investors, competitors, and
internal goals will probably show opportunities for improvement. Most often, CFOs
choose to make changes at the margin rather than tinkering radically with a financial
structure. For changes large and small, however, the analyst must develop a
framework for judgment and prescription.
The following framework is a way of identifying the trade-offs among “good” and
“bad,” rather than finding the right answer. Having identified the trade-offs implicit in
any alternative structure, it remains for the CFO and the adviser to choose the structure
with the most attractive trade-offs.
The key elements of evaluation are as follows:
Flexibility: the ability to meet unforeseen financing requirements as they arise—those requirements may be favorable (for example, a sudden acquisition opportunity) or unfavorable (such as Source Perrier and the benzene scare). Flexibility may involve liquidating assets or tapping the capital markets in adverse market environments or both. Flexibility can be measured by bond ratings, coverage ratios, capitalization ratios, liquidity ratios, and the identification of salable assets.
Risk: the predictable variability in the firm’s business. Such variability may be due to both macroeconomic factors (such as consumer demand) and industry- or firm-specific factors (such as product lifecycles, or strikes before wage negotiations). To some extent, past experience may indicate the future range of variability in EBIT and cash flow. High leverage tends to amplify those predictable business swings. The risk associated with any given financial structure can be assessed by EBIT–EPS (earnings per share) analysis, break-even analysis, the standard deviation of EBIT, and beta. In theory, beta should vary directly with leverage.14
Income: this compares financial structures on the basis of value creation. Measures such as DCF value, projected ROE, EPS, and the cost of capital indicate the comparative value effects of alternative financial structures.
Control: alternative financial structures may imply changes in control or different control constraints on the firm as indicated by the percentage distribution of share ownership and by the structure of debt covenants.
Timing: asks whether the current capital-market environment is the right moment to implement any alternative financial structure, and what the implications for future financing will be if the proposed structure is adopted. The current market environment can be assessed by examining the Treasury yield curve, the trend in the movement of interest rates, the existence of any windows in the market for new issues of securities, P/E multiple trends, etc. Sequencing considerations are implicitly captured in the assumptions underlying the alternative DCF value estimates, and can be explicitly examined by looking at annual EPS and ROE streams under alternative financing sequences.
Page 389
This framework of flexibility, risk, income, control, and timing (FRICT) can be used to assess the relative strengths and weaknesses of alternative financing plans. To use a simple example, suppose that your firm is considering two financial structures: (1) 60% debt and 40% equity (i.e., debt will be issued), and (2) 40% debt and 60% equity (i.e., equity will be issued). Also, suppose that your analysis of the two structures under the investor, competitor, and internal-analysis screens leads you to make this basic assessment:
The 60% debt structure is favored on the grounds of income, control, and today’s
market conditions. The 40% debt structure is favored on the grounds of flexibility, risk,
and the long-term financial sequencing. This example boils down to a decision between
“eating well” and “sleeping well.” It remains up to senior management to make the
difficult choice between the two alternatives, while giving careful attention to the views
of the investors, competitors, and managers.
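The eat-well/sleep-well trade-off can be illustrated with a simple EBIT–EPS sketch of the 60/40 versus 40/60 example. The firm data (capital base, interest rate, tax rate, and the assumption of a $10 book value per share) are hypothetical.

```python
# EBIT-EPS analysis for two hypothetical structures of a $1,000 capital base.
def eps(ebit: float, debt: float, rate: float, shares: float, tax: float) -> float:
    """Earnings per share after interest and taxes."""
    return (ebit - rate * debt) * (1.0 - tax) / shares

tax, rate = 0.35, 0.07
# Assume $10 book value per share, so equity dollars map directly to share counts.
plans = {"60% debt": (600.0, 40.0), "40% debt": (400.0, 60.0)}  # (debt, shares)

for ebit in (60.0, 100.0, 140.0):  # downside, base case, upside
    line = ", ".join(f"{name}: {eps(ebit, d, rate, s, tax):.2f}"
                     for name, (d, s) in plans.items())
    print(f"EBIT {ebit:5.1f} -> EPS {line}")
```

At high EBIT the 60% debt plan delivers the higher EPS (eating well); at low EBIT the 40% debt plan does (sleeping well), which is exactly the leverage amplification the risk element describes.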
Page 390
Part VII: Conclusion
Description, diagnosis, and prescription in financial structuring form an iterative
process. It is quite likely that the CFO in the eat-well/sleep-well example would send
the analyst back for more research and testing of alternative structures. Figure 31.2
presents an expanded view of the basic cycle of analysis and suggests more about the
complexity of the financial-structuring problem. With time and experience, the analyst
develops an intuition for efficient information sources and modes of analysis. In the long
run, this intuition makes the cycle of analysis manageable.
FIGURE 31.2 | An Expanded Illustration of the Process of Developing a Financial Policy.
Source: Created by author.

Page 391
California Pizza Kitchen
Everyone knows that 95% of restaurants fail in the first two years, and a lot of people think it’s “location,
location, location.” It could be, but my experience is you have to have the financial staying power. You could
have the greatest idea, but many restaurants do not start out making money—they build over time. So it’s really
about having the capital and the staying power.
—Rick Rosenfield, Co-CEO, California Pizza Kitchen
In early July 2007, the financial team at California Pizza Kitchen (CPK), led by Chief
Financial Officer Susan Collyns, was compiling the preliminary results for the second
quarter of 2007. Despite industry challenges of rising commodity, labor, and energy
costs, CPK was about to announce near-record quarterly profits of over $6 million.
CPK’s profit expansion was explained by strong revenue growth with comparable
restaurant sales up over 5%. The announced numbers were fully in line with the
company’s forecasted guidance to investors.
The company’s results were particularly impressive when contrasted with many
other casual dining firms, which had experienced sharp declines in customer traffic.
Despite the strong performance, industry difficulties were such that CPK’s share price
had declined 10% during the month of June to a current value of $22.10. Given the price
drop, the management team had discussed repurchasing company shares. With little excess cash on hand, however, a large share repurchase program would require debt
financing. Since going public in 2000, CPK’s management had avoided putting any debt
on the balance sheet. Financial policy was conservative to preserve what co-CEO Rick Rosenfield referred to as staying power. The view was that a strong balance sheet would
maintain the borrowing ability needed to support CPK’s expected growth trajectory. Yet
with interest rates on the rise from historical lows, Collyns was aware of the benefits of
moderately levering up CPK’s equity.
Page 392
California Pizza Kitchen
Inspired by the gourmet pizza offerings at Wolfgang Puck’s celebrity-filled restaurant,
Spago, and eager to flee their careers as white-collar criminal defense attorneys, Larry
Flax and Rick Rosenfield created the first California Pizza Kitchen in 1985 in Beverly
Hills, California. Known for its hearth-baked barbecue-chicken pizza, the “designer
pizza at off-the-rack prices” concept flourished. Expansion across the state, country, and
globe followed in the subsequent two decades. At the end of the second quarter of 2007,
the company had 213 locations in 28 states and 6 foreign countries. While still very
California-centric (approximately 41% of the U.S. stores were in California), the casual
dining model had done well throughout all U.S. regions with its family-friendly
surroundings, excellent ingredients, and inventive offerings.
California Pizza Kitchen derived its revenues from three sources: sales at company-owned restaurants, royalties from franchised restaurants, and royalties from a
partnership with Kraft Foods to sell CPK-branded frozen pizzas in grocery stores.
While the company had expanded beyond its original concept with two other restaurant
brands, its main focus remained on operating company-owned full-service CPK
restaurants, of which there were 170 units.
Analysts conservatively estimated the potential for full-service company-owned
CPK units at 500. Both the investment community and management were less certain
about the potential for the company’s chief attempt at brand extension, its ASAP
restaurant concept. In 1996, the company first developed the ASAP concept in a
franchise agreement with HMSHost. The franchised ASAPs were located in airports
and featured a limited selection of pizzas and “grab-n-go” salads and sandwiches.
While not a huge revenue source, management was pleased with the success of the
airport ASAP locations, which currently numbered 16. In early 2007, HMSHost and
CPK agreed to extend their partnership through 2012. But sentiment was more mixed
regarding the company-owned ASAP locations. First opened in 2000 to capitalize on the
growth of fast casual dining, the company-owned ASAP units offered CPK’s most-popular
pizzas, salads, soups, and sandwiches with in-restaurant seating. Sales and
operations at the company-owned ASAP units never met management’s expectations.
Even after retooling the concept and restaurant prototype in 2003, management decided
to halt indefinitely all ASAP development in 2007 and planned to record roughly
$770,000 in expenses in the second quarter to terminate the planned opening of one
ASAP location.
Although doubts remained about the company-owned ASAP restaurants, both the
company and the investment community were upbeat about CPK’s success and
prospects with franchising full-service restaurants internationally. At the beginning of
July 2007, the company had 15 franchised international locations, with more openings
planned for the second half of 2007. Management sought out knowledgeable franchise
partners who would protect the company’s brand and were capable of growing the
number of international units. Franchising agreements typically gave CPK an initial
payment of $50,000 to $65,000 for each location opened and then an estimated 5% of
gross sales. With locations already in China (including Hong Kong), Indonesia, Japan,
Malaysia, the Philippines, and Singapore, the company planned to expand its global
reach to Mexico and South Korea in the second half of 2007.
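The case’s franchise economics translate directly into a simple model: a one-time initial payment per opening plus roughly 5% of each unit’s gross sales. The sketch below uses the midpoint of the stated $50,000–$65,000 fee range; the unit counts and per-unit sales are hypothetical, not from the case:

```python
def annual_franchise_revenue(new_units, existing_units_sales, initial_fee=60_000,
                             royalty_rate=0.05):
    """CPK's franchise economics as described in the case: a one-time
    initial fee per opening plus ~5% of each unit's gross sales.
    The $60,000 fee is a midpoint of the case's $50,000-$65,000 range."""
    opening_fees = new_units * initial_fee
    royalties = royalty_rate * sum(existing_units_sales)
    return opening_fees + royalties

# Hypothetical: 3 new international openings in a year, plus 15 existing
# units averaging $2.5 million in gross sales each (illustrative figures).
revenue = annual_franchise_revenue(3, [2_500_000] * 15)
print(f"${revenue:,.0f}")  # $2,055,000
```

Note how the royalty stream scales with gross sales rather than profits, which is why franchising buffers the franchisor against rising food and labor costs.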
Management saw its Kraft partnership as another initiative in its pursuit of
building a global brand. In 1997, the company entered into a licensing
agreement with Kraft Foods to distribute CPK-branded frozen pizzas. Although
representing less than 1% of current revenues, the Kraft royalties had a 95% pretax
margin, one equity analyst estimated. In addition to the high-margin impact on the
company’s bottom line, management also highlighted the marketing requirement in its
Kraft partnership. Kraft was obligated to spend 5% of gross sales on marketing the CPK
frozen pizza brand, more than the company often spent on its own marketing.
Management believed its success in growing both domestically and internationally,
and through ventures like the Kraft partnership, was due in large part to its “dedication
to guest satisfaction and menu innovation and sustainable culture of service.” A
creative menu with high-quality ingredients was a top priority at CPK, with the two cofounders
still heading the menu-development team. Exhibit 32.1 contains a selection of
CPK menu offerings. “Its menu items offer customers distinctive, compelling flavors to
commonly recognized foods,” a Morgan Keegan analyst wrote. While the company had
a narrower, more-focused menu than some of its peers, the chain prided itself on
creating craved items, such as Singapore Shrimp Rolls, that distinguished its menu and
could not be found at its casual dining peers. This strategy was successful: internal
research indicated that a specific menu craving that could not be satisfied elsewhere
prompted many patron visits. To maintain the menu’s originality, management reviewed
detailed sales reports twice a year and replaced slow-selling offerings with new items.
Some of the company’s most recent menu additions in 2007 had been developed and
tested at the company’s newest restaurant concept, the LA Food Show. Created by Flax
and Rosenfield in 2003, the LA Food Show offered a more upscale experience and
expansive menu than CPK. CPK increased its minority interest to full ownership of the
LA Food Show in 2005 and planned to open a second location in early 2008.
EXHIBIT 32.1 | Selected Menu Offerings
Avocado Club Egg Rolls: A fusion of East and West with fresh avocado, chicken, tomato,
Monterey Jack cheese, and applewood smoked bacon, wrapped in a crispy wonton roll. Served with ranchito
sauce and herb ranch dressing.
Singapore Shrimp Rolls: Shrimp, baby broccoli, soy-glazed shiitake mushrooms, romaine,
carrots, noodles, bean sprouts, green onion, and cilantro wrapped in rice paper. Served chilled with a sesame
ginger dipping sauce and Szechuan slaw.
The Original BBQ Chicken: CPK’s most-popular pizza, introduced in their first restaurant in Beverly Hills
in 1985. Barbecue sauce, smoked gouda and mozzarella cheeses, BBQ chicken, sliced red onions, and
Carne Asada: Grilled steak, fire-roasted mild chilies, onions, cilantro pesto, Monterey Jack, and mozzarella
cheeses. Topped with fresh tomato salsa and cilantro. Served with a side of tomatillo salsa.
Thai Chicken: This is the original! Pieces of chicken breast marinated in a spicy peanut ginger and sesame
sauce, mozzarella cheese, green onions, bean sprouts, julienne carrots, cilantro, and roasted peanuts.
Milan: A combination of grilled spicy Italian sausage and sweet Italian sausage with sautéed wild
mushrooms, caramelized onions, fontina, mozzarella, and parmesan cheeses. Topped with fresh herbs.
Shanghai Garlic Noodles: Chinese noodles wok-stirred in a garlic ginger sauce with snow peas, shiitake
mushrooms, mild onions, red and yellow peppers, baby broccoli, and green onions. Also available with
chicken and/or shrimp.
Chicken Tequila Fettuccine: The original! Spinach fettuccine with chicken, red, green, and yellow peppers,
red onions, and fresh cilantro in a tequila, lime, and jalapeño cream sauce.
Source: California Pizza Kitchen Web site (accessed on August 12, 2008).
In addition to crediting its inventive menu, analysts also pointed out that CPK’s average
check of $13.30 was below that of many of its upscale casual dining peers, such as P.F.
Chang’s and the Cheesecake Factory. Analysts from RBC Capital Markets labeled the
chain a “Price–Value–Experience” leader in its sector.
CPK spent 1% of its sales on advertising, far less than the 3% to 4% of sales that
casual dining competitors, such as Chili’s, Red Lobster, Olive Garden, and Outback
Steakhouse, spent annually. Management felt careful execution of its company model
resulted in devoted patrons who created free, but far more valuable, word-of-mouth
marketing for the company. Of the actual dollars spent on marketing, roughly 50% was
spent on menu-development costs, with the other half consumed by more typical
marketing strategies, such as public relations efforts, direct mail offerings,
outdoor media, and on-line marketing.
CPK’s clientele was attractive not only for its endorsements of the chain, but also
for its demographics. Management frequently highlighted that its core customer
had an average household income of more than $75,000, according to a 2005 guest
satisfaction survey. CPK contended that its customer base’s relative affluence sheltered
the company from macroeconomic pressures, such as high gas prices, that might lower
sales at competitors with fewer well-off patrons.
Restaurant Industry
The restaurant industry could be divided into two main sectors: full service and limited
service. Some of the most popular subsectors within full service included casual dining
and fine dining, with fast casual and fast food being the two prevalent limited-service
subsectors. Restaurant consulting firm Technomic Information Services projected the
limited-service restaurant segment to maintain a five-year compound annual growth rate
(CAGR) of 5.5%, compared with 5.1% for the full-service restaurant segment. The
five-year CAGR for CPK’s subsector of the full-service segment was projected to be
even higher, at 6.5%. In recent years, a number of forces had challenged restaurant
industry executives, including:
Increasing commodity prices;
Higher labor costs;
Softening demand due to high gas prices;
Deteriorating housing wealth; and
Intense interest in the industry by activist shareholders.
High gas prices not only affected demand for dining out, but also indirectly pushed a
dramatic rise in food commodity prices. Moreover, a national call for the creation of
more biofuels, primarily corn-produced ethanol, played an additional role in driving up
food costs for the restaurant industry. Restaurant companies responded by raising menu
prices in varying degrees. The restaurants believed that the price increases would have
little impact on restaurant traffic given that consumers experienced higher price
increases in their main alternative to dining out—purchasing food at grocery stores to
consume at home.
Restaurants not only had to deal with rising commodity costs, but also rising labor
costs. In May 2007, President Bush signed legislation increasing the U.S. minimum
wage rate over a three-year period beginning in July 2007 from $5.15 to $7.25 an hour.
While restaurant management teams had time to prepare for the ramifications of this
gradual increase, they were ill-equipped to deal with the nearly 20 states that, in late
2006, had passed anticipatory wage increases at rates higher than those proposed by
the federal legislation.
In addition to contending with the rising cost of goods sold (COGS),
restaurants faced gross margins that were under pressure from the softening
demand for dining out. A recent AAA Mid-Atlantic survey asked travelers how they
might reduce spending to make up for the elevated gas prices, and 52% answered that
food expenses would be the first area to be cut. Despite that news, a Deutsche Bank
analyst remarked, “Two important indicators of consumer health—disposable income
and employment—are both holding up well. As long as people have jobs and incomes
are rising, they are likely to continue to eat out.”
The current environment of elevated food and labor costs and consumer concerns
highlighted the differences between the limited-service and full-service segments of the
restaurant industry. Franchising was more popular in the limited-service segment and
provided some buffer against rising food and labor costs because franchisors received a
percentage of gross sales. Royalties on gross sales also benefited from any pricing
increases that were made to address higher costs. Restaurant companies with large
franchising operations also did not have the huge amount of capital invested in locations
or potentially heavy lease obligations associated with company-owned units. Some
analysts included operating lease requirements when considering a restaurant
company’s leverage. Analysts also believed limited-service restaurants would benefit
from any consumers trading down from the casual dining sub-sector of the full-service
sector. The growth of the fast-casual subsector and the food-quality improvements in
fast food made trading down an increasing likelihood in an economic slowdown.
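The lease adjustment some analysts applied can be sketched with the common heuristic of capitalizing operating leases at a multiple of annual rent, often 8×. The multiple and every input below are illustrative assumptions, not figures from the case:

```python
def lease_adjusted_leverage(debt, annual_rent, ebitda, rent_multiple=8):
    """A common analyst heuristic: capitalize operating leases at a
    multiple (often 8x) of annual rent, add them to debt, and compare
    against EBITDAR (EBITDA with rent added back). All inputs below
    are hypothetical."""
    capitalized_leases = annual_rent * rent_multiple
    adjusted_debt = debt + capitalized_leases
    adjusted_ebitda = ebitda + annual_rent  # EBITDAR
    return adjusted_debt / adjusted_ebitda

# A nominally debt-free chain can still look levered once leases count:
ratio = lease_adjusted_leverage(debt=0, annual_rent=30e6, ebitda=60e6)
print(f"Lease-adjusted debt/EBITDAR: {ratio:.1f}x")  # 2.7x
```

This is why a company-owned restaurant model can carry substantial effective leverage even with no balance-sheet debt.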
The longer-term outlook for overall restaurant demand looked much stronger. A
study by the National Restaurant Association projected that consumers would increase
the percentage of their food dollars spent on dining out from the 45% in recent years to
53% by 2010. That long-term positive trend may have helped explain the extensive
interest in the restaurant industry by activist shareholders, often the executives of private
equity firms and hedge funds. Activist investor William Ackman with Pershing Square
Capital Management initiated the current round of activist investors forcing change at
major restaurant chains. Roughly one week after Ackman vociferously criticized the
McDonald’s corporate organization at a New York investment conference in late 2005,
the company declared it would divest 1,500 restaurants, repurchase $1 billion of its
stock, and disclose more restaurant-level performance details. Ackman advocated all
those changes and was able to leverage the power of his 4.5% stake in McDonald’s by
using the media. His success did not go unnoticed, and other vocal minority investors
aggressively pressed for changes at numerous chains including Applebee’s, Wendy’s,
and Friendly’s. These changes included the outright sale of the company, sales
of noncore divisions, and closure of poor-performing locations.
In response, other chains embarked on shareholder-friendly plans including
initiating share repurchase programs; increasing dividends; decreasing corporate
expenditures; and divesting secondary assets. Doug Brooks, chief executive of Brinker
International Inc., which owned Chili’s, noted at a recent conference:
There is no shortage of interest in our industry these days, and much of the recent
news has centered on the participation of activist shareholders . . . but it is my job
as CEO to act as our internal activist.
In April 2007, Brinker announced it had secured a new $400 million unsecured,
committed credit facility to fund an accelerated share repurchase transaction in which
approximately $300 million of its common stock would be repurchased. That followed
a tender offer recapitalization in 2006 in which the company repurchased $50 million
worth of common shares.
Recent Developments
CPK’s positive second-quarter results would affirm many analysts’ conclusions that the
company was a safe haven in the casual dining sector. Exhibits 32.2 and 32.3 contain
CPK’s financial statements through July 1, 2007. Exhibit 32.4 presents comparable
store sales trends for CPK and peers. Exhibit 32.5 contains selected analysts’ forecasts
for CPK, all of which anticipated revenue and earnings growth. A Morgan Keegan
analyst commented in May:
Despite increased market pressures on consumer spending, California Pizza
Kitchen’s concept continues to post impressive customer traffic gains.
Traditionally appealing to a more discriminating, higher-income clientele, CPK’s
creative fare, low check average, and high service standards have uniquely
positioned the concept for success in a tough consumer macroeconomic
environment.
EXHIBIT 32.2 | Consolidated Balance Sheets (in thousands of dollars)
Sources of data: Company annual and quarterly reports.
EXHIBIT 32.3 | Consolidated Income Statements (in thousands of dollars, except per-share data)
For the years ended December 31, 2006, January 1, 2006, January 2, 2005, and December 28, 2003.
Severance charges represent payments to former president/CEO and former senior vice president/senior
development officer under the terms of their separation agreements.
Data for company-owned restaurants.
Sources of data: Company annual and quarterly reports and quarterly company earnings conference calls.
EXHIBIT 32.4 | Selected Historical Comparable Store Sales (calendarized)
While other restaurant companies experienced weakening sales and earnings growth,
CPK’s revenues increased more than 16% to $159 million for the second quarter of
2007. Notably, royalties from the Kraft partnership and international franchises were up
37% and 21%, respectively, for the second quarter. Development plans for opening a
total of 16 to 18 new locations remained on schedule for 2007. Funding CPK’s 2007
growth plan was anticipated to require $85 million in capital expenditures.
Brinker’s comparable store sales were a blended rate for its various brands.
Source of data: KeyBanc Capital Markets equity research.
EXHIBIT 32.5 | Selected Forecasts for California Pizza Kitchen
Source of data: Selected firms’ equity research.
The company was successfully managing its two largest expense items in an
environment of rising labor and food costs. Labor costs had actually declined from
36.6% to 36.3% of total revenues from the second quarter of 2006 to the second quarter
of 2007. Food, beverage, and paper-supply costs remained constant at roughly 24.5% of
total revenue in both the second quarter of 2006 and 2007. The company was
implementing a number of taskforce initiatives to deal with the commodity
price pressures, especially as cheese prices increased from $1.37 per pound in April to
almost $2.00 a pound by the first week of July. Management felt that much of the cost
improvements had been achieved through enhancements in restaurant operations.
Capital Structure Decision
CPK’s book equity was expected to be around $226 million at the end of the second
quarter. With a share price in the low 20s, CPK’s market capitalization stood at
$644 million. The company had recently issued a 50% stock dividend, which had
effectively split CPK shares on a 3-for-2 basis. CPK investors received one
additional share for every two shares of common stock held. Adjusted for the stock
dividend, Exhibit 32.6 shows the performance of CPK stock relative to that of industry
peers.
EXHIBIT 32.6 | Stock Price Comparison
Note: Adjusted for the June 2007 50% stock dividend. With such a dividend, an owner of two shares of CPK stock was
given an additional share. The effect was to increase CPK shares outstanding by one-half, yet maintain the overall
capitalization of the equity.
Sources of data: Yahoo! Finance and Datastream.
Despite the challenges of growing the number of restaurants by 38% over the last
five years, CPK consistently generated strong operating returns. CPK’s return on equity
(ROE), which was 10.1% for 2006, did not benefit from financial leverage. Financial
policy varied across the industry, with some firms remaining all-equity capitalized and
others levering up to half debt financing. Exhibit 32.7 depicts selected financial data for
peer firms. Because CPK used the proceeds from its 2000 initial public offering (IPO)
to pay off its outstanding debt, the company had since avoided debt financing entirely. CPK
maintained borrowing capacity available under an existing $75 million line of credit.
Interest on the line of credit was calculated at LIBOR plus 0.80%. With LIBOR
currently at 5.36%, the line of credit’s interest rate was 6.16% (see Exhibit 32.8).
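The arithmetic behind the split adjustment, the market capitalization, and the credit line’s pricing can be verified with a short sketch; every figure comes from the case, and the variable names are illustrative:

```python
# 50% stock dividend: one new share per two held (a 3-for-2 split).
shares_after_m = 29.13                  # shares outstanding (millions), post-dividend
shares_before_m = shares_after_m / 1.5  # implied pre-dividend share count

price = 22.10                           # post-dividend share price ($)
market_cap_m = shares_after_m * price   # should reproduce the case's figure

# Line-of-credit pricing: LIBOR + 0.80%
libor = 0.0536
credit_spread = 0.0080
line_rate = libor + credit_spread

print(f"Market cap: ${market_cap_m:,.0f} million")  # ~= $644 million, as stated
print(f"Line of credit rate: {line_rate:.2%}")      # 6.16%
```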
EXHIBIT 32.7 | Comparative Restaurant Financial Data, 2006 Fiscal Year (in millions of dollars,
except per-share data)
For the years ended December 31, 2006, January 1, 2006, January 2, 2005, and December 28, 2003.
Severance charges represent payments to former president/CEO and former senior vice president/senior
development officer under the terms of their separation agreements.
Data for company-owned restaurants.
Sources of data: Company annual and quarterly reports and conference calls.
EXHIBIT 32.8 | Interest Rates and Yields
The recent 10% share price decline seemed to raise the question of whether this
was an ideal time to repurchase shares and potentially leverage the company’s balance
sheet with ample borrowings available on its existing line of credit. One gain from the
leverage would be to reduce the corporate income-tax liability, which had been almost
$10 million in 2006. Exhibit 32.9 provides pro forma financial summaries of CPK’s tax
shield under alternative capital structures. Still, CPK needed to preserve its ability to
fund the strong expansion outlined for the company. Any use of financing to return
capital to shareholders needed to be balanced with management’s goal of growing the
company.
Sources of data: Economic Report of the President and Fannie Mae Web site.
EXHIBIT 32.9 | Pro Forma Tax Shield Effect of Recapitalization Scenarios (dollars in thousands,
except share data; figures based on end of June 2007)
Interest rate of CPK’s credit facility with Bank of America: LIBOR + 0.80%. Earnings before interest and taxes (EBIT)
include interest income.
Market values of debt equal book values.
Actual market value of equity equals the share price ($22.10) multiplied by the current number of shares outstanding
(29.13 million).
Source: Case writer analysis based on CPK financial data.
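The tax shield in Exhibit 32.9 rests on the standard calculation below. This is a minimal sketch, not the exhibit’s scenarios: the debt level simply borrows the size of the $75 million credit line at the case’s 6.16% rate, and the 32.5% tax rate is a hypothetical stand-in:

```python
def annual_interest_tax_shield(debt, interest_rate, tax_rate):
    """Taxes avoided each year because interest expense is deductible."""
    interest_expense = debt * interest_rate
    return interest_expense * tax_rate

# Illustrative inputs: draw the full $75 million line at the case's 6.16%
# rate, with an assumed (hypothetical) 32.5% corporate tax rate.
debt = 75_000_000
rate = 0.0616
tax_rate = 0.325

shield = annual_interest_tax_shield(debt, rate, tax_rate)
print(f"Annual interest tax shield: ${shield:,.0f}")  # ~= $1.5 million per year
```

Against the nearly $10 million CPK paid in 2006 taxes, even a modest recapitalization would trim the tax bill meaningfully, which is the trade-off Collyns was weighing against the need to preserve borrowing capacity.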

Page 409
Dominion Resources: Cove Point
Scott Hetzer quickly shoved his notes and laptop into his briefcase and slipped on his
suit jacket. Hetzer, the treasurer for Dominion Resources Inc. (Dominion), a major U.S.
diversified producer and distributor of energy, was heading to meet with the company’s
investment bankers to discuss the impact of a large project upon Dominion’s financing
strategy for the next five years. The Cove Point liquefied natural gas (LNG) project
would require $3.6 billion to build and represented one of the largest single capital
investments in the company’s 100-plus year history.
It was February 15, 2013, and the recent boom in hydraulic fracturing (fracking) had
turned the U.S. natural gas market on its head, creating the opportunity to transform
Cove Point from an importer of natural gas into primarily an exporter. In the company’s
2012 annual report, CEO Thomas F. Farrell II highlighted the project by stating, “We
firmly believe that incorporating liquefaction and export capability into our Cove Point
LNG import terminal located on the Chesapeake Bay in Lusby, Maryland, can be
beneficial both to you, our shareholders, and to gas producers operating in the eastern
half of the U.S.” The surging demand for LNG had allowed Dominion to sign long-term
contracts for 100% of Cove Point’s projected capacity, mitigating the project’s financial
risk. By mid-2014, Dominion expected to receive the final required permits from the
regulatory authorities, such that the only remaining task was to revise the company’s
financing strategy from 2013 to 2017 to include the $3.6 billion investment.
After settling into the meeting room in Dominion’s Richmond, Virginia, headquarters
across from the team of bankers, Hetzer began, “Cove Point represents a terrific
opportunity for all Dominion’s stakeholders, but financing this sizable project clearly
presents multiple financial challenges.” Hetzer knew that determining the optimal mix of
debt and equity would require weighing the impact on Dominion’s credit ratings as
well as on regulators and on Dominion’s existing stock and debt holders.
The Utilities Industry
While utilities had historically been considered a safe investment, regulatory
changes, new competition, demand fluctuations, and commodity price
volatility over the last few decades had altered the landscape. No longer did big
regional monopolies manage the entire industry from power generation through
transmission to retail supply. The industry was now characterized by four segments:
power generation, energy network operators, energy traders, and energy service
providers.
The electric-utility industry provided indispensable energy to factories, commercial
establishments, homes, and even most recreational facilities. According to the U.S.
Department of Energy, 2012 U.S. electricity was generated by coal (37%), natural gas
(30%), nuclear (19%), hydropower (7%), and other renewable energy (7%). Lack of
electricity caused not only inconvenience to the end users, but also economic loss for
companies that suffered production reductions. Because of their importance to the
economy, utilities were closely regulated by local and national authorities such as the
State Corporation Commission in Virginia and the Federal Energy Regulatory
Commission.
The Electricity Market
The demand for electricity had consistently grown along with the growth of the
population and the economy. The U.S. Energy Information Administration (EIA)
projected that 355 gigawatts of new electric generating capacity—more than 40% more
than the industry currently supplied—would be needed by 2020. Energy demand was
dependent on numerous variables—particularly the weather, such as an unusually cold
winter or hot summer—that made short-term demand difficult to forecast, even though
long-term consumption growth seemed assured. Demand for electricity also changed
from day to day and season to season, often adding pressure to the capacity-generating
side or squeezing the revenue side of the industry.
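The EIA projection also implies an estimate of then-current generating capacity; the figure below is derived from the case’s numbers, not stated in it:

```python
# EIA: 355 GW of new capacity needed by 2020, "more than 40% more"
# than current capacity. Solving for the implied current base:
new_capacity_gw = 355
growth_fraction = 0.40
implied_current_gw = new_capacity_gw / growth_fraction
print(f"Implied current capacity: ~{implied_current_gw:.0f} GW")  # ~888 GW
```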
The risk of large price swings dated to 1996, when wholesale electricity prices
were deregulated and allowed to fluctuate with supply and demand. For
example, although $10 to $20 per megawatt hour was average, demand spikes had led to
prices as high as $5,000 or $10,000 per megawatt hour. Utility managers and other
energy buyers managed this risk by using forwards, futures, and options to hedge against
unexpected price swings. Although the U.S. wholesale energy market had been
deregulated, state regulatory authorities, such as a public service commission, still
determined the prices a utility could charge its retail customers. The regulatory process
was complex and prone to regulatory missteps and disputes between consumers, special
interest groups, and political actors.
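A forward contract hedges this exposure by fixing the price on part of the load, so spot spikes hit only the unhedged remainder. A stylized illustration; the prices and volumes below are hypothetical, though the spot range mirrors the case’s figures:

```python
def net_cost_with_forward(spot_price, forward_price, hedged_mwh, total_mwh):
    """Buyer's net cost for one period: the hedged volume pays the locked-in
    forward price, the remainder pays the prevailing spot price."""
    unhedged_mwh = total_mwh - hedged_mwh
    return hedged_mwh * forward_price + unhedged_mwh * spot_price

# Hypothetical: a buyer needs 1,000 MWh and hedged 80% at $15/MWh.
# A demand spike pushes spot to $500/MWh.
cost = net_cost_with_forward(spot_price=500, forward_price=15,
                             hedged_mwh=800, total_mwh=1_000)
print(f"Net cost: ${cost:,.0f}")  # $112,000 versus $500,000 fully unhedged
```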
In addition to price regulatory authorities, environmental regulation
affected power generation. The lead governmental agency, the Environmental
Protection Agency (EPA), helped create and enforce compliance of the Clean Air Act
among utilities. In an effort to limit air pollution and climate change, and to move
toward a clean-energy economy, approvals for new power plants faced intense scrutiny.
Even with new technologies replacing old energy, such as clean coal or the cleaner
alternative of natural gas or nuclear power, renewable energy source standards were
often favored (i.e., biomass, biofuels, hydro, solar, and wind).
The Natural Gas Market
Natural gas was a fossil fuel found in deep underground rock formations. Industrially
extracted to supply energy since 1825, natural gas by 2012 fueled 30% of U.S.
electricity generation, up from 16% in 2000. For roughly 30 years, growth in
demand for natural gas had outpaced that of all other fossil fuels and was expected to
continue growing.
Much of the gas supply was being produced by fracking, which was an extraction
method that combined horizontal drilling with hydraulic fracturing to remove natural gas
from shale formations. Regulators predicted that, by 2040, 50% of total natural gas
production in the United States would be shale gas. As U.S. supply exceeded demand,
the European market price grew to double the U.S. price, and the
Asian market price to triple it (Exhibit 33.1). And as technology allowed for more
productive and less expensive shale gas extraction, U.S. natural gas prices were
expected to remain lower than in other regions of the world.
EXHIBIT 33.1 | Global Natural Gas Prices 1996–2013 ($ per million Btu)
Data source: “BP Statistical Review of World Energy June 2015,”
To transport natural gas to the higher-priced markets required that the gas be
converted to a liquid by cooling it to approximately −162°C (−260°F), which
condensed it to only 1/600th of its gaseous volume. The LNG was
created at the exporting terminal and then regasified at the importing terminal. Most
Asian economies were LNG dependent. Because of the global pricing disparity, the
United States was predicted to become a net exporter of natural gas by 2016, shipping
roughly 3 Tcf (trillion cubic feet) of natural gas per year by 2030. Within 10 years,
LNG exports were expected to increase to more than 6 Tcf per year.
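The temperature conversion and compression ratio quoted above are easy to check; this sketch simply restates the case’s figures:

```python
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

lng_temp_f = celsius_to_fahrenheit(-162)  # matches the case's -260 F
gas_volume = 600.0                        # arbitrary units of natural gas
lng_volume = gas_volume / 600             # liquefaction: 1/600th the volume
print(f"{lng_temp_f:.0f} F; 600 units of gas -> {lng_volume:.0f} unit of LNG")
```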
Dominion Resources
With corporate roots dating back to the Colonial era, Dominion had become one of the
largest producers and transporters of energy in the United States, providing electricity,
natural gas, and related services in the eastern region of the United States
(Exhibit 33.2). Dominion’s portfolio of assets included 23,600 megawatts of
generating capacity; 6,400 miles of electric transmission lines; 57,000 miles of
electric distribution lines; and 32,800 miles of natural gas transmission and distribution
pipeline. In 15 different states, the company supplied 6 million utility and retail energy
clients and operated one of the nation’s largest underground natural gas storage systems,
totaling 947 billion cubic feet of storage capacity. In 2012, Dominion had grown into a
$12 billion revenue company (Exhibit 33.3) with assets of $47 billion (Exhibit 33.4)
and a $53 billion enterprise value, with three primary business segments:
EXHIBIT 33.2 | Map of Power and Natural Gas Infrastructure
Source: Company document. Used with permission.
EXHIBIT 33.3 | Dominion Corporate Income Statements, 2011–2012 (non-GAAP) ($ in millions
except per share amounts)
“Dominion management believed non-GAAP operating earnings provided a more meaningful representation of the
company’s fundamental earnings power.” Dominion Resources report: “GAAP Reconciliation Operating Earnings to
Reported 2007–2012.”
Source: Company document. Used with permission.
EXHIBIT 33.4 | Dominion Corporate Balance Sheet, 2011–2012 ($ in millions)
Source: Company document. Used with permission.
Generation (52% of earnings)—Dominion Generation managed the company’s
portfolio of merchant and regulated utility electric-power-generation assets along with
its energy trading and marketing activities. The electric-generation mix included coal,
nuclear, gas, oil, renewables, and purchased power.
Energy (27% of earnings)—Dominion Energy managed the company’s natural gas
transmission, gathering, distribution, and storage pipeline, and a natural gas storage
network, which was the largest in North America. Dominion Energy operated the Cove
Point LNG facility on the Chesapeake Bay. At the time, Cove Point was limited to
operating solely as an import and storage facility.
Dominion Virginia Power (21% of earnings)—Dominion Virginia Power managed
the company’s “regulated electric distribution and electric-transmission operations in
Virginia and northeastern North Carolina, as well as the nonregulated retail energy
marketing and regulated and nonregulated customer service operations.” The company
managed its 6,400 miles of electric transmission lines and 57,000 miles of distribution
lines in this segment.
Before 2006, Dominion had been a highly regulated utility and a nonregulated
exploration and production (E&P) oil and gas company. Because of the different risk
profiles inherent in the two core businesses, investors held very different visions for the
company. Investors who viewed the firm as a utility wanted Dominion to maximize cash
flow, while those who owned it for its oil and gas production assets wanted the firm to
increase investment in E&P in order to realize the value of its energy reserves. Some
equity analysts thought this split in the investor base caused Dominion to be undervalued
against both E&P and utility peers. From the standpoint of energy investors, Dominion
was viewed as not being a big enough risk taker, while utility investors preferred a risk
profile more similar to a stable, regulated utility.
When Farrell became CEO in 2007, he sold off billions of dollars of E&P oil and
gas assets to create a business that would derive the majority of its earnings from
regulated or “regulated-like” businesses. Under Farrell’s leadership, Dominion grew
investments in “regulated electric generation, transmission and distribution and
regulated natural gas transmission and distribution infrastructure” in and near locations
where it already conducted business. In general, regulated rates were approved based
on a “cost-plus method” that set the price of the commodity high enough to
cover the utility’s operating costs and provide a sufficient return on
equity (ROE) to attract capital. Thus, to grow its earnings, Dominion sought investment
opportunities in regulated/regulated-like businesses that would lock in future profits.
The newly abundant supply of U.S. natural gas gave Dominion an opportunity with the
Cove Point facility.
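Cost-plus ratemaking can be sketched as a revenue requirement: recover operating costs plus an allowed return on invested capital. This is a simplified illustration; real rate cases also include depreciation, taxes, and a return on debt, and every figure below is hypothetical:

```python
def revenue_requirement(operating_costs, equity_rate_base, allowed_roe):
    """Simplified cost-plus ratemaking: allowed revenue covers operating
    costs plus an authorized return on the equity invested in the rate
    base. (Depreciation, taxes, and the debt return are omitted here;
    all inputs are hypothetical.)"""
    return operating_costs + equity_rate_base * allowed_roe

req = revenue_requirement(operating_costs=800e6,
                          equity_rate_base=2_000e6,
                          allowed_roe=0.10)
print(f"Revenue requirement: ${req/1e6:,.0f} million")  # $1,000 million
```

The structure shows why regulated investment "locks in" profits: each incremental dollar of approved rate base earns the authorized ROE through rates.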
Because of the changing natural gas markets, Dominion had requested regulatory
approval to convert and operate Cove Point as a bi-directional facility (i.e., to export as
well as import LNG). Investor interest in the new potential use for Cove Point had
buoyed Dominion’s stock in recent months (Exhibit 33.5) as Wall Street analysts
attempted to quantify the potential value of the opportunity.
Cove Point
Dominion acquired the Cove Point LNG terminal in 2002 for $217 million. In 2012,
contracts related to the import, storage, and transportation of natural gas at Cove Point
produced $293 million in revenues and $196.05 million in EBITDA. Anticipating the
opportunity to export LNG, Dominion began renegotiating existing LNG import
contracts in 2010 to free up pipeline and storage capacity beginning in 2017.
EXHIBIT 33.5 | Dominion Stock Price versus Utility Index 2006–2013
Source: Company document. Used with permission.
Subject to regulatory approvals and financing, Dominion planned to start construction
on the project in 2014 and open the facilities by late 2017.
While the gas liquefied at Cove Point could be sourced from various places, the
facility would offer direct access to the Marcellus and Utica shale plays, which were
among the most productive natural gas basins in North America. If approved, about
750 million standard cubic feet of inlet feed gas would be processed each day. LNG
would be produced through natural gas–fired turbines powering the main refrigerant
compressors. The liquefaction facilities would “connect with the existing facility and
share common facilities such as the LNG tanks, pumps, piping, and pier in order to
support both importing and exporting LNG.”
Due to the robust global demand for natural gas, Dominion had been able to fully
subscribe Cove Point’s production capacity by signing 20-year agreements with two
large investment-grade companies. Each company had contracted for half of Cove
Point’s LNG capacity and, in turn, had announced agreements to sell gas and power to
other companies with the LNG supplied by Cove Point. Under the service agreements,
Dominion would not be responsible for inlet gas supply and would not be required to
take ownership of any of the LNG. These agreements allowed Cove Point to mitigate
commodity risk over the life of the contracts and positioned the project to fit within
Dominion’s “regulated-like” growth focus.
Meeting with Investment Bankers
After Scott Hetzer finished his opening statements, the investment bankers began their
presentation by pointing out that Dominion compared well to other utilities in terms of
credit rating, profitability, and capital structure (Exhibit 33.6). The discussion then
turned to the costs and benefits of using an “all-debt” financing strategy versus a “debt-and-equity” strategy.
The bankers listed a variety of issues for Dominion to consider when choosing between these two strategies, all of which related to Dominion’s future access to capital and the cost of that capital. As a capital-intensive company, Dominion was
frequently using the debt markets to raise money to either fund new investments or to
refund outstanding debt that was maturing. Debt issuances occurred almost every year,
whereas Dominion rarely accessed the equity market for funding. In fact, the last public
issuances of equity occurred in the years 2002 and 2003, when the company issued 98
million shares to raise $2,395 million.
EXHIBIT 33.6 | Financial Data for Comparable Utility Companies (year-end 2012, $ in millions)
Sources: Company 10-K reports, Value Line, and case writer estimates.
A primary consideration was Dominion’s credit rating, which would determine the interest rate Dominion would pay to borrow. A credit rating was assigned by rating
agencies such as Standard and Poor’s (S&P). S&P’s current rating for Dominion was an
A−, which S&P had assigned based on the combination of an “excellent” business risk profile and a “significant” financial risk profile. A variety of financial ratios were used to determine
the level of financial risk. For utilities, the two core ratios were funds from operations
(FFO)-to-debt and debt-to-EBITDA. The bankers judged that if debt-to-EBITDA were
to persistently remain above 4.5 (currently 4.6) and if FFO-to-debt were to persistently
fall below 13% (currently 13.6%), S&P would change Dominion’s financial risk to
“aggressive,” which would result in a credit downgrade to BBB+. (See Exhibit 33.7 for
S&P’s credit rating information.)
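The downgrade trigger the bankers described can be expressed as a simple screen on the two core ratios. A minimal sketch in Python, using the threshold values the bankers cited (the dollar inputs are illustrative figures chosen only to reproduce Dominion's 4.6x and 13.6% ratios, not actual balance sheet data):

```python
# Financial-risk screen on the two core utility ratios, using the
# thresholds quoted by the bankers in the case.

def assess_financial_risk(debt, ebitda, ffo):
    """Return (debt/EBITDA, FFO/debt, implied risk label) for a utility."""
    debt_to_ebitda = debt / ebitda
    ffo_to_debt = ffo / debt
    # Per the bankers: persistently above 4.5x debt/EBITDA AND below
    # 13% FFO/debt would move Dominion from "significant" to "aggressive".
    if debt_to_ebitda > 4.5 and ffo_to_debt < 0.13:
        label = "aggressive (downgrade to BBB+ likely)"
    else:
        label = "significant (A- rating maintained)"
    return debt_to_ebitda, ffo_to_debt, label

# Dominion's current position: 4.6x debt/EBITDA and 13.6% FFO/debt
# (dollar amounts below are illustrative, scaled to those ratios).
dte, ftd, label = assess_financial_risk(debt=23_000, ebitda=5_000, ffo=3_128)
print(f"{dte:.1f}x, {ftd:.1%} -> {label}")
```

Because both conditions must hold persistently, Dominion's current FFO-to-debt of 13.6% keeps it on the "significant" side of the line despite debt-to-EBITDA already sitting at 4.6x.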
EXHIBIT 33.7 | Standard and Poor’s Credit Ratings Definitions and Benchmark Ranges for Cash
Flow and Leverage Ratios of Medial Volatility Utilities
Standard & Poor’s Credit Rating Definitions
AAA Extremely strong capacity to meet financial commitments. Highest rating.
AA Very strong capacity to meet financial commitments.
A Strong capacity to meet financial commitments, but somewhat susceptible to adverse economic
conditions and changes in circumstances.
BBB Adequate capacity to meet financial commitments, but more subject to adverse economic conditions.
BB Less vulnerable in the near term but faces major ongoing uncertainties to adverse business,
financial, and economic conditions.
CCC Currently vulnerable and dependent on favorable business, financial, and economic conditions to
meet financial commitments.
S&P Benchmark Ranges for Cash Flow and Leverage Ratios of Medial Volatility Utilities
There were several costs to Dominion associated with a credit downgrade. First,
Hetzer estimated that in the current interest rate environment, a downgrade to BBB+
would result in the borrowing rate being about 0.40% (40 basis points) higher. In
addition, a BBB+ rating would be below Dominion’s target rating of A. Dominion had
committed to maintaining an A rating in order to be among the highest-rated utilities in
the industry. A strong credit rating gave Dominion access to a larger market of lenders
who wanted high-grade utility debt in their portfolios. Debt rated below the A level
would make Dominion’s risk profile less attractive to many institutional investors that
currently held Dominion debt. Moreover, a downgrade would result in a price decline
for Dominion’s outstanding bonds that would compromise years of good relations with
existing bondholders.
Notes to Exhibit 33.7:
EBITDA: Earnings before interest, taxes, depreciation, and amortization.
Funds from Operations (FFO): EBITDA minus net interest expense minus current tax expense.
Debt: All interest-bearing debt instruments plus finance leases minus surplus cash.
Data source: Standard & Poor’s Ratings Services, “Corporate Methodology,” November 19, 2013, Table 18: htt
The bankers argued that Wall Street viewed Dominion as primarily a regulated business (i.e., equity analysts and shareholders were highly focused on Dominion’s earnings per share (EPS) and dividend per share growth). Because utilities were regulated, the market viewed utility stocks as having a low-risk profile that allowed utilities to have consistent growth in EPS and distribute most of those earnings as dividends to the shareholders. Therefore, any interruption in the expected EPS growth would signal that future earnings and dividends were less reliable, and the utility’s stock price would suffer. Dominion had a strong record of EPS growth. From 1999 to 2012, the average compound growth rate had been 4.8%, which was due to the combination of good earnings and the reduction of shares outstanding. Beginning in 2006, Dominion used a series of stock repurchases to reduce the shares outstanding from 698 million in 2006 to 576 million in 2012. The bankers cautioned that an equity issuance would be a surprise to shareholders, and the resulting EPS dilution would likely prompt a significant share price decline. For example, if Dominion issued new shares at its current stock price of $55, raising $2 billion of equity would create a 6.3% dilution. In general, the bankers cautioned that with Dominion’s stock trading at a price-to-earnings ratio (P/E) of 18 and 574 million shares outstanding, just a $0.10 reduction in EPS would result in a loss of $1 billion of market value for the company.
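Both of the bankers' cautionary figures follow from quick arithmetic on the case numbers; a sketch:

```python
# Reproducing the bankers' two cautionary calculations.

# 1) Dilution from a $2 billion equity issue at the current $55 share price.
raise_amount = 2_000          # $ millions
share_price = 55.0            # $ per share
shares_outstanding = 576      # millions (year-end 2012)

new_shares = raise_amount / share_price     # ~36.4 million new shares
dilution = new_shares / shares_outstanding  # ~6.3% more shares outstanding

# 2) Market-value impact of a $0.10 EPS shortfall at a P/E of 18.
pe_ratio = 18
eps_drop = 0.10               # $ per share
shares = 574                  # millions, as quoted by the bankers
value_loss = eps_drop * pe_ratio * shares   # ~$1,033 million

print(f"dilution: {dilution:.1%}, value loss: ${value_loss:,.0f} million")
```

The second calculation simply holds the P/E multiple fixed: a $0.10 EPS drop times a multiple of 18 is a $1.80 price decline, which across 574 million shares is roughly $1 billion of market capitalization.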
As the bankers finished their presentation, they assured Hetzer that Dominion would
be able to raise enough money to refund all its debt coming due over the next five years
plus finance all the expected capital expenditures, including Cove Point. The question,
however, was whether Dominion wanted to rely solely upon the debt market or if the
company should use the equity market for some portion of the external funds needed. To
help understand the impact of the funding mix, Hetzer had asked his financial analysts to
prepare a financial model (Exhibit 33.8) that estimated the amount of external financing
needed and the impact upon the company’s financial profile based on using debt as the
sole source of external funding (Exhibit 33.9). To see the impact of issuing equity,
Hetzer could simply input the equity amount in the “New Equity” row. Hetzer was
meeting with Dominion’s CFO the next morning and wanted to be ready with a specific
financing plan that included the amount of equity issued, if any, and the timing of that issuance.
EXHIBIT 33.8 | Financial Planning Model: All-Debt Financing ($ in millions except per share amounts and ratios)
Source: Case writer estimates.
EXHIBIT 33.9 | Financial Ratios and EPS for All-Debt Financing Strategy
Source: Case writer estimates.
As another alternative to the funding problem, Hetzer could propose to cancel or
postpone Cove Point. At $3.6 billion, Cove Point was an expensive undertaking. In fact,
this single project represented 24% of Dominion’s total capital expenditures for the next
five years (Exhibit 33.10). Therefore, canceling Cove Point would reduce Dominion’s
funding needs enough to allow the company to raise all the funding with debt without
jeopardizing the credit rating. On the other hand, the project was expected to contribute
in excess of $200 million in net operating profit after tax by 2018 and create more than
$600 million of value for the enterprise (Exhibit 33.11).
EXHIBIT 33.10 | Dominion Resources Capital Expenditures 2013–2017 ($ in millions)
Source: Case writer estimates.
EXHIBIT 33.11 | Cash Flows of Cove Point Liquefaction Project ($ in millions)
Source: Case writer estimates.
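The value estimate behind Exhibit 33.11 rests on standard discounted-cash-flow mechanics, which can be sketched as follows. The cash-flow profile and the 8% discount rate below are placeholder assumptions, not the exhibit's actual figures:

```python
# Generic NPV mechanics behind a project value estimate like Exhibit 33.11.
# All inputs are illustrative placeholders, not the case exhibit's numbers.

def npv(rate, cash_flows):
    """Discount a list of year-end cash flows (year 1, 2, ...) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical profile: construction outflows in 2014-2017 summing to the
# $3.6 billion cost, then 20 years of operating inflows under the
# fully subscribed 20-year service agreements.
construction = [-900] * 4   # $ millions per year
operations = [450] * 20     # $ millions per year
project_value = npv(0.08, construction + operations)
print(round(project_value, 1))  # positive NPV under these placeholder inputs
```

Because the service agreements fix capacity payments for 20 years, the operating inflows can be treated as relatively low-risk, which is what justifies valuing the project like a "regulated-like" asset rather than a merchant energy venture.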

Nokia OYJ: Financing the WP Strategic Plan
I have learned we are standing on a burning platform.
—Stephen Elop, Nokia CEO
In January 2012, Stephen Elop reflected on his tumultuous first year and a half as
president and CEO of Nokia. During that time, he had completed a review of the
company’s performance and strategic direction and been forced to admit to employees
that they were “standing on a burning platform,” threatened by intense competition in the
mobile phone market (Exhibit 34.1).
EXHIBIT 34.1 | Elop’s “Burning Platform” Memo to Employees, February 8, 2011
“There is a pertinent story about a man who was working on an oil platform in the North Sea. He woke up
one night from a loud explosion, which suddenly set his entire oil platform on fire. In mere moments, he was
surrounded by flames. Through the smoke and heat, he barely made his way out of the chaos to the
platform’s edge. When he looked down over the edge, all he could see were the dark, cold, foreboding
Atlantic waters. As the fire approached him, the man had mere seconds to react. He could stand on the
platform, and inevitably be consumed by the burning flames. Or, he could plunge 30 meters in to the freezing
waters. The man was standing upon a “burning platform,” and he needed to make a choice.
Over the past few months, I’ve shared with you what I’ve heard from our shareholders, operators,
developers, suppliers and from you. Today, I’m going to share what I’ve learned and what I have come to
believe. I have learned that we are standing on a burning platform. And, we have more than one explosion –
we have multiple points of scorching heat that are fuelling a blazing fire around us.
Apple disrupted the market by redefining the smartphone and attracting developers to a closed, but very
powerful ecosystem. Apple demonstrated that if designed well, consumers would buy a high-priced phone
with a great experience and developers would build applications. They changed the game, and today, Apple
owns the high-end range.
In about two years, Android came in at the high-end, they are now winning the mid-range, and quickly they are going downstream to phones under €100. Google has become a gravitational force, drawing much of the industry’s innovation to its core.
[In] the low-end price range, manufacturers in the Shenzhen region of China produce phones at an unbelievable pace. By some accounts, this ecosystem now produces more than one third of the phones sold globally – taking share from us in emerging markets.
The battle of devices has now become a war of ecosystems. Our competitors aren’t taking our market share with devices; they are taking our market share with an entire ecosystem. This means we’re going to have to decide how we either build, catalyse, or join an ecosystem.
In the meantime, we’ve lost market share, we’ve lost mind share and we’ve lost time.
On Tuesday, Standard & Poor’s informed that they will put our A long term and A-1 short term ratings on negative credit watch. This is a similar rating action to the one that Moody’s took last week. Why are these credit agencies contemplating these changes? Because they are concerned about our competitiveness.
How did we get to this point? Why did we fall behind when the world around us evolved? This is what I have been trying to understand. I believe at least some of it has been due to our attitude inside Nokia. We had a series of misses. We haven’t been delivering innovation fast enough. We’re not collaborating internally.
Nokia, our platform is burning.
The burning platform, upon which the man found himself, caused the man to shift his behaviour, and take a bold and brave step into an uncertain future. He was able to tell his story. Now, we have a great opportunity to do the same.
Source: Quoted passages from e-mail obtained by and also reprinted in full at (accessed June 8, 2011).
In the preceding years, Nokia, the world’s leading producer of mobile phones, had seen its market share and profits eroded by rival products such as Apple’s iPhone and
phones featuring Google’s Android operating system. At the same time, its dominance in
the larger, lower-priced phone segment had been threatened by competition from
Samsung, LG, and ZTE. In February 2011, Elop had made his first major decision to
correct the company’s course, a broad strategic plan and partnership with Microsoft
(“the plan”) in which, among other initiatives, Windows would serve as Nokia’s
primary smartphone platform. Rather than quell the concerns as he had hoped, the plan’s
announcement had seemed only to raise more questions about the scope and timing of
the transition involved.
Reinforcing those concerns, the company had reported a net loss in July
2011, which was followed by a downgrade of the company’s credit rating the following
month. In late 2011, the first Windows smartphones appeared in Europe and Asia (under
the trade name Lumia), but the biggest challenge still awaited Nokia. Beginning
in 2012, the new phones would roll out in the United States, the most important
market for smartphones. Only after the rollout would Elop know whether his decision to
join forces with Microsoft would improve Nokia’s competitive position.
It was left to Nokia’s CFO Timo Ihamuotila to assess the firm’s financing needs over
the critical next two years. He estimated that the firm might need as much as (U.S.
dollars) USD5.6 billion (equivalent to [euros] EUR4.3 billion) in external financing to
see it through these years. At the moment, none of the alternatives to raise that amount of
funding was particularly appealing. With its newly lowered credit rating, any new issue
of debt would have to consider the impact of a potential loss of investment-grade rating.
On the other hand, with the firm’s stock price hovering near USD5 (EUR4) per share, an
equity issue would raise concerns about share dilution and the negative signal it might
send to the market. The firm had adequate cash reserves at the moment and, despite the
decline in earnings, had maintained its dividend over the past few years. Ihamuotila
would have to carefully assess these alternatives and devise a plan that would allow
Nokia to complete its restructuring plan and give the firm a chance to put out the flames.
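The scale of the financing need, and the dilution worry at a share price near EUR4, can be roughed out as follows. The share count used here is an assumption for illustration only; the case text does not give it:

```python
# Scale of Nokia's external financing need and the share-dilution
# implication of covering it entirely with equity. The shares-outstanding
# figure is an assumed input (not stated in the case text).

usd_per_eur = 1.30                 # year-end 2011 rate
need_usd = 5.6e9                   # Ihamuotila's upper estimate, USD
need_eur = need_usd / usd_per_eur  # ~EUR4.3 billion

price_eur = 4.0                    # share price hovering near EUR4
shares_outstanding = 3.7e9         # hypothetical count for illustration

new_shares = need_eur / price_eur  # shares to issue at the depressed price
dilution = new_shares / (shares_outstanding + new_shares)
print(f"need: EUR{need_eur / 1e9:.1f}bn, new shares: {new_shares / 1e9:.2f}bn, "
      f"dilution: {dilution:.0%}")
```

The point of the sketch is the sensitivity: at a depressed share price, a fixed euro amount of equity requires far more shares, which is why the low stock price made the equity alternative so unappealing.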
Company Background
Nokia began in 1865 as a paper company situated near the river Nokianvirta in Finland
after which the company was named. It grew from a little-known company to be a
leading mobile phone manufacturer in the 1990s, and by 2011, it was a global leader in
mobile communications. The company operated in more than 150 countries and had
more than 130,000 employees. Nokia managed its operations across three operating
segments: Devices and Services (D&S), Nokia Siemens Networks (NSN), and Location
and Commerce (L&C). In 2011, D&S accounted for 62%, NSN for 36%, and L&C for
3% of its net sales.
The D&S segment comprised three business groups: mobile phones, multimedia,
and enterprise solutions. It developed and managed the company’s mobile devices
portfolio and also designed and developed services and applications (apps) to
customize mobile device users’ experience.
NSN was a joint venture formed by combining Nokia’s carrier networks business
and Siemens’s carrier-related operations for both fixed and mobile networks. It began
operations on April 1, 2007, and although it was jointly owned by Nokia and Siemens,
it was governed and consolidated by Nokia. NSN provided fixed and mobile network
infrastructure, communications, and network service platforms to operators and service providers.
L&C was one of the main providers of comprehensive digital map information,
mapping applications, and related location-based content services (i.e., GPS). It grew
out of Nokia’s $8 billion acquisition of NAVTEQ in July 2008.
As recently as October 2007, Nokia’s stock price had hit a six-year high of
USD39.72 (EUR27.45) before falling to USD4.82 (EUR3.72) at the end of December
2011 (Exhibit 34.2). From that lofty performance of 2007, few could have imagined
how quickly the company’s fortunes would change.
Recent Financial Performance
Nokia’s success historically had been rooted in its technical strength and reliability. Its
products required a great deal of technical sophistication, and the company spent large
amounts on R&D to maintain its edge. The company was credited with a number of
“firsts” in the wireless industry and maintained intellectual property rights to over
10,000 patent families. In 2011, the company employed approximately 35,000 people in
R&D, roughly 27% of its total work force, and R&D expenses amounted to EUR5.6
billion, or 14.5% of net sales. Few companies could match the scale of Nokia’s R&D or the size of its distribution network—the largest in the industry, with over 850,000 points of sale globally.
EXHIBIT 34.2 | Cumulative Stock Returns on Nokia versus S&P500: Jan. 2006–Dec. 2011
Nokia’s shares were traded in U.S. dollars as an American Depositary Receipt on the New York Stock Exchange. Share prices were converted into euros at the daily exchange rate. Nokia’s year-end 2011 share price was USD4.82 (EUR3.72). The December 31, 2011, exchange rate was USD1.2959 per euro.
Data source: Center for Research in Security Prices (CRSP).
Although Nokia’s stock price performance was adversely affected by the global
financial crisis in 2008–09, which weakened consumer and corporate spending in the
mobile device market, its problems extended beyond that (Exhibit 34.2). Analysts
estimated that only about 50% of Nokia’s decline could be attributed to market
conditions; the rest was said to be due to a loss of competitive position to established
industry players and new entrants. Since 2007, the firm had experienced a loss of
market share in most of its core markets (Exhibit 34.3). The mobile phone market was
broken into two segments: smartphones and mobile phones. Mobile phones had more
basic functionality and generally sold at prices under EUR100. Smartphones had
enhanced functionality and also higher average selling prices (ASPs). Traditionally,
Nokia had concentrated on the mobile phone market, which helped facilitate its high
market share in emerging markets. In 2007, mobile phones accounted for 86% of the
firm’s total handset production. For many years, this strategy had paid off because as
late as 2007, smartphones made up only 10% of the total handset market. But since
2007, the growth in smartphones had accelerated so that by 2011, their share had
increased to 30% of the market and accounted for 75% of total industry revenue.
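Those two shares imply how much more revenue a smartphone generated per unit than a basic phone; a quick check of the arithmetic:

```python
# Implied relative average selling price (ASP) from the 2011 market mix:
# smartphones were 30% of units sold but 75% of total industry revenue.
smart_unit_share, smart_rev_share = 0.30, 0.75

smart_rel_asp = smart_rev_share / smart_unit_share              # 0.75/0.30 = 2.5
basic_rel_asp = (1 - smart_rev_share) / (1 - smart_unit_share)  # 0.25/0.70 ≈ 0.357

asp_multiple = smart_rel_asp / basic_rel_asp
print(round(asp_multiple, 1))  # ≈ 7: a smartphone earned about 7x a basic phone's revenue
```

This seven-to-one revenue gap per handset is why Nokia's concentration in basic mobile phones, a winning strategy through 2007, became a liability as the mix shifted.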
EXHIBIT 34.3 | Nokia Market Share by Geographic Area and Mobile Device Volume (units in millions)
Nokia had introduced the first smartphone in 1996 (the Nokia 9000), but since then the market had become highly competitive. Exhibit 34.4 details competitor offerings in
the smartphone market and key features that differentiated their products. Nokia’s
smartphones featured the Symbian operating system (OS), introduced in 1998 as an open-source OS developed by several mobile phone vendors. Symbian,
though reliable, proved to be an inflexible platform that had slow upgrade
cycles and lacked an appealing user interface (UI). The two most significant
events for Nokia were the launch of Apple’s iPhone in July 2007 and Google’s Android
system in October 2008. From a standing start, both products had made strong inroads in
the marketplace and had captured significant market share from Nokia in just a few short
years (Exhibit 34.5). Apple and Google were software companies that focused on ease
of use and apps that enabled users to customize and personalize their phones. Nokia’s
origins were in manufacturing and engineering, and it had vertically integrated forward
into software and design. Neither Apple nor Google manufactured their handsets, and
analysts speculated about which trend would dominate the future smartphone market.
One analyst said, “Nokia is not a software company. Apple is a software company, and what distinguishes the iPhone and the iPod Touch is their software. I think Nokia is just on the losing side.”
Data source (Exhibit 34.3): Nokia 20-F filings, 2007–11.
EXHIBIT 34.4 | Competing Smartphones and Operating System Platforms
Data source: “Nokia Corporation, Initiating Coverage,” Wedbush research note, February 11, 2011.
EXHIBIT 34.5 | Smartphone Operating System and OEM Market Share
The Next Billion
Entry-level devices were one of the fastest-growing segments of the mobile phone
market. With far fewer infrastructure requirements than traditional land lines, rural
communities in emerging-market countries, such as China and India, were essentially
skipping wired communications and moving straight to wireless networks. The same
trend was also being observed in developed markets as individuals increasingly chose
to rely on wireless phones and dropped land-line phones. It had taken until 2002,
approximately 20 years after the first mobile phone was introduced, to achieve the first
billion mobile phone subscribers but only 3 years to add the next billion subscribers,
and 2 years for the next billion. As mobile phones were adopted globally, it took
successively less time to add the next billion subscribers. In 2010, there were an estimated 3.2 billion people who lived within range of a mobile signal who did not own a mobile phone. Because Nokia had one of the widest product portfolios, with devices spanning from super-low-cost phones (<EUR50) to smartphones (EUR300+), and the most extensive distribution channel, it was well positioned to capture this growth. But even with a strategy in place for the next billion, Nokia faced intense competition, particularly from Samsung, LG, and ZTE, which also had targeted phones at lower price points for these markets. Also troubling, the Asia-Pacific region saw a large increase in iPhone use in 2010, the strongest take-up of Apple devices in the world.
Notes to Exhibit 34.5: Data through third quarter, September 2011. Data sources: Gartner Research and company reports.
Change at the Top
Throughout 2009, Nokia introduced a number of initiatives in response to the
competitive pressures it faced. These included, among others, the opening of Ovi stores,
one-stop shops for applications and content, similar to Apple’s popular iTunes stores;
introducing a new Linux-based operating system (MeeGo) and high-end tablets; placing
an executive in charge of user-friendliness on the board; and pursuing several cost-cutting and restructuring measures. Despite its efforts, Nokia could not shake the
perception that it was being upstaged by a “Cupertino competitor,” and as global
demand picked up in late 2009 and 2010, Nokia’s performance was a notable laggard.
In September 2010, Nokia’s board asked longtime president and CEO Olli-Pekka
Kallasvuo to step aside and replaced him with Stephen Elop, who was the first outsider
and non-Finn (he was Canadian) to head the company.
Strategic Plan with Microsoft
During his first few months with the company, Elop carefully weighed three alternatives
to be the firm’s primary platform for smartphones: staying the course with
Symbian/MeeGo, which meant building something new from MeeGo; adopting Android,
which had volume share; or adopting Windows and becoming a key partner for
Microsoft, which needed a hardware provider that would put it first. On February 11,
2011, he announced the plan with Microsoft to build a new ecosystem with Windows
Phone (WP) serving as Nokia’s primary smartphone platform and a combined approach
to capture the next billion in emerging growth markets. Main features of the plan
included the following:
• Over the next few years, the Symbian platform would be gradually phased out as the WP platform was phased in. Underscoring the scope of this transition, in 2010, Symbian phones amounted to 104 million units (ASP EUR143.6), virtually all of Nokia’s smartphone production (Exhibit 34.3).
• Nokia delivering mapping and navigation capabilities to the WP ecosystem
• Microsoft bringing Bing search, apps, advertising, and social media to Nokia devices
• Both companies combining forces to develop apps and content for the WP platform
• Nokia’s Ovi Store merging with Windows Marketplace
• Microsoft receiving a royalty from Nokia for each product shipped
• Nokia contributing technical support, including hardware design, language support, and other help developing WP for a wider range of prices
• Nokia receiving payments from Microsoft to support marketing and development
The plan would bolster Nokia’s presence in North America (where individuals and businesses were familiar with Windows) while benefiting Microsoft in emerging markets (where Nokia was well known). Gross margins would be lower as a result of the royalty payments paid to Microsoft, but Nokia stood to gain from materially reduced R&D expenses and Microsoft’s expected support for sales and marketing.
Analysts were mixed on whether this plan would solve Nokia’s problems. Some saw the plan as facilitating strong complementarities between the companies and eliminating the lagging Symbian/MeeGo product lines. Reducing the number of OS platforms and trimming its diverse product portfolio would improve focus and reduce the cost of maintaining multiple product lines. Other analysts were skeptical that the
plan would halt Nokia’s slide. In pursuing this plan, Nokia was all but admitting that
Symbian had failed, and analysts feared a rapid fall-off in Symbian-based units during
the transition. Further, as Nokia attempted to catch up in smartphones, other competitors
would become stronger from a distribution and product perspective. Meanwhile,
Microsoft itself was an unproven player in the mobile phone market. The market, too,
seemed skeptical as Nokia’s stock price declined 14% on announcement of the plan.
In the first months following the announcement of the plan, the skeptics’ view
seemed to prevail. After Nokia announced that it would cease support for Symbian phones after 2016, customers and app developers quickly turned their attention elsewhere, and Nokia suffered a more rapid decline in market share and profits than
expected. As a result, in July 2011, the company reported a net loss of EUR368 million
versus a net profit of EUR227 million a year earlier, and the loss widened considerably
by year end. Prior to 2008, Nokia had issued relatively little long-term debt, but in
response to the NAVTEQ acquisition and the global financial crisis in 2009, it raised
EUR2.25 billion and another USD1.5 billion in debt at a rating of A. Citing severe
weakening of the company’s business position, in the summer of 2011, Moody’s
downgraded the company’s debt two notches from A3 to Baa2, and Fitch downgraded it
to BBB–, the lowest notch before losing investment-grade.
Overshadowed by questions about the transition were several positive
developments that helped stabilize the situation. Beginning in November 2011, the first
million units of Lumia 710 and 800 phones finally rolled out in Europe and Asia. In
early 2012, the Lumia 900 won the prestigious Consumer Electronics Show award for
“Best Smartphone of 2012.” Gartner Research, an influential commentator in the
sector, predicted that Nokia’s WP units would be second in market share behind Android phones by 2015, displacing Apple as the current number two.
While Lumia phones did not yet rival the top Android or Apple phones, analysts viewed
them as a significant improvement over Symbian phones. The number of Windows
apps materially improved over 2011 as developers showed renewed interest in the
platform; this was also an important factor for the ecosystem that supported
smartphones. By 2012, there were 60,000 Windows apps available compared with
460,000 for Apple and 320,000 for Android. As the market narrowed to two OS
platforms (Apple and Android), there was also growing consensus among procurers of
mobile phones that a third competitor would be a positive development for the industry.
Over the course of 2011, the tough competitive environment also took its toll on
Nokia’s competitors. Research in Motion (RIM), a company with a strong platform
among commercial users of mobile phones, fell behind in its product development and
experienced significant losses in earnings and market share during 2011. Management
turmoil at Ericsson frustrated attempts to gain market share. On August 15, 2011, after
earlier slamming the Nokia–Microsoft partnership, Google reversed course and announced
plans to purchase Motorola Mobility, a financially struggling spin-off of Motorola, for
$12.5 billion. On October 5, 2011, the world was shaken by the death of Steve Jobs,
the CEO and legendary founder of Apple. While analysts speculated about what the loss
of his leadership might mean for the company longer term, consumers reacted favorably
to the iPhone 4S, announced the day before his death. The Apple juggernaut seemed
poised to continue, at least for the foreseeable future.
Implications for Financing
Nokia had always financed itself conservatively, and it fell to Nokia’s CFO Timo
Ihamuotila and his team to consider the implications of the plan on the firm’s external
financing requirements for 2012 and 2013. Exhibits 34.6 and 34.7 give the historical
income statement and balance sheet used as reference points to help prepare the
forecasts. From these, the team chose initially to set cash and short-term securities at a
minimum of 27% of sales, which reflected the average of Nokia’s cash-to-sales ratio in
the years outside of the global downturn (i.e., 2007, 2010, and 2011). In addition, the
team wanted some cash reserves for acquisitions to be able to respond quickly to a
changing competitive landscape as the firm had done with its acquisition of NAVTEQ in
2008. Information on Nokia’s peers is given in Exhibit 34.8.
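The team's cash-floor rule translates into a simple planning identity: external funds are whatever is needed to keep cash and short-term securities at 27% of sales. A sketch with illustrative figures (not Exhibit 34.9's forecasts):

```python
# Sketch of the forecasting team's cash-floor rule: minimum cash and
# short-term securities at 27% of forecast sales, with any shortfall
# versus the projected cash balance becoming external financing needed.
# All figures are illustrative, not the case exhibit's forecasts.

def external_financing_needed(sales, projected_cash, cash_floor_pct=0.27):
    """Funds to raise so cash does not fall below the policy floor (EUR millions)."""
    required_cash = cash_floor_pct * sales
    return max(0.0, required_cash - projected_cash)

# Example: EUR35 billion of sales implies a EUR9.45 billion cash floor;
# if operations are projected to leave only EUR7 billion of cash,
# the gap must be raised externally.
gap = external_financing_needed(sales=35_000, projected_cash=7_000)
print(round(gap, 1))
```

Running the same identity across Ihamuotila's range of market-share scenarios is what produces a range of financing needs rather than a single number.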
EXHIBIT 34.6 | Historical Income Statement (in millions of euros, except per share)
Data source: Nokia 20-F filings.
EXHIBIT 34.7 | Historical Balance Sheet (in millions of euros)
Data source: Nokia 20-F filings.
EXHIBIT 34.8 | Financial Information on Peer Companies (in millions of euros, except per share)
Google interest expense not reported; the immaterial amount is estimated by case writers.
Market Cap information is from Yahoo! Finance. Financial data are on an as-reported basis for the most recent fiscal year-end.
Cash includes short-term liquid investments. Dollar-per-euro exchange rate was 1.30 on December 31, 2011.
Notes: NA = not available, NR = not rated.
Nokia, like other technology companies, typically chose to maintain high
cash balances to bolster perceptions of its financial strength. Bill Wetreich, a
managing director at S&P, explained:
Tech companies tend to self-impose high cash balances on themselves for a couple reasons. For one, a lot of
cash is tied up in the business because they tend to self-fund working capital, R&D, etc., and don’t make
much use of commercial paper. Another reason is that they can miss a product cycle and need cash to see
them through. You don’t have to look very far to see examples of tech companies that miss a product cycle
and have their financial strength rapidly disappear.
Academics routinely prescribed more debt for these companies to take advantage of
the tax savings on interest to lower the firm’s cost of capital and to control agency costs
by reducing the free cash flow at managers’ discretion. The financial press simply
called for higher dividends or share repurchases. In the meantime, companies appeared
to reject those arguments and increased cash holdings, from an average cash-to-assets
ratio of 10.5% in 1980 to 23.5% in 2006. The secular increase in cash holdings
seemed to suggest firms were responding to increased risk and using cash as some kind
of buffer.
Given the uncertainty regarding Nokia’s future market share, Ihamuotila forecasted a
range of possible outcomes. Exhibit 34.9 gives a forecast of Nokia’s income statement
by segment for 2012 and 2013 for one representative upside and downside scenario. He
believed the downside scenario should be given greater weight for planning purposes
given the firm’s recent financial performance and the difficult transition ahead.
Data sources for Exhibit 34.8: S&P’s Compustat and Mergent Online.
EXHIBIT 34.9 | Forecasted Income Statement (in millions of euros)
For Devices and Services (D&S), the largest division, the downside scenario
recognized the continued significant loss of Symbian and overall Nokia market share,
while under the upside scenario, sales stabilized and were predicted to exceed 2011
levels by 2013. Gross margins were expected to decline due to competitive price
pressure, royalty payments to Microsoft, and aggressive pricing of Nokia phones,
reflecting Elop’s strategy for capturing market share. Under the downside scenario,
these factors were expected to erode gross margins to 24% in the near term, while under
the upside scenario, gross margins were projected at 26% over the next two years.
Lower margins led to lower adjusted operating profits in the pro formas, but these
were partially offset by savings from restructuring efforts and Microsoft’s support for
R&D and marketing functions, with the upside scenario reflecting more and faster
realization of savings.
Total Sales and Total Operating Profits in 2010 (2011) include EUR356 (EUR416) and EUR319 (EUR181) in
intercompany sales and profits, respectively. Projections subtract EUR230 from segment sales and EUR230 from
operating profit each year to account for intercompany sales and profits.
Data sources: Case writer estimates, which are broadly consistent with the range of analyst estimates; historical divisional
sales and operating profit numbers from an Oppenheimer analyst report dated February 28, 2012; historical amortization
and goodwill impairment from 20-F filings.
Nokia Siemens Networks (NSN) announced a major restructuring effort in late
2011, which would result in significant near-term restructuring charges and a phase-out
of less-profitable operations. Profits had been slow to materialize at NSN, and the
downside scenario reflected continuing struggles for this division. Location and
Commerce (L&C) had also been a disappointing performer (resulting in a significant
writedown of goodwill in 2011), and the forecasts pointed to continued subpar
performance.
Based on the forecasted assumptions, in the event the downside scenario
materialized, Nokia estimated it would need to raise EUR4.3 billion through
2013 (Exhibit 34.10).
EXHIBIT 34.10 | Forecasted Balance Sheet (in millions of euros)
Data source: Case writer estimates based on company filings and analyst reports.
Financing Alternatives
Ihamuotila and his team would have to carefully weigh the pros and cons of funding
EUR4.3 billion through debt, equity, or a reduction in dividends or cash. Although some
combination of these alternatives could be employed, the team chose to evaluate each
separately to assess their relative impact on the firm’s financial situation.
Sluggish demand keeps inventories at a higher percentage of sales in the downside versus the upside scenario.
Accounts receivable are 19% of sales in the downside scenario but 18% in the upside scenario.
Forecasted at the four-year average percentage of sales.
Forecasted at the five-year average percentage of sales in the downside scenario and at a slightly higher rate in the
upside scenario.
Assumed 2011 was an outlier.
Common equity and long-term liabilities are held constant.
Short-term debt from 2011 is paid off.
Amortization on intangible assets of EUR850 in 2012 and EUR556 in 2013, at which point the 2011 EUR1,406 million
balance is fully amortized. The forecasts assume no new acquisitions, so goodwill, intangibles, and other assets remain
constant except for the above amortization.
Capital expenditures are expected to roughly equal depreciation of EUR700 million.
Assumes dividends remain the same as the past three years at EUR0.40 per share.
Issue Long-Term Debt
At its current rating, Nokia’s debt was still investment-grade, but it did not want to risk
a further deterioration in rating. Credits rated BBB–/Baa3 or higher were considered
investment-grade, whereas those rated below that grade (BB+/Ba1 or lower) were
considered noninvestment-grade and were often referred to as high-yield or junk debt.
Some institutional investors (such as pension funds and charitable trusts) were limited
in the amount of noninvestment-grade debt they could hold, and many individual
investors also avoided it. For that reason, there was typically a large increase in
spreads (i.e., the yield on debt over the yield on a comparable-maturity U.S. Treasury)
when ratings dropped from investment- to noninvestment-grade (Exhibit 34.11).
Currently, interest rates were at historic lows following rate reductions by the U.S.
Federal Reserve and European Union monetary authorities in response to the financial
crisis of 2008. As the global economy recovered, rates were expected to increase, but
when that might happen and the extent of the increase were widely debated.
EXHIBIT 34.11 | Capital Market Rates
Aaa and Baa Spread over 10-Year U.S. Treasury Rate
In addition, the ability to issue noninvestment-grade debt depended more heavily
on the strength of the economy and on favorable credit market conditions than did
the ability to issue investment-grade debt. The high-yield debt market had grown
significantly from its origins in the mid-1980s as a primary source of financing for
leveraged buyouts to be a more diversified source of financing for many companies. For
two decades, high-yield debt made up 20.5% of the total volume of debt raised, but in
any given year, the percentage ranged from a low of 2.3% in 1991, a recessionary year,
to highs above 29% in 2004 and 2005, before dropping to 8.8% in 2008 as the financial
crisis took hold (Exhibit 34.12). Consequently, investment-grade debt was typically
easier to raise in economic downturns, when firms might have greater need to borrow.
Data source: St. Louis Federal Reserve Archival Federal Reserve Economic Data (ALFRED) database.
Although Nokia’s debt-to-equity ratio rose considerably from 2007 to 2011, when
its cash position was factored in, it had negative net debt throughout this period. The
ability to pay off its debt with cash seemed at odds with the notion of deteriorating
creditworthiness. For the purposes of credit rating, the rating agencies did not look at
leverage on a net debt (or net interest) basis. In their view, cash was at the discretion of
management—a firm could have a high cash balance today but decide to do an
acquisition, and the remaining cash might not be sufficient to pay off the debt when it
matured. Credit ratings were therefore a combination of financial strength and business
risk, which due to competitive pressures was currently high for Nokia. Exhibit 34.13
provides credit metrics helpful in assessing the impact of a EUR4.3 billion debt issue
on Nokia’s credit rating.
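The gross-versus-net distinction described above amounts to the following; the euro figures are illustrative placeholders, not Nokia’s actuals.

```python
# Net debt = total debt minus cash. A firm can carry negative net debt
# (cash exceeds debt) yet still be rated on its gross leverage, since
# rating agencies treat cash as being at management's discretion.
# Illustrative figures in EUR billions, not Nokia's actuals.
total_debt = 5.0
cash = 9.0
net_debt = total_debt - cash
print(net_debt)  # -4.0: cash on hand could retire all outstanding debt
```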
EXHIBIT 34.12 | Total Volume of Investment-Grade and High-Yield Debt Issued: 1991–2011
Note: Data include all fixed-rate non-convertible debt issues with maturities of three years or more made by North American and
European companies. “%HY” is the percentage of high-yield debt issued to total debt in a given year.
Data source: Thomson Reuters Security Data Corporation.
Issue Equity
An equity issue would help support the firm’s existing debt and improve its credit
rating. It also would provide the most flexible form of financing. But with the firm’s
share price at about USD5 (EUR4), management was loath to raise EUR4.3 billion in new
equity. It feared serious dilution of EPS and was concerned that the market might react
negatively to an equity issue as a signal that management believed future earnings could
deteriorate further.
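The dilution concern can be illustrated with a rough sketch. The share count, earnings, and issue price below are hypothetical placeholders, not Nokia’s actual figures.

```python
# EPS dilution from raising EUR4.3 billion of equity at a depressed price.
# All inputs are hypothetical placeholders for illustration.
raise_amount = 4.3e9    # EUR to be raised
issue_price = 4.0       # EUR per share, hypothetical depressed price level
shares_before = 3.7e9   # hypothetical share count
earnings = 1.0e9        # hypothetical annual earnings, EUR

shares_issued = raise_amount / issue_price
shares_after = shares_before + shares_issued

eps_before = earnings / shares_before
eps_after = earnings / shares_after
# The lower the issue price, the more shares must be sold to raise the
# same amount, and the greater the dilution of EPS for existing holders.
print(round(eps_before, 3), round(eps_after, 3))
```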
Eliminate Dividends
Given the difficulty of making any external issue at this time, the team also considered
eliminating the firm’s dividend to fund the EUR4.3 billion. Over 2007 to 2011, Nokia
had paid at least EUR0.40 in dividends per share (DPS) despite the precipitous decline in
earnings over the same period. Investors typically reacted negatively to reductions in
dividends, and management worried about the implications of cutting dividends when
its stock price was already so low. Unlike the contractual obligations of
debt, however, there was no legal requirement to pay dividends.
EXHIBIT 34.13 | Credit Metrics by Rating Category
Note: S&P defined the ratios on the book value of these items as follows:
EBIT interest coverage = EBIT/interest expense
Debt/EBITDA = (Short-term debt + Long-term debt)/(EBIT + depreciation and amortization)
Debt/(debt + equity) = (Short-term debt + Long-term debt)/(Short-term debt + Long-term debt + Stockholders’ equity)
Data source: S&P CreditStats “2009 Adjusted Key U.S. and European Industrial and Utility Financial Ratios,” S&P Credit
Research, August 20, 2010.
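The ratio definitions in the Exhibit 34.13 note can be computed directly. The inputs below are illustrative book values in EUR billions, not Nokia’s figures.

```python
# S&P credit metrics as defined in the Exhibit 34.13 note, computed on
# book values. All inputs are hypothetical, in EUR billions.
def credit_metrics(ebit, interest, st_debt, lt_debt, dep_amort, equity):
    debt = st_debt + lt_debt
    return {
        "EBIT interest coverage": ebit / interest,
        "Debt/EBITDA": debt / (ebit + dep_amort),
        "Debt/(debt + equity)": debt / (debt + equity),
    }

m = credit_metrics(ebit=2.0, interest=0.25, st_debt=1.0, lt_debt=4.0,
                   dep_amort=1.0, equity=11.0)
for name, value in m.items():
    print(f"{name}: {value:.2f}")
```

Comparing the computed ratios to the cutoffs by rating category in Exhibit 34.13 indicates roughly where a EUR4.3 billion debt issue would place Nokia’s credit profile.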
Reduce Cash
Finally, another alternative might be to reduce the cash balance of the firm to fund the
entire EUR4.3 billion. Historically, Nokia had maintained high cash balances to
preserve flexibility and financial strength. Some argued that if now was not the time to
use the cash, when would be? On the other hand, RIM had chosen this course in dealing
with its current difficulties, depleting 40% of its cash balance in 2011.
At the beginning of 2012, the members of the team had little visibility into what the
firm’s financing needs might turn out to be. If they were prepared for the worst, they were
confident more favorable outcomes could be managed. To the extent that not all of the
funding was needed in a given year, the remainder could be held in cash, given Nokia’s
great need for flexibility and reserves.
Over its 145-year history, Nokia had survived many crises and reinvented itself
numerous times to become a leading global mobile communications firm. As the plan
with Microsoft unfolded over the next two years, the proud Finnish company would
jump into the icy North Sea—which financing alternative would best prepare the
company to survive this time?
Page 449
Kelly Solar
Jessica Kelly was learning to deal with disappointment. In early May of 2010, her startup
was poised to sell its designs and related patents to a large manufacturer of solar
equipment. This was to be the culmination of two years of hard work developing a new
product from a series of patents she had purchased through her fledgling company, Kelly
Solar. That product, a major improvement in the reflector lenses used in solar power
generation, had just proven out in a series of tests. But all her excitement turned to dread
as she learned of a competing technology that promised to match all the advantages of
the Kelly Solar designs. It was clear the equipment manufacturer would back only one
of the technologies and the other would become worthless.
The Solar Energy Parts Business Opportunity
The “green technology” opportunity Jessica Kelly was exploring with Kelly Solar
actually had its roots in her grandfather’s “old technology” auto parts business. Jessica
had spent many summers working in her grandfather’s manufacturing facility, and when
she graduated from college, her first job was in the accounting department of that
business. After a few years, she obtained a business degree and began working for a
small regional investment bank. She loved the bank job and advanced quickly, but she
stayed close to her manufacturing roots and often dreamed about starting her own
business. It all suddenly came together as she read an article about the use of Fresnel
reflector technology in solar-energy power plants.
Kelly was familiar with Fresnel lens technology; her grandfather’s business had
produced Fresnel lenses for the automobile industry for many years, and she had
recently helped finance a small company called Lens Tech that was producing lenses for
cars based on patents for a particular kind of high-density plastic. The Fresnel reflectors
used in solar energy conversion were similar but on a much larger physical scale. As
Kelly read the article on solar applications, she immediately realized that
lenses built with high-density plastic could be very useful in the solar power
industry. Certainly there was much additional work to be done to adapt the technology,
and Kelly would need to secure certain patents and rights from Lens Tech, but she was
convinced this application would be successful. One frantic year later, Kelly Solar had
been formed and the patents acquired; Kelly pulled together the team she needed and
began research and development in an old warehouse that, fittingly, was previously
owned by her grandfather.
One thing that had attracted Kelly to this venture and facilitated the financing was
the relative simplicity of the whole enterprise. Kelly was not interested in becoming the
long-term manufacturer and distributor of the products Kelly Solar might develop. Her
goal was to quickly prove the technology and then sell it to the industry’s single
dominant manufacturer. When Kelly started the business, the future could be effectively
described as having two states. In one state, the technology would prove out and Kelly
would secure a profitable sale of patents and processes. Given the market’s size and the
knowledge that her product, if successful, would be able to offer uniquely superior
lenses at no extra cost relative to existing technology, she estimated that she could
secure a payoff of $22 million. In the other state, the product would not prove out and,
disappointing as it might be, the company would close down with no residual value.
By May of 2010, it was clear that the technology had proven successful. But the
emergence of the competing technology created a new uncertainty. Only one of the two
technologies would be backed by the manufacturer, and it was not clear which would be
chosen. What had been a sure payoff of $22 million had turned into a gamble with equal
odds of obtaining $22 million or closing down without any payoff at all.
Possible Improvements and Financing
One consolation was the possibility of making modifications that would effectively
raise the probability that Kelly Solar’s technology would be chosen. Lens Tech’s
technology offered some advantages over the competing technology related to the ability
of the Kelly Solar lenses to be unaffected by high temperatures. Kelly had not initially
chosen to buy the patents related to this characteristic because they would have
provided no additional advantage over existing glass-based products in the market. With
the competing plastic product, this was no longer the case. Kelly was sure that buying
the patents and making the modifications would increase the odds that Kelly Solar’s
technology would be selected.
Unfortunately, the modifications would require an additional investment of a hefty
$3.20 million, much of that simply to buy the additional patents from Lens Tech. Kelly
Solar had effectively used up its start-up funds and would have to seek new capital to
develop the modifications. The initial investment in Kelly Solar had come from two
sources. Kelly’s grandfather had retired and sold his auto parts company and had given
his granddaughter the funds she used to start Kelly Solar. These funds covered only a
small portion of the needed research and development costs, and Kelly had secured the
remaining funding from Scott Barkley, a local businessman with a history of lending to
small innovation ventures. Barkley insisted that his investment be in the form of
debt financing and that he would have to approve any dividend to Kelly,
thereby guaranteeing that his claims would be paid in full before Kelly could receive
any payout. Kelly, on the other hand, reserved the right to determine exactly when the
company would sell the rights to any products, thereby ensuring that Kelly would obtain
her equity stake in a sale of technology. After substantial negotiation, Kelly and Barkley
agreed upon a note that promised Barkley a single lump sum payment of $15 million at
the start of 2011.
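Given the two-state description above, the split of expected value between Barkley’s note and Kelly’s equity can be sketched as follows, using the figures from the case: a $22 million payoff or nothing with equal odds, against a $15 million note.

```python
# Two-state payoff split between the $15M note and the equity claim.
FACE = 15.0  # promised lump-sum payment to Barkley, $ millions
# (value, probability) for each state: technology chosen vs. not chosen
states = [(22.0, 0.5), (0.0, 0.5)]

# Debt is paid first, up to its face value; equity gets the residual.
expected_debt = sum(p * min(v, FACE) for v, p in states)
expected_equity = sum(p * max(v - FACE, 0.0) for v, p in states)

print(expected_debt)    # 7.5 -> expected value of Barkley's claim
print(expected_equity)  # 3.5 -> expected value of Kelly's equity
```

Note that the two claims share the upside and downside very differently, which is part of what makes the coming conversation with Barkley about funding the modifications so delicate.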
Given that the Kelly Solar designs had already proven out in tests and given that
there was little development uncertainty related to the improvements, Kelly believed it
would not be difficult to find new investors and she could certainly approach Barkley
again. If necessary and with some effort, she could also obtain some additional money
from her family. Of course, securing the initial funding had been expensive—legal and
accounting fees associated with due diligence and drafting documents for the Barkley
loan had totaled $400,000. Kelly expected that any new agreement or renegotiation of
the existing agreement with Barkley would incur similar legal and accounting fees.
Kelly realized that the next step was to inform Barkley as to recent developments
and the possible additional investment. As disappointed as Kelly might be, she
anticipated that Barkley would be even more upset. There had always been a chance
that the firm would default on the promised debt payment, and this possibility was now
substantially greater. But she hoped the conversation would go smoothly. The
modifications were worth the investment. More important, she did not want a negative
experience to dampen her enthusiasm as she began the hard work of implementing the
modifications and pitching the product to its potential buyer. Whatever success they
might achieve still depended a great deal on her.
Page 453
J. C. Penney Company
On Friday, February 8, 2013, J. C. Penney (JCP) CEO Ron Johnson was facing the
unenviable task of turning around one of America’s oldest and most prominent retailers.
The past three years had seen variable financial results for the company (Exhibit 36.1
through Exhibit 36.4) and the cash balance had gradually declined. According to
Johnson, “as we execute our ambitious transformation plan, we are pleased with the
great strides we made to improve J. C. Penney’s cost structure, technology platforms
and the overall customer experience. We have accomplished so much in the last twelve
months. We believe the bold actions taken in 2012 will materially improve the
Company’s long-term growth and profitability.”1
EXHIBIT 36.1 | Income Statements 2010–2012 (in millions of dollars, except per-share data)
Data source: All exhibits, unless otherwise specified, include data sourced from J. C. Penney annual reports.
EXHIBIT 36.2 | Balance Sheets 2010–2012 (in millions of dollars)
EXHIBIT 36.3 | Quarterly Income Statements, 2011 and 2012 (in millions of dollars, except per-share
EXHIBIT 36.4 | Quarterly Balance Sheets 2011–2012 (in millions of dollars)
Page 454
Despite Johnson’s plans, there were rumors among Wall Street analysts that the
company was facing significant liquidity issues and perhaps the possibility of
bankruptcy. Sales and profits were continuing to decline and the dividend had
been eliminated. Just two days earlier, a Wall Street equity analyst had recommended
investors sell their JCP stock by stating, “Cash flow is weak and could become critical.
At current burn rates—and absent any further asset sales—we estimate that J. C. Penney
will be virtually out of cash by fiscal year-end 2013.” On top of that, JCP was dealing
with allegations that the company was defaulting on its 7.4% debentures, which were
due in 2037. Although JCP management had responded that the default allegations were
invalid, the rumors of the company’s liquidity problems continued to circulate and
analysts wanted assurance that JCP had a financing plan in place in the event that an
injection of cash became critical to the company’s survival.
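The analyst’s “virtually out of cash by fiscal year-end” warning rests on a simple runway calculation. The quarterly burn figure below is a hypothetical placeholder, not an actual JCP estimate.

```python
# Cash runway under a constant burn rate, the logic behind the
# "virtually out of cash by fiscal year-end 2013" warning.
cash = 930.0              # $ millions, JCP's fiscal 2012 year-end cash (per the case)
burn_per_quarter = 230.0  # $ millions per quarter, hypothetical placeholder
quarters = cash / burn_per_quarter
print(round(quarters, 1))  # about four quarters, i.e., roughly one fiscal year
```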
History of J. C. Penney
In 1902, an ambitious 26-year-old man named James Cash Penney used his $500
savings account to open “The Golden Rule Store” in a one-room wooden building in
Kemmerer, Wyoming. The store appealed to mining and farming families and was well
known for its assortment of merchandise and exceptional customer service. By 1914,
Penney had changed the company name to J. C. Penney and relocated headquarters to
New York City. Penney used private-label brands as a means to ensure a distinct
level of quality and the ability to control pricing and margins, which was not often the
case when handling national brand names.
By 1929, JCP had expanded to more than 1,000 stores and the company was listed
on the New York Stock Exchange. The week after the company went public, however,
the stock market crashed and the Great Depression began. Despite its inauspicious
beginnings as a public company, JCP was able to prosper during the ensuing years by
managing inventory levels and passing low prices on to consumers. By 1951, the
company achieved the unprecedented sales level of $1 billion due partly to having
eliminated the company’s cash-only policy and introduced its first credit card. By
1968, sales exceeded $3 billion and the company had begun to see increased
competition from small specialty stores that carried a specific range of merchandise.
Nonetheless, with the help of its $1 billion catalog service and the launch of a women’s
fashion program, sales reached $11 billion by 1978.
In an effort to stay current with continued shifts in consumer trends and to solidify its
identity, JCP launched a restructuring initiative in the early 1980s with the objective of
transforming the company from a mass merchant to a national department store. JCP
spent $1 billion to remodel its stores and announced in 1983 that it would begin to
phase out its auto-service, appliance, hardware, and fabrics merchandising in favor of
emphasizing apparel, home furnishings, and leisure lines. Despite these changes, JCP
continued to face the challenge of being perceived as a middle-ground retailer;
consumers were favoring either luxury merchants or discounters. As part of the effort to
revamp JCP’s image, the company named a new CEO, Allen Questrom, in 2000.
Questrom was known as a “retailing turnaround artist,” and had made his name leading
famous department stores—including Federated Department Stores, Macy’s, and
Barneys—out of bankruptcy. Questrom completed a second round of restructuring that
included store closings, layoffs, a conversion to a centralized merchandising system,
and large divestitures of noncore units (insurance and Eckerd Drug). The results for
2004 were promising: the company reported $584 million in net profits.
Having succeeded in his turnaround efforts, Questrom stepped down and was
replaced by Mike Ullman. Considered a branding expert, Ullman ushered in a new era
of higher-end fashion as JCP signed large exclusive deals with big brands such as Liz
Claiborne and makeup retailer Sephora. Ullman successfully grew online sales and, in
2007, instituted an aggressive expansion plan for new store openings and a goal of
expanding the net income margin to 15%. Despite these efforts, the credit crisis and
economic downturn of 2008 to 2011 provided an extra set of challenges as
growing consumer frugality allowed “off-price” competitors such as Kohl’s to further
erode JCP’s sales and margins.
Bill Ackman Takes a Stake
After each of the first two quarters of 2010, Ullman lowered sales and earnings
guidance. After a disappointing Q2 earnings report, JCP’s stock price dropped to a low
for the year of $19.82 per share. In the days following, activist investor Bill Ackman,
founder of Pershing Square Capital Management, began buying JCP shares. Ackman was well
known for his activism tactics with companies such as Wendy’s, Target, and Barnes &
Noble, wherein he had successfully pressured management into making decisions that he
believed benefited shareholder interests. For example, in 2006, Ackman managed to
convince Wendy’s management to sell its subsidiary Tim Hortons doughnut chain
through an IPO. In 2012, Ackman persuaded Burger King’s private equity owners to
postpone a planned IPO in order to begin merger negotiations with a publicly traded
shell company.
By early October 2010, Ackman’s position in JCP was close to 40 million shares,
which represented a 16.8% stake in the company and was worth approximately $900
million. By February 2011, when asked about his investment in JCP, Ackman
responded that it had “the most potential of any company in his portfolio.” Furthermore,
he believed that the stock was being valued cheaply at only five times EBITDA and that
the company’s 110 million square feet of property was “some of the best real estate in
the world.” Ackman also disclosed that he was interested in changing the operations of
the company and that he would be joining the board of directors. Later in February,
when JCP released its results for 2010, it appeared that Ackman had once again picked
a winner. Not only had earnings beaten expectations, but based on the company’s strong
cash position of $2.6 billion, Ullman announced that JCP would commence a
$900 million buyback program:
Our performance in 2010 reflects the strides we have made to deliver on our operating goals and position J.
C. Penney as a retail industry leader. This was particularly evident in the fourth quarter when the actions we
took during the year—including new growth initiatives and improvements across our merchandise
assortments, redefining the experience and driving efficiencies across our company—enabled us to
achieve sales, market share and profitability growth that surpassed our expectations, and to establish a share
buyback plan which will return value to our shareholders.
Management Changes
Despite successful 2010 and Q1 2011 results, Ullman announced he would be
stepping down as CEO. Although Ullman retained his position as executive chairman,
Ron Johnson of Apple Inc.’s retail stores was hired as the new CEO. Johnson’s success
at Apple had been well documented. The New York Times described him as the man at
Apple who had “turned the boring computer sales floor into a sleek playroom filled
with gadgets.” In an interview following the announcement, Johnson stated, “My
lifetime dream has been to lead one of the large great retailers, to reimagine what it
could be. In the U.S., the department store has a chance to regain its status as the leader
in style, the leader in excitement. It will be a period of true innovation for this
company.” Ackman conveyed confidence in the management change by saying, “Ron
Johnson is the Steve Jobs of the retail industry.”
JCP investors echoed Ackman’s optimism as the stock rallied 17% upon the
announcement of Johnson’s appointment. JCP’s board had created a compensation
package to incentivize Johnson’s performance that included a base salary of $375,000
and a performance-based bonus of $236,000. The board also awarded Johnson $50
million of restricted stock to offset the Apple stock options Johnson had forfeited when
he accepted the JCP position. Through the rest of 2011, Johnson continued to make
headlines by recruiting high-profile executives for his management team. The most
noteworthy was the CMO, Michael Francis, who had held the same position at Target.
Francis received a base salary of $1.2 million and $12 million as a “sign-on cash
bonus.” In addition to Francis, Johnson hired Daniel Walker as his chief talent officer
for $8 million and Michael Kramer as COO for $4 million in cash and $29 million in
restricted stock.
Results for Q3 2011 were disappointing: sales declined 4.8% compared to
Q3 2010 and earnings fell to a $143 million loss. As JCP entered 2012 in a
tenuous financial position, Johnson responded by announcing a “fair-and-square”
pricing strategy that eliminated all promotions in favor of “everyday, regular prices.”
Having run 590 separate promotions in 2011, Johnson argued, “We want customers to
shop on their terms, not ours. By setting our store monthly and maintaining our best
prices for an entire month, we feel confident that customers will love shopping when it
is convenient for them, rather than when it is expedient for us.”
The new pricing strategy was met with skepticism. Pricing consultant Rafi
Mohammed wrote in the Harvard Business Review that “J. C. Penney lacks the
differentiation to make this pricing strategy successful. J. C. Penney’s products are fairly
homogenous. When selling a relatively undifferentiated product, the only lever to
generate higher sales is discounts. Even worse, if competitors drop prices on
comparable products, J. C. Penney’s hands are tied—it is a sitting duck that can’t …”
By Q1 2012, JCP’s financial condition was showing signs of rapid deterioration as
sales dropped 20% relative to Q1 2011 and losses hit 75 cents per share. Johnson
announced a 10% reduction of the work force and that the dividend that had been paid
since 1987 would be discontinued. The dividend cut was a clear signal to Wall Street
that the company was experiencing significant liquidity concerns. As retail equities
analyst Brian Sozzi observed, “The dividend cut makes you lose shareholder support.
And it also makes you wonder, [does JCP] have the balance sheet to fund this massive
transformation of the business over the next two to three years?”
In June 2012, with sales declining and the share price sliding toward $20 a share,
Johnson’s key hire, CMO Francis, resigned. After only nine months at the company,
Francis left with a $12 million bonus in his pocket.
Liquidity Issues
At the end of Q2 2012, JCP’s capital structure was relatively strong. With $3.1 billion
in debt and a market capitalization of $6.5 billion, JCP had a debt-to-capital ratio of
33%, only slightly higher than the average of 30% for its competitors (Exhibit 36.5).
The company’s debt included secured and unsecured bonds and a short-term
credit facility (revolver) that was secured by JCP’s credit card receivables,
accounts receivable, and inventory. JCP had traditionally made limited use of the
revolver and had not drawn upon it during 2012. As was true for most short-term credit
facilities, JCP’s was designed primarily to finance seasonal inventories and receivables
around the holiday season. The credit limit of the revolver was $1.5 billion.
EXHIBIT 36.5 | Capital Structure of J. C. Penney and Competitors (in millions of dollars as of June
30, 2012)
By Q3 2012, the company’s diminishing cash balance had become evident
(Exhibit 36.6)—it had only $525 million in cash. Analysts began to question the
company’s long-term stability. For example, prior to the company’s 2012 annual
earnings announcement, JPMorgan Chase & Co. equity analysts wrote:
We increasingly question JCP’s ability to self-fund its transformation on [free cash flow] generation alone.
We view a draw on the revolver as increasingly likely in 1H13. Despite recent actions geared toward capital
preservation, JCP will likely require $1B of capital this year to continue its transformation at the pace initially . . .
The release of the full-year results proved to be worse than expected: JCP lost $985
million for 2012 and Q4 earnings alone were $1.71 per share below analyst
expectations. When compared to prior Q4 cash balances of $2.6 billion in 2010 and
$1.5 billion in 2011, JCP’s cash balance of $930 million for 2012 confirmed that the
analyst community had good reason for concern. An analysis of the sources and uses of
cash for 2012 revealed that JCP’s large operating losses were draining the company of
cash, and were it not for the reduction of inventories and sales of “other assets,” JCP’s
cash could have fallen critically close to zero (Exhibit 36.7).
EXHIBIT 36.6 | Cash Balances: Q1 2010–Q4 2012 (in millions of dollars)
Notes to Exhibit 36.5: Debt includes all short-term and long-term interest-bearing debt. Operating leases are included in debt as six times the reported rental expense for the year. Equity is computed on a market-value basis (i.e., market cap = stock price × shares outstanding).
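The six-times-rent convention used in Exhibit 36.5 to fold operating leases into debt can be sketched as follows; the split between interest-bearing debt and annual rent below is hypothetical, chosen only to reconcile to the $3.1 billion total used in the case.

```python
# Capitalizing operating leases at six times annual rent, per the notes
# to Exhibit 36.5. The decomposition below is hypothetical.
interest_bearing_debt = 2.8   # $ billions (hypothetical)
annual_rent = 0.05            # $ billions (hypothetical)

adjusted_debt = interest_bearing_debt + 6 * annual_rent
print(f"Adjusted debt: ${adjusted_debt:.1f} billion")  # → Adjusted debt: $3.1 billion
```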
Johnson had a variety of actions he could take to meet the demand for cash flow.
First, he could manage cash flow by stretching payables and reducing inventories. Both
of these working-capital components were significant cash flow determinants for most
large retailers. If the internally generated cash flow proved inadequate, he could turn to
JCP’s credit facility, which had $1.5 billion of available credit. By design, however, the
revolver was a short-term source of funds that the banks could choose to not renew if
they perceived that JCP was using the revolver as permanent financing. If JCP had to
seek permanent financing, Johnson could access either the debt market or the equity
market. The prospect, however, of issuing debt was no more appealing than issuing
equity. The debt would likely carry a non-investment-grade credit rating with a coupon
rate of approximately 6.0% (Exhibit 36.8). Given that the stock was currently selling at
$19.80 per share, a much larger share issuance would be required than if it had
occurred just one year earlier, when the stock was selling at $42 per share
(Exhibit 36.9).
EXHIBIT 36.7 | Sources and Uses of Cash for 2012 (in millions of dollars)
Source: Case writer estimates.
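The dilution point can be made concrete with a hypothetical $1 billion raise (roughly the amount of capital the JPMorgan analysts cite above); only the two share prices are from the case.

```python
# Shares needed to raise a fixed amount of equity at two share prices.
# The $1 billion amount is hypothetical, for illustration only.
def shares_needed(amount, price):
    return amount / price

raise_amount = 1_000_000_000
print(f"At $19.80: {shares_needed(raise_amount, 19.80) / 1e6:.1f} million shares")
print(f"At $42.00: {shares_needed(raise_amount, 42.00) / 1e6:.1f} million shares")
# → roughly 50.5 million shares versus 23.8 million shares
```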
EXHIBIT 36.8 | Credit Rating History and Debt Yields J. C. Penney Long-Term Unsecured Senior
Data source: Bloomberg.
EXHIBIT 36.9 | Stock Price Performance: January 2011–February 2013
Data source: Yahoo! Finance.
Horizon Lines, Inc.
Even a small leak will sink a great ship
—Benjamin Franklin
By April 1, 2011, the Horizon Lines 2010 annual report had been published with a
statement from newly appointed CEO Stephen Fraser, explaining that the company
expected to be in technical default on its debt. During the previous 50 years, Horizon
Lines (Horizon) had revolutionized the global economy with the invention of
containerized shipping to become the largest U.S. domestic ocean carrier. By the
beginning of 2007, however, Horizon was unprofitable, and its losses had increased
each year since (Exhibit 37.1). As negative earnings mounted, so did Horizon’s debt
burden: current liabilities had nearly quadrupled by the end of 2010 (Exhibits 37.2 and
37.3). The company had also suffered two major setbacks in the past six months: the
loss of a key strategic alliance and $65 million in criminal and civil fines.
EXHIBIT 37.1 | Consolidated Statement of Operations, December 31, 2008–10 (in thousands of U.S. dollars)
Data source: Horizon Lines annual report, 2010.
EXHIBIT 37.2 | Consolidated Balance Sheet Statements, December 31, 2009–10 (in thousands of
U.S. dollars)
*Includes capital lease.
**Common stock, $0.01 par value, 100,000 shares authorized, 34,546 shares issued and 30,746 shares outstanding on
December 26, 2010, and 34,091 shares issued and 30,291 shares outstanding on December 20, 2009.
Data source: Horizon Lines annual report, 2010.
EXHIBIT 37.3 | Consolidated Cash Flow Statements, December 31, 2008–10 (in thousands of U.S. dollars)
Data source: Horizon Lines annual report, 2010, (F-5).
Management’s reaction had been to conserve cash by cutting the common dividend
for 2010 by more than half and then eliminating it completely beginning in the first
quarter of 2011. Investors responded accordingly; the company’s stock price dropped
from $5 per share at the start of 2011 to a recent price of $0.85. Bondholders also were
concerned as the market price of the convertible notes had fallen to $0.80 on the dollar,
raising the yield on the notes to more than 20% (Exhibit 37.4).
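The 20%-plus yield can be reproduced approximately from facts given later in the case (a 4.25% semiannual coupon and an August 15, 2012 maturity) and the $0.80-on-the-dollar price. A sketch using bisection; the year fractions are simplified from an early-April 2011 valuation date.

```python
# Approximate yield-to-maturity of the 4.25% convertible notes at 80
# cents on the dollar, solved by bisection. Coupon dates and maturity
# are from the case; day counts are simplified.
def bond_price(ytm, cashflows):
    """Present value of (time_in_years, amount) cash flows at yield ytm."""
    return sum(cf / (1 + ytm) ** t for t, cf in cashflows)

coupon = 1000 * 0.0425 / 2    # $21.25 paid each Feb 15 and Aug 15
flows = [(0.37, coupon), (0.87, coupon), (1.37, coupon + 1000)]

lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if bond_price(mid, flows) > 800.0:   # model price above market price:
        lo = mid                         # the yield must be higher
    else:
        hi = mid
print(f"Approximate YTM: {mid:.1%}")     # comfortably above 20%
```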
Price Fixing in Puerto Rico
In October 2008, three Horizon executives and two executives from its competitor
Sea Star Line pled guilty to crimes related to price fixing. A U.S. Department of Justice
investigation revealed that for nearly six years, Horizon and Sea Star Line had colluded
to fix prices, rig bids, and allocate customers.
EXHIBIT 37.4 | Horizon Lines (HRZ) Stock Price and Convertible Notes Price
Data sources: Yahoo! Finance, NYSE, and author estimates.
All five executives were sentenced to
prison time, and Horizon began a long period of litigation that culminated in
February 2011 when Horizon pleaded guilty to one felony count of violating the
Sherman Antitrust Act. The court imposed a fine of $45 million to be paid out over the
next five years. On top of the criminal penalties, nearly 60 civil class-action lawsuits
had also been filed against Horizon, which prompted the company to report a $20
million expense for legal settlements in 2009. In 2011, Horizon would begin payments
on the criminal fine and expected to close out the civil claims with a payment of $11.8 million.
As a result of the legal difficulties, Horizon’s board of directors announced that
Chairman, President, and CEO Chuck Raymond would be leaving the company, and
Stephen Fraser, a board member, would assume the roles of president and CEO.
The Jones Act
Consistent with most sectors in the transportation industry, shipping was greatly affected
by government regulations. For almost a century, the U.S. domestic shipping market had
been regulated by Section 27 of the Merchant Marine Act of 1920, more commonly
known as the Jones Act. The federal statute applied to maritime commerce traveling in
U.S. waters between ports located on the U.S. mainland and in Alaska, Hawaii, and
Puerto Rico. The law’s purpose was to support the U.S. maritime industry by requiring
that all goods transported by water between U.S. ports be carried on ships constructed
and flagged in the United States.
In the last few decades, however, the economic conditions of the industry, in
particular high labor rates in the United States, caused Jones Act vessels to have higher
construction, maintenance, and operation costs than foreign vessels. This prompted
critics to claim that the regulations were outdated and protectionist and that they
hindered free trade and priced U.S. shipbuilders out of the international market. But the
law had continued to receive political support from every U.S. president since
Woodrow Wilson, who had originally signed it into law. In reference to the current
political climate, Horizon’s 2010 annual report stated: “The ongoing war on terrorism
has further solidified political support for the Jones Act, as a vital and dedicated U.S.
merchant marine cornerstone for strong homeland defense, as well as a critical source
of trained U.S. mariners for wartime support.”
Despite the extra costs associated with the Jones Act, it also created an attractive
competitive landscape for existing container ship operators in the market. Although
container shipping between ports in the contiguous United States was no longer
competitive with inland trucking, Jones Act carriers had been able to maintain an
operating advantage on trade routes between the U.S. mainland, Alaska, Hawaii, and
Puerto Rico. As of 2008, only 27 vessels, 19 of which were built before 1985, were
qualified by the Jones Act. The high capital investments and long delivery lead times
associated with building a new containership created high barriers for new
entrants. These barriers also caused the domestic market to be less fragmented
and less vulnerable to overcapacity.
The Maersk Partnership
A major drawback of the Jones Act market was that very few goods were shipped back
to the continental United States, leading to a severe imbalance in container utilization.
This was particularly significant for Hawaii and Guam, because ships returning to the
mainland had to travel a long distance with mostly empty containers. To alleviate this
problem, Horizon entered into a strategic alliance with A.P. Moller-Maersk in the
1990s to share container space along the Hawaii and Guam lane. Under the terms of the
agreement, Horizon used its vessels to ship a portion of its cargo in Maersk-owned
containers on westbound routes. The cargo would be unloaded in Hawaii or Guam, and
the empty containers would then be shipped to ports in China and Taiwan instead of
directly back to the United States. After the vessels arrived in Asia, Maersk replaced
the empty containers with loaded containers for Horizon to carry back to the West Coast
of the United States.
This alliance was so beneficial that in 2006, Horizon entered into a long-term lease
agreement with Ship Finance International Limited to charter five container vessels not
qualified by the Jones Act to travel on its Asia-Pacific route. Horizon was obligated to
charter each ship for 12 years from the date of delivery at an annual rate of $6.4 million
per vessel. The economic conditions changed with the global recession of 2008,
however, causing overcapacity in the international shipping market, which led to
container freight rates falling significantly. Horizon’s profitability also fell, due partly to
top-line reductions but also to escalating fuel costs. Although Horizon was locked into
its long-term lease until 2018–19, Maersk was only committed until December 2010, at
which time the company elected to exit the alliance.
Shortly after termination of the partnership, Horizon attempted to cover its lease
obligations by starting its own trans-Pacific shipping service. Unfortunately, by March
2011, freight rates continued to decline, and fuel costs continued to increase.
Projections for the remainder of the year showed that eastbound freight rates would
drop 35%, while the average price of fuel would increase 40%, which put the Pacific
route into a significant operating-loss position.
Pushed by mounting operating losses, Horizon management decided to save money
by shutting down its unprofitable routes in the Pacific and holding all five non-Jones Act
vessels pier-side in a reduced operational state. Although Horizon would continue to
incur leasing costs for those vessels for another eight or nine years, it eliminated most of
the operating costs associated with the Pacific routes.
The Debt Structure
In 2007, when the future of the shipping business seemed bright and Horizon’s stock
was trading at an all-time high, the company completed a major round of refinancing to
consolidate its debt into two sources. The first was a senior secured credit agreement
that used all Horizon-owned assets as collateral. The senior credit facility included a
$125 million term loan and a $100 million five-year revolving credit facility
provided by a lending group of major banks. The second source was $330
million of unsecured, 4.25% convertible senior notes which, like the term loan, matured
in 2012. The notes were primarily held by three large mutual fund companies: Legg
Mason, Pioneer Investment Management, and Angelo Gordon & Co. Exhibit 37.5
provides the details of Horizon’s debt structure.
EXHIBIT 37.5 | Debt Structure* (in thousands of U.S. dollars)
*Both the senior credit facility and the 4.25% convertible notes carried covenants that specified a maximum leverage ratio
and a minimum interest coverage ratio. The interest coverage ratio was defined as Adjusted EBITDA/Cash Interest, and
the leverage ratio was computed as Senior Secured Debt/Adjusted EBITDA (annualized). Between the credit facility and
the convertible notes, the tightest covenant requirements were a minimum interest coverage ratio of 2.75× for each
quarter of 2011 and a maximum leverage ratio of 3.25× for each quarter of 2011. For purposes of the covenants, EBITDA
was adjusted to report legal settlements on a cash basis.
**The senior credit facility is provided by a lending group of major banks and is composed of the term loan and the
revolving credit facility and is secured by substantially all of the assets of the company. Interest payments on the revolver
are variable and are based on the three-month London Inter-Bank Offered Rate (LIBOR) plus 3.25%. Through the use of
an interest rate swap, the term loan bears interest at a fixed rate of 4.52% per annum. The weighted average interest rate
for the facility was 4.6% at the end of 2010. Remaining quarterly principal payments for the term loan are specified as
$4.7 million through September 30, 2011, and $18.8 million until final maturity on August 8, 2012.
***The notes are unsecured and mature on August 15, 2012. The aggregate principal amount of $330 million for the
notes is recorded net of original issue discount. Each $1,000 of principal is convertible into 26.9339 shares of Horizon’s
common stock, which is the equivalent of $37.13 per share. The notes were primarily held by three large mutual fund
companies: Legg Mason, Pioneer Investment Management, and Angelo Gordon & Co.
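The conversion terms in the note above imply the following arithmetic; with the stock recently at $0.85, the conversion option was far out of the money. A sketch using only figures from the case:

```python
# Conversion arithmetic for the convertible notes: 26.9339 shares per
# $1,000 of principal (from the case), valued at the recent $0.85 price.
conversion_ratio = 26.9339
conversion_price = 1000 / conversion_ratio
conversion_value = conversion_ratio * 0.85

print(f"Conversion price: ${conversion_price:.2f} per share")        # → $37.13
print(f"Conversion value: ${conversion_value:.2f} per $1,000 note")  # → $22.89
```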
Both the senior credit facility and the 4.25% convertible notes carried covenants
that specified a maximum leverage ratio and a minimum interest coverage ratio. By the
time 2010 results were released, the company’s poor earnings performance plus its
payments for the criminal fine and the civil settlements made it apparent that the
company would be unlikely to satisfy these covenants during 2011. Tripping a debt
covenant would put the company in technical default, giving debt holders the right to
call the loan (i.e., demand immediate and full payment of the principal outstanding).
Unless Horizon could negotiate a change to the covenants to remove the default, it
would almost certainly have to seek the protection of the bankruptcy courts because it
would be impossible to raise new debt or equity under such dire circumstances.
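A covenant test of the kind described can be sketched as below; the quarterly inputs are hypothetical, but the 2.75× minimum coverage and 3.25× maximum leverage thresholds are from the case.

```python
# Covenant compliance per the case's definitions:
#   interest coverage = Adjusted EBITDA / Cash Interest  (min 2.75x)
#   leverage = Senior Secured Debt / Adjusted EBITDA     (max 3.25x)
def covenants_met(adj_ebitda, cash_interest, senior_secured_debt,
                  min_coverage=2.75, max_leverage=3.25):
    coverage = adj_ebitda / cash_interest
    leverage = senior_secured_debt / adj_ebitda
    return coverage >= min_coverage and leverage <= max_leverage

# Hypothetical annualized figures ($ thousands):
print(covenants_met(adj_ebitda=60_000, cash_interest=25_000,
                    senior_secured_debt=220_000))  # → False (coverage only 2.4x)
```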
Although Horizon was not expected to miss an interest payment the following
quarter, future interest and principal payments would be accelerating and would place
an increasing strain on Horizon’s ability to meet its cash obligations, regardless of
whether the company satisfied the debt covenants. For example, the $125 million term
loan required Horizon to make quarterly principal payments of $4.7 million through
September 2011, at which point the principal payments escalated to $18.4 million until
August 2012 when the loan matured. Interest payments on the senior credit facility were
due semiannually (February and August) and averaged about 4.6%. The convertible
notes carried a low coupon rate of 4.25%, but the $330 million principal would also be
due in August 2012. Exhibit 37.6 provides management’s report of interest, principal,
and other contractual obligations for 2011 and beyond. Exhibit 37.7 shows current
interest rates for government and corporate debt obligations.
Data sources: Horizon Lines 10-K filing, 2010, and author estimates.
EXHIBIT 37.6 | Contractual Obligations, 2011 and Beyond (in thousands of U.S. dollars)
*Horizon has announced that it expects a covenant default on its debt. The company has until May 21, 2011, to obtain a
waiver from the debt holders, which if not received could result in the holders’ demanding acceleration of all principal
and interest payments. In addition, due to cross-default provisions, such a default could lead to the acceleration of the
maturity of all the company’s scheduled principal and interest payments.
**Interest payments on the term loan portion of the senior credit facility are fixed via an interest rate swap at 4.52%.
Interest payments on the revolver portion of the senior credit facility are variable and are computed as LIBOR plus 3.25%.
The weighted average interest rate for the facility was 4.6% at the end of 2010. Interest on the 4.25% convertible senior
notes is fixed and is paid semiannually on February 15 and August 15 of each year, until maturity on August 15, 2012.
***Legal settlement for 2011 consists of a $1 million charge for the $45 million criminal fines and $11.767 million as
final settlement of the civil lawsuits. The civil settlement was originally recorded as $20 million in 2009, of which $5
million was paid immediately, and the remainder was eventually settled as $11.767 million.
Data sources: Horizon Lines 10-K filing, 2010, and author estimates.
EXHIBIT 37.7 | Interest Rates for March 31, 2011: U.S. Treasury Yields
Data source: Yahoo! Finance.
Restructuring Options
On the operational side, in addition to shutting down the Pacific routes, Horizon had
made attempts to reduce headcount, but this had had little impact, since much of the
work force was protected by unions. The next step would be to divest underperforming
business units or sell the entire business to a strategic buyer. Given the high barriers to
entry for the domestic market and the general view that container traffic was relatively
stable, finding a buyer was feasible, but finding a buyer that would pay a reasonable
price would be difficult to execute in the near term. The net effect was that Horizon was
expecting poor performance for 2011 as operating costs were rising, and shutting down
the Pacific routes would add to those expenses for 2011. Longer term, the
reduced operations were expected to decrease Horizon’s revenues for 2012,
but they would also allow the company to show positive EBIT starting in 2013
(Exhibit 37.8).
EXHIBIT 37.8 | Operating Cash Flow Projections for 2011–15 (in thousands of U.S. dollars)
*Revenues for 2012 and beyond reflect the shutdown of unprofitable routes in the Pacific.
**Cash flow projections are computed using an “adjusted” EBITDA for which legal settlements are recorded on an
expected cash basis. In contrast, GAAP requires EBIT to be computed based on settlement charges computed as the
present value of the future payments and reported in the year of the settlement. Specifically, Horizon reported $31.77
million as legal settlements for 2010, which represented the present value of the $45 million to be paid over the
ensuing five years. Legal settlement for 2011 consists of a $1 million charge for the $45 million criminal fines and
$11.767 million as final settlement of the civil lawsuits. Debt covenants use adjusted EBITDA for the leverage and
interest coverage ratios.
Source: Author estimates.
Realistically, the only viable alternative to avoid a default in 2011 was for Horizon
to restructure its capital. For a financial restructuring, there were three basic
options available to Stephen Fraser and his management team.
Option 1: Issue new equity
A straightforward way to inject capital into the business would be to issue new shares
of common stock. Horizon could use the funds from the new stock offering to pay down
its debt obligation and give the business additional capital to grow the Jones Act side
of the business. This was relatively easy and required no negotiations with existing debt
holders.
Option 2: File for Chapter 11
As a U.S. business, Horizon had the option of filing for protection under Chapter 11 of
the U.S. Bankruptcy Code. Fraser could file immediately and rely on the bankruptcy
judge to oversee the reorganization. Normally, the judge would request a plan of
reorganization (POR) from management that specified how the company needed to be
changed in order to emerge from Chapter 11 as an economically viable entity. The
primary purpose of the POR was to present a blueprint of how to restructure the balance
sheet to a manageable level of interest and principal payments. This meant that many of
the debt claimants were asked to accept new securities that summed to less than the face
value of their claim.
The amount of the haircut would depend upon the seniority of the claim. For
example, a senior secured lender might receive full cash payment for its claim, whereas
a junior unsecured lender might receive a combination of new debt and equity
representing $0.40 on the dollar of the face value of the original debt. The judge would
not allow senior claimants to take a larger haircut than any junior claimant, nor would
the judge entertain a POR that was unlikely to receive the voting approval of all the
impaired claimants. If the judge thought a POR was fair to all claimants and provided a
viable capital structure for the company going forward, he or she could overrule a
dissenting class of claimants in order to force a solution. In this regard, the judge played
the role of mediator in a negotiation process that often involved many revisions to the
POR before being accepted by all parties, or the judge exercised the right to cram down
the plan in order to enact it.
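The seniority logic described above can be sketched as a simple absolute-priority waterfall; the claim amounts and the recoverable value below are hypothetical.

```python
# Stylized absolute-priority waterfall: available value is paid out in
# order of seniority, so haircuts fall on junior claims first.
def waterfall(value, claims):
    """claims: (name, face_amount) pairs in order of seniority.
    Returns the recovery fraction for each claim."""
    recoveries = {}
    for name, face in claims:
        paid = min(face, value)
        recoveries[name] = paid / face
        value -= paid
    return recoveries

claims = [("senior secured", 200.0), ("unsecured notes", 330.0)]
print(waterfall(430.0, claims))
# senior recovers 100%; unsecured notes recover about 70 cents on the dollar
```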
A Chapter 11 bankruptcy was designed to give a failing company the best possible
chance to restructure and continue operating as a viable enterprise. The courts served
the purpose of intervening with bill collectors to protect the company from being forced
to liquidate in order to make an interest or principal payment. The theory was that it was
better to have an orderly reorganization within the court system that resulted in a viable
company that could continue to pay its suppliers and employees than to allow
the company to disintegrate in the chaos of a feeding frenzy of its creditors.
Companies continued to operate normally while in Chapter 11, so most customers were
not aware of the reorganization process. If the company needed additional capital to
grow the business, it could simply increase the size of the new debt and equity offerings
as part of the POR.
Option 3: Restructure the debt directly
This approach had the same objective as using Chapter 11. Negotiating a deal directly
with the debt holders, however, had the advantage of being faster, and it avoided court
costs. The typical Chapter 11 process took months or years to resolve and resulted in
large legal fees for both company and claimants. To be successful, Horizon would need
to exchange its existing debt for a combination of new notes and common shares. The
swap would give the existing debt holders a reduced claim on the company, but it would
be a claim that was much more likely to be serviced. At the same time, Horizon could
ask creditors to accept a new set of covenants and a longer maturity to alleviate the
short-term cash-flow crunch it currently faced. The net effect would be to lengthen the
maturity of the outstanding debt plus reduce the overall amount of debt outstanding and
therefore reduce the level of interest payments.
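The cash-interest effect of such a swap can be sketched with hypothetical post-exchange terms; only the 4.25% coupon and $330 million principal of the existing notes are from the case.

```python
# Debt-for-notes-and-shares exchange: even at a higher coupon, a smaller
# principal can cut cash interest. Post-exchange terms are hypothetical.
old_principal, old_coupon = 330.0, 0.0425   # $ millions; from the case
new_principal, new_coupon = 200.0, 0.06     # hypothetical new notes

old_interest = old_principal * old_coupon   # ≈ $14.0 million per year
new_interest = new_principal * new_coupon   # $12.0 million per year
print(f"Annual cash interest falls by ${old_interest - new_interest:.1f} million")
```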
As part of the restructuring, Horizon also needed to receive new capital to pay off
the senior credit facility and help grow the Jones Act business. The new capital could
come from issuing shares to the public in addition to the shares distributed to the
existing debt holders to satisfy their claims on the company. Horizon could also raise
the capital by issuing new debt. Regardless of whether the new capital was debt or
equity, it would be expensive and reflect the high risk associated with Horizon. For
example, given the low stock price, it would require a large number of new shares to
raise a meaningful amount of equity money. Also for such a risky situation, any new
lender would require collateral for the debt plus an interest rate in the range of 10% to . . .
Restructuring had several disadvantages. First, it would be unlikely that Horizon
could successfully include any claimants other than the senior creditors. Like most
companies with strong unions, Horizon offered a defined-benefit pension plan to its
employees, and that plan was underfunded. A Chapter 11 proceeding could result in a
reduction of the benefits paid to employees, which would reduce the company’s own
mandatory contributions to the plan. But such changes were very difficult to enact
outside of the court system, so if Horizon opted to restructure its debt directly, it would
need to focus solely on the claims of the senior credit facility and the convertible bonds.
A second disadvantage was that a voluntary restructuring created a risk for the
claimants. In particular, if Horizon were to declare bankruptcy shortly after the
restructuring, the Chapter 11 proceedings would start from the newly restructured
claims. Therefore, if debt holders had agreed to accept equity in lieu of all or part of
their original debt claim, the courts would view the reduced debt claim as the relevant
claim for the Chapter 11 proceedings. Once a claimant voluntarily agreed to a reduction
of its original claim, that claim was gone forever.
Stephen Fraser was not in an enviable position. Regardless of the option he chose,
the company’s success was not guaranteed. Moreover, with the covenant default
approaching, it was time to “right the ship,” but a poor choice by Fraser at this point
could take his company down and his career along with it.
7 Analysis of Financing Tactics: Leases,
Options, and Foreign Currency
Baker Adhesives
In early June 2006, Doug Baker met with his sales manager Alissa Moreno to
discuss the results of a recent foray into international markets. This was new territory
for Baker Adhesives, a small company manufacturing specialty adhesives. Until a recent
sale to Novo, a Brazilian toy manufacturer, all of Baker Adhesives’ sales had been to
companies not far from its Newark, New Jersey, manufacturing facility. As U.S.
manufacturing continued to migrate overseas, however, Baker would be under intense
pressure to find new markets, which would inevitably lead to international sales.
Doug Baker was looking forward to this meeting. The recent sale to Novo, while
modest in size at 1,210 gallons, had been a significant financial boost to Baker
Adhesives. The order had used up some raw-materials inventory that Baker had
considered reselling at a significant loss a few months before the Novo order.
Furthermore, the company had been running well under capacity and the order was
easily accommodated within the production schedule. The purpose of the meeting was
to finalize details on a new order from Novo that was to be 50% larger than the original
order. Also, payment for the earlier Novo order had just been received and Baker was
looking forward to paying down some of the balance on the firm’s line of credit.
As Baker sat down with Moreno, he could tell immediately that he was in for bad
news. It came quickly. Moreno pointed out that since the Novo order was denominated
in Brazilian reais (BRL), the payment from Novo had to be converted into U.S. dollars
(USD) at the current exchange rate. Given exchange-rate changes since the time Baker
Adhesives and Novo had agreed on a per-gallon price, the value of the payment was
substantially lower than anticipated. More disappointing was the fact that Novo was
unwilling to consider a change in the per-gallon price for the follow-on order.
Translated into dollars, therefore, the new order would not be as profitable as the
original order had initially appeared. In fact, given further anticipated changes
in exchange rates the new order would not even be as profitable as the original
order had turned out to be!
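The mechanics of the shortfall can be illustrated as follows; the BRL invoice total is computed from the case's price and volume, but both exchange rates are hypothetical stand-ins for the Exhibit 38.2 data.

```python
# USD value of the BRL-denominated Novo receivable at two exchange
# rates. Both USD/BRL rates below are hypothetical.
invoice_brl = 90.15 * 1210        # BRL 109,081.50, per the case
rate_at_pricing = 0.4636          # USD per BRL when the price was set (hypothetical)
rate_at_receipt = 0.4368          # USD per BRL when payment arrived (hypothetical)

expected_usd = invoice_brl * rate_at_pricing
received_usd = invoice_brl * rate_at_receipt
print(f"Shortfall: ${expected_usd - received_usd:,.2f}")
```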
Adhesives Market
The market for adhesives was dominated by a few large firms that provided the vast
bulk of adhesives in the United States and in global markets. The adhesives giants had
international manufacturing and sourcing capabilities. Margins on most adhesives were
quite slim since competition was fierce. In response, successful firms had developed
ever more efficient production systems which, to a great degree, relied on economies of
scale.
The focus on scale economies had left a number of specialty markets open for small
and technically savvy firms. The key to success in the specialty market was not the
efficient manufacture of large quantities, but figuring out how to feasibly and
economically produce relatively small batches with distinct properties. In this market, a
good chemist and a flexible production system were key drivers of success. Baker
Adhesives had both. The business was started by Doug Baker’s father, a brilliant
chemist who left a big company to focus on the more interesting, if less marketable,
products that eventually became the staple of Baker Adhesives’ product line. While
Baker’s father had retired some years ago, he had attracted a number of capable new
employees, and the company was still an acknowledged leader in the specialty markets.
The production facilities, though old, were readily adaptable and had been well
maintained.
Until just a few years earlier, Baker Adhesives had done well financially. While
growth in sales had never been a strong point, margins were generally high and sales
levels steady. The company had never employed long-term debt and still did not do so.
The firm had a line of credit from a local bank, which had always provided sufficient
funds to cover short-term needs. Baker Adhesives presently owed about USD180,000
on the credit line. Baker had an excellent relationship with the bank, which had been
with the company from the beginning.
Novo Orders
The original order from Novo was for an adhesive Novo was using in the production of
a new line of toys for its Brazilian market. The toys needed to be waterproof and the
adhesive, therefore, needed very specific properties. Through a mutual friend, Moreno
had been introduced to Novo’s purchasing agent. Working with Doug Baker, she had
then negotiated the original order in February (the basis for the pricing of that original
order is shown in Exhibit 38.1). Novo had agreed to pay shipping costs, so Baker
Adhesives simply had to deliver the adhesive in 55-gallon drums to a nearby shipping
facility.
EXHIBIT 38.1 | Novo Price Calculation on Initial Order (figures in U.S. dollars unless otherwise noted)
The exchange rate used in the calculation was obtained from the Wall Street Journal.
Overhead was applied based on labor hours.
The proposed new order was similar to the last one. As before, Novo agreed to
make payment 30 days after receipt of the adhesives at the shipping facility. Baker
anticipated a five-week manufacturing cycle once all the raw materials were in place.
All materials would be secured within two weeks. Allowing for some
flexibility, Moreno believed payment would be received about three months
from order placement; that was about how long the original order took. For this reason,
Moreno expected receipt of payment on the new order, assuming it was agreed upon
immediately, somewhere around September 5, 2006.
Exchange Risks
With her newfound awareness of exchange-rate risks, Moreno had gathered additional
information on exchange-rate markets before the meeting with Doug Baker. The history
of the dollar-to-real exchange rate is shown in Exhibit 38.2, which also provided the most recent information on money markets and an estimate of the expected future (September 5, 2006) spot rate from a forecasting service.
Notes to Exhibit 38.1: The raw materials expense was based on the original cost (book value) of the materials. The rounded price of BRL90.15 per gallon was used in negotiations with Novo. Thus, for the final order, Novo was billed a total of BRL90.15 × 1,210 = BRL109,081.50.
Source: Created by case writer.
EXHIBIT 38.2 | Exchange Rate and Money-Market Information
Moreno had discussed her concerns about exchange-rate changes with the bank
when she had arranged for conversion of the original Novo payment. The bank, helpful
as always, had described two ways in which Baker could mitigate the exchange risk
from any new order: hedge in the forward market or hedge in the money markets.
Hedge in the forward market
Banks would often provide their clients with guaranteed exchange rates for the future
exchange of currencies (forward rates). These contracts specified a date, an amount to
be exchanged, and a rate. Any bank fee would be built into the rate. By securing a
forward rate for the date of a foreign-currency-denominated cash flow, a firm could
eliminate any risk due to currency fluctuations. In this case, the anticipated future inflow
of reais from the sale to Novo could be converted at a rate that would be known today.
Hedge in the money markets
Rather than eliminate exchange risk through a contracted future exchange rate, a firm
could make any currency exchanges at the known current spot rate. To do this, of course,
the firm needed to convert future expected cash flows into current cash flows. This was
done on the money market by borrowing “today” in a foreign currency against an
expected future inflow or making a deposit “today” in a foreign account so as to be able
to meet a future outflow. The amount to be borrowed or deposited would depend on the
interest rates in the foreign currency because a firm would not wish to transfer more or
less than what would be needed. In this case, Baker Adhesives would borrow in reais
against the future inflow from Novo. The amount the company would borrow
would be an amount such that the Novo receipt would exactly cover both
principal and interest on the borrowing.
After some discussion and negotiation with the bank and bank affiliates, Moreno
was able to secure the following agreements: Baker Adhesives’ bank had agreed to
offer a forward contract for September 5, 2006, at an exchange rate of 0.4227
USD/BRL. An affiliate of the bank, located in Brazil and familiar with Novo, was
willing to provide Baker with a short-term real loan, secured by the Novo receivable, at
26%. Moreno was initially shocked at this rate, which was more than three times the
8.52% rate on Baker’s domestic line of credit; however, the bank described Brazil’s
historically high inflation and the recent attempts by the government to control inflation
with high interest rates. The rate they had secured was typical of the market at the time.
The Meeting
It took Doug Baker some time to get over his disappointment. If international sales were
the key to the future of Baker Adhesives, however, Baker realized he had already
learned some important lessons. He vowed to put those lessons to good use as he and
Moreno turned their attention to the new Novo order.
Vale SA
Headquartered in Brazil but with a global presence, Vale SA was the world’s largest
producer of iron ore and second-largest producer of nickel. The company had continued
growing rapidly despite the global economic downturn that had begun in 2007 and, by
April 2010, was in need of (U.S. dollars) USD1.0 billion of additional capital. The planned bond issue was intended to support organic growth, particularly with respect to investments in its fertilizer business. Historically, Vale issued bonds in U.S. dollars, but the
conditions in global capital markets suggested that the firm should consider borrowing
in other currencies. In particular, the company was considering an eight-year bond that
could be priced close to par at a coupon rate of 4.375% in euros, 5.475% in British
pounds, or 5.240% in U.S. dollars.
Early 2010 was a good time for companies to issue debt if they were able. Central
banks across the globe had been keeping interest rates at record lows for an extended
period to support economic recovery, and this, in turn, would lower the real cost of
borrowing. Other market conditions favored Vale and suggested an issue denominated in
euros or British pounds to take advantage of interest in Vale credit from investors in
Europe and Great Britain, respectively. First, companies in emerging markets were
viewed favorably since their economies had recovered more quickly than developed
economies, and investors therefore viewed them as more financially sound. This was
particularly true of Latin America. Second, the market had little interest in issues by
European or British companies. In fact, investors had abandoned European assets in
general due to concerns about the European economy, and this had resulted in a
depreciation of the euro against major currencies. Similarly, a high level of UK debt relative to GDP, combined with political uncertainty around the parliamentary elections, had depressed interest in British assets.
Given the high cost of local-currency borrowing in Brazil and the fact that many of
the commodities it sold were priced in U.S. dollars, Vale had traditionally looked to
U.S. dollar debt markets. Certainly, going global with its financing was the right thing
to do. Still, at the time, it also seemed that markets other than the United States
might look attractive.
Vale SA
Vale was founded by the Brazilian government in 1942 and privatized in 1997. A focus
on mining became Vale’s prevailing strategy after its privatization. The firm sold its
steel and wood pulp businesses between 2000 and 2007. Vale acquired several iron ore mining companies during that period and gained control of 85% of Brazil's 300
million tons of annual iron ore production by 2007. The company also invested in the
iron transportation infrastructure: Vale owned three major railway concessions, 800
locomotives, and more than 35,000 freight cars and either owned or operated six ports.
Much of the Vale mining business was concentrated on iron and in Brazil. To
mitigate the impact of iron ore price changes on its revenue and net income and to
diversify globally, Vale launched a diversification program in 2001. The share of
nonferrous metals, including aluminum, alumina, copper, cobalt, gold, and nickel,
increased as a fraction of Vale’s revenue from 7% in 2000 to 30.7% in 2009. Global
acquisitions included Canico Resource Corp. (a Canadian nickel company), AMCI
Holdings Inc. (an Australian coal-mining company), and Inco Limited (Canada’s
second-largest mining company). The acquisition of Inco for USD18.9 billion was the
largest acquisition ever made by a Brazilian company. By 2009, over half of Vale’s
revenue (56.9%) came from Asia; Brazil, the Americas excluding Brazil, and Europe
accounted for 15.3%, 8.7%, and 16.9% of the revenue, respectively.
The firm experienced strong growth from 2005 to 2009. Revenue increased at a
compound annual growth rate (CAGR) of 17.5%, and earnings per share rose at a
CAGR of 7%. For the same period, capital spending averaged 360% of depreciation,
and dividends increased at a CAGR of 18.9%. Vale’s consolidated financial results are
presented in Exhibit 39.1 and Exhibit 39.2.
EXHIBIT 39.1 | Income Statement (in millions of U.S. dollars)
Data source: Vale 20-F filings, 2000–09.
EXHIBIT 39.2 | Balance Sheet Statement (in millions of U.S. dollars)
Global Markets
The financial crisis that sparked the global recession starting in 2007 had only slightly
abated by the start of 2010. The weak global economy had forced central banks to
loosen their monetary policy and governments to use stimulus plans to prevent further
slowdown in major economies around the world. As a result, several European
countries were dealing with huge fiscal deficits and high levels of debt relative to GDP.
The fiscal situation in emerging markets was exactly the opposite. Most emerging
markets were running trade surpluses and had average debt-to-GDP ratios of 30%.
Those markets were expected to grow faster than industrial countries’ markets should
the global economy recover. In particular, Latin American economies were
well positioned for growth, and some sovereign debts traded at rates favorable to highly
rated European corporates.
Emerging economies were also appealing to investors, since they provided geographic diversification and offered high returns in a very low interest-rate environment. In fact, investors were selling highly rated sovereign debt and buying
riskier emerging-market corporate bonds. The fact that emerging-market governments
didn’t need excessive external financing generated substantial demand for high-quality
corporate debt. The overall situation created a favorable environment for large emerging-market companies to tap global bond markets.
Major central banks had slashed short-term interest rates to near zero in response to
the global recession. With treasury interest rates at historically low levels, investors
looked increasingly to riskier assets for higher returns. As a result, corporate bonds
looked quite attractive. In 2009, large European companies had raised substantial funds,
largely to boost cash reserves. Figure 39.1 shows quarterly government bond yields by
government, and Figure 39.2 shows credit spreads for corporate issues (BBB-rated
issuers) by currency. Exhibit 39.3 provides interest rates and exchange rates by maturity
for the U.S. dollar, euro, and British pound, as well as data on spot exchange rates.
FIGURE 39.1 | Quarterly government bond yields.
Data source: Bloomberg.
Investors across the globe were big buyers of emerging-market corporate bonds. Emerging-market issuers were glad to access these global capital flows given the high local-currency borrowing rates.
FIGURE 39.2 | Quarterly credit spreads for corporate issues (BBB-rated issuers) by currency.
Data source: Bloomberg.
EXHIBIT 39.3 | Interest Rates and Exchange Rates as of April 30, 2010
*Interest rates are zero-curve fixed-to-floating swap rates appropriate for pricing currency forward rates and indicative of prevailing interbank market rates for the given maturities.
Data sources: U.S. Federal Reserve Board and Datastream.
For example, nominal and real interest rates in Brazil
were still higher than those of comparably rated countries. Even though credible fiscal
and monetary policy in Brazil suggested that the gap between Brazilian interest rates
and rates in developed countries would narrow, observers expected an aggressive
tightening following a robust recovery and were concerned about inflation due
to increases in commodity prices. Figure 39.3 shows quarterly historic
inflation rates for the Brazilian real, euro, U.S. dollar, and British pound. The general
consensus was that over the next five years, inflation rates would remain high at about
8.0% for the Brazilian real, would drop to about 3.1% for the British pound, and would
rise slightly to about 2.8% and 2.1% for the U.S. dollar and euro, respectively.
The spike in inflation for the British pound in the last quarter had raised
some concerns about possible changes in the value of the pound relative to the currencies of other developed countries. Further fueling concern was the low real rate of return on
10-year government securities given the recent inflation figure. While some argued that
FIGURE 39.3 | Quarterly inflation rates by currency.
Data source: Bloomberg.
the inflation number was anomalous, others pointed to structural issues in the British
economy and concerns that the Bank of England would not aggressively pursue its
inflation targets.
Over the previous eight years, most major currencies had appreciated against the
dollar. The financial crisis in 2008 reversed the trend as a flight to safety caused
significant appreciation of the U.S. dollar. The dollar's depreciation resumed in 2009, but the euro had recently come under pressure from sovereign-debt concerns, and capital had flowed to emerging markets. (Figure 39.4 shows monthly exchange rates.)
Vale Capital Structure
Vale was a disciplined borrower. The firm had maintained an average debt-to-equity
ratio of 47.6% from 2007 to 2009. Given that sufficient debt capital was often difficult
to obtain in Brazilian reais, that the real rates on those borrowings were relatively high
when the debt could be obtained, and that most of Vale’s revenues were denominated in
U.S. dollars, the company had traditionally issued bonds in U.S. dollars. Detailed
information on Vale's outstanding debt is provided in Exhibit 39.4.
FIGURE 39.4 | Monthly exchange rates.
Data source: Bloomberg.
Whereas the global demand for emerging-market corporate debt certainly suggested a Vale debt issue would
be well received and priced at an attractive yield, the complicated state of global
capital markets made the choice of currency difficult. It was clear the company should
consider other alternatives along with the U.S. dollar.
At this point, the firm needed to make a choice and proceed. There was
concern that many corporations would be issuing securities in the next few
years to roll over debt, and Vale wanted to get its issue done before this “maturity wall”
hit the markets. The U.S. dollar, euro, and British pound issues identified above
represented typical alternatives available to Vale and reflected the most likely
conditions the firm would face in each market. It was anticipated that these issues would
carry a BBB+ rating. The size of each issue would generate close to (Brazilian reais)
BRL1.8 billion, equivalent to USD1.0 billion, (euros) EUR750 million, or (British
pounds) GBP700 million. Of course, any loan would be evaluated relative to a U.S.
dollar loan. By way of comparison, Exhibit 39.5 provides current information on outstanding issues by comparable companies in each of the three currencies.
EXHIBIT 39.4 | Vale’s Debt Outstanding as of December 31, 2009
Data source: Capital IQ.
EXHIBIT 39.5 | Select Outstanding Debt Issues of Comparable Companies
Data sources: Datastream and case writer estimates.
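The currency choice can be roughed out with relative purchasing power parity: a currency is expected to depreciate against the dollar by roughly its inflation differential, so each coupon can be restated as an approximate USD-equivalent cost using the consensus inflation figures given in the case. This is only a crude screen under that PPP assumption, not the full cash-flow valuation using the Exhibit 39.3 forward rates.

```python
# Rough relative-PPP screen for Vale's three issue alternatives. Under
# relative PPP, a currency depreciates against the dollar by roughly the
# inflation differential, so an approximate USD-equivalent borrowing cost is
# coupon + (expected USD inflation - expected local inflation).
coupons = {"USD": 0.05240, "EUR": 0.04375, "GBP": 0.05475}          # par coupon quotes from the case
expected_inflation = {"USD": 0.028, "EUR": 0.021, "GBP": 0.031}     # five-year consensus from the case

usd_equivalent = {
    ccy: coupon + (expected_inflation["USD"] - expected_inflation[ccy])
    for ccy, coupon in coupons.items()
}
for ccy, cost in sorted(usd_equivalent.items(), key=lambda kv: kv[1]):
    print(f"{ccy}: approx. {cost:.3%} USD-equivalent cost")
```

A proper comparison would discount each bond's full coupon and principal cash flows at the currency-specific rates in Exhibit 39.3 and convert at forward exchange rates.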
J&L Railroad
It was Saturday, April 25, 2009, and Jeannine Matthews, chief financial officer at J&L
Railroad (J&L), was in the middle of preparing her presentation for the upcoming board
of directors meeting on Tuesday. Matthews was responsible for developing alternative
strategies to hedge the company’s exposure to locomotive diesel-fuel prices for the next
12 months. In addition to enumerating the pros and cons of alternative hedging
strategies, the board had asked for her recommendation for which strategy to follow.
Fuel prices had always played a significant role in J&L’s profits, but management
had not considered the risk important enough to merit action. In February, as the board reviewed the details of the company’s performance for 2008, it discovered that, despite an increase of $154 million in rail revenues, operating margin had shrunk by $114 million, largely due to an increase in fuel costs (Exhibits 40.1 and 40.2).
Having operating profit fall by 11% in 2008 after it had risen 9% in 2007 was
considered unacceptable by the board, and it did not want a repeat in 2009.
EXHIBIT 40.1 | Consolidated Income Statement, 2006–08 (in millions of dollars) December 31
Source: Main Street Trading data.
EXHIBIT 40.2 | Consolidated Balance Sheets, 2007–08 (in millions of dollars) December 31
Recently in a conversation with Matthews, the chairman of the board had expressed
his personal view of the problem:
Our business is running a railroad, not predicting the strength of an oil cartel or whether one Middle East
nation will invade another. We might have been lucky in the past, but we cannot continue to subject our
shareholders to unnecessary risk. After all, if our shareholders want to speculate on diesel fuel prices, they can do that on their own; but I believe fuel-price risk should not be present in our stock price. On the other hand, if the recession continues and prices drop further, we could increase our profit margins by not hedging.
Diesel-fuel prices had peaked in early July 2008 but then had trended downward as
a result of the worldwide recession and softening demand. By January 2009, diesel-fuel
prices had fallen to their lowest level since early 2005. At February’s meeting, the
board had decided to wait and see how the energy markets would continue to react to
the recession and softening demand. By March, however, oil and diesel-fuel prices had
begun to rebound, so the board charged Matthews with the task of proposing a hedging
policy at the meeting on April 28.
It was industry practice for railroads to enter into long-term contracts with
their freight customers, which had both good and bad effects. On the positive
side, railroads could better predict available resources by locking in revenues in
advance. On the negative side, fixed-price contracts limited railroads’ profit margins
and exposed them to potentially large profit swings if any of their costs changed. In this
regard, diesel fuel was a particularly troublesome cost for railroads, because it
represented a large cost item that also was difficult to predict due to the volatility of
fuel prices.
An ideal solution to the fuel-price risk would be for railroads to enter into long-term
fixed-price contracts with their fuel suppliers. A fixed-price contract with suppliers
when combined with the fixed-price contracts with freight customers would serve to
steady future profits. Moreover, by contracting with fuel suppliers to deliver all of
J&L’s fuel needs at a fixed price, management could be assured of meeting its fuel
budget numbers at year’s end. At times, fuel suppliers had agreed to such contracts, but
over the years, J&L had not been satisfied with the results. The problem was that when
fuel prices had risen substantially, many suppliers walked away from their commitments
leaving J&L with a list of three unattractive options:
1. Force compliance: J&L could take the supplier to court to enforce the contract; however, many suppliers were thinly capitalized, which meant that legal action against them could put them into bankruptcy. As a result, J&L might get little or nothing from the supplier and yet would be saddled with significant legal fees.
2. Negotiate a new price: This usually meant that J&L would agree to pay at or near the current market price, which was equivalent to ignoring the original contract; plus it set a bad precedent for future contracts.
3. Walk away and buy the fuel on the open market from another supplier: This choice avoided “rewarding” the supplier for defaulting on its contract but was functionally equivalent to never having the contract in the first place.
Based on this history, J&L’s board decided to “assume the fuel suppliers are not the answer to our fuel price problem.” The board then asked Matthews to explore other alternatives to manage the fuel risk and preserve J&L’s relationships with the fuel suppliers.
Matthews had determined that, if J&L were to hedge, it could choose between two basic strategies. The first was to do the hedging in-house by trading futures and options contracts on a public exchange. This presented a number of tradeoffs, including the challenge of learning how to trade correctly. The second was to use a bank’s risk-management products and services. This would cost more but would be easier to implement. For either alternative, she would need to address a number of important details, including how much fuel to hedge and how much risk should be eliminated with the hedge.
Railroad Industry
Railroads hauled record amounts of freight in 2006 and 2007 and began to encounter capacity constraints. In 2008, the industry hauled nearly 2 billion tons of freight, although rail traffic declined due to weakness in the economy. Coal was by far the number one commodity group carried. Other significant commodity groups were chemicals, farm products, food, metallic ores, nonmetallic minerals, and lumber, pulp, and paper products.
Intermodal freight and unit trains had expanded the industry since deregulation in the 1980s.
Rail carriers served as long-distance haulers of intermodal freight, carrying the freight
containers for steamship lines, or trailers for the trucking industry. Unit train loads were
used to move large amounts of a single commodity (typically 50 or more cars) between
two points using more efficient locomotives. A unit train would be used, for example, to
move coal between a coal mine and an electric generating plant.
Several factors determined a railroad’s profitability: government regulation,
oligopolistic competition within the industry, and long-term contracts with shippers and
suppliers. The railroad industry had a long history of price regulation; the government had feared monopolistic pricing, but heavy regulation had driven the industry to the brink of ruin in the 1970s. Finally recognizing the intense competition for most rail traffic, Congress passed the Staggers Rail Act of 1980, allowing railroads to manage their own assets, price services based on market demand, and earn adequate revenues to
support their operations. America’s freight railroads paid almost all of the costs of
tracks, bridges, and tunnels themselves. In comparison, trucks and barges used highways
and waterways provided and maintained by the government.
After the Staggers Act was passed, railroad fuel efficiency rose 94%. By 2009, a
freight train could move a ton of freight 436 miles on a single gallon of locomotive
diesel fuel, approximately four times as far as it could by truck. The industry had spent
considerable money on the innovative technology that improved the power and
efficiency of locomotives and produced lighter train cars. Now, a long freight train
could carry the same load as 280 trucks while at the same time producing only one-third
the greenhouse-gas emissions.
Market share was frequently won or lost solely on the basis of the price charged by
competing railroads. Although rarely more than two or three railroads competed for a
particular client’s business, price competition was often fierce enough to prohibit
railroads from increasing freight prices because of fuel-price increases. But, as fuel
prices during 2008 climbed higher and faster than they had ever done before, there was
some discussion in the railroad industry regarding the imposition of fuel surcharges
when contracts came up for renewal. So far, however, none of the major carriers had
followed up the talk with action.
J&L Railroad
J&L Railroad was founded in 1928 when the Jackson and Lawrence rail lines combined
to form one of the largest railroads in the country. Considered a Class I railroad, J&L
operated approximately 2,500 miles of line throughout the West and the Midwest.
Although publicly owned, J&L was one of the few Class I railroads still managed by the
original founding families. In fact, two of the family members still occupied seats on its
board of directors. During the periods 1983–89, 1996–99, and 2004–08, J&L
had invested significant amounts of capital into replacing equipment and
refurbishing roadways. These capital expenditures had been funded either through
internally generated funds or through long-term debt. The investment in more efficient locomotives was now paying off, despite the burden of the principal and interest payments.
J&L had one of the most extensive intermodal networks, accounting for
approximately 20% of revenues during the last few years, as compared to the Class I
industry average of 10%. Transportation of coal, however, had accounted for only 25%
to 30% of freight revenues. With the projected increase in demand for coal from
emerging economies in Asia, management had committed to increase revenues from coal
to 35% within three years. That commitment was now subject to revision due to slowing
global economic activity and the recent fall in energy prices.
Exchange-Traded Contracts
J&L’s exposure to fuel prices during the next 12 months would be substantial. Matthews
estimated that the company would need approximately 17.5 million gallons of diesel
fuel per month or 210 million gallons for the coming year. This exposure could be offset
with the use of heating oil futures and option contracts that were traded on the New York
Mercantile Exchange (NYMEX) (Exhibits 40.3 and 40.4). NYMEX did not trade
contracts on diesel fuel, so it was not possible to hedge diesel fuel directly. Heating oil
and diesel fuel, however, were both distillates of crude oil with very similar chemical
profiles and highly correlated market prices (Exhibit 40.5). Thus, heating-oil futures
were considered an excellent hedging instrument for diesel fuel.
EXHIBIT 40.3 | NYMEX Heating Oil Exchange Futures (in dollars per gallon) April 24, 2009
Each heating-oil futures contract was for the delivery of 42,000 gallons and matured on the last business day of the
preceding month (e.g., the June 2009 contract expires May 29, 2009).
Source: New York Mercantile Exchange data.
EXHIBIT 40.4 | NYMEX Heating Oil Call Option Premiums (in dollars per gallon) April 24, 2009
Source: Main Street Trading data.
EXHIBIT 40.5 | Diesel Fuel versus Heating Oil Prices (in dollars per gallon) January 2007 to March 2009
Source: Graph created by case writer using data from Energy Information Association.
Futures allowed market participants to contract to buy or sell a commodity at a
future date at a predetermined price. If market participants did not want to buy a
commodity today based on its spot price, the current market price, they could use the
futures market to contract to buy it at a future date at the futures price. A futures price
reflected the market’s forecast of what the spot price was expected to be at the
contract’s maturity date. Many factors influenced the spot price and futures prices, both
of which changed constantly depending on the market news. As illustrated in
Exhibit 40.3, the current market conditions were such that the futures market was expecting prices to trend up from the spot of $1.36 to an average of $1.52 over the next 12 months.
A trader who wanted to buy a commodity would take a “long” position in the
contract, whereas a seller would take a “short” position. Because J&L’s profits fell
when fuel prices increased, the company could offset its exposure by taking long
positions in heating-oil futures. For example, instead of waiting two months to buy fuel
on the open market at the going price, J&L could enter into the July futures contract on
April 25 to buy heating oil at $1.4138/gallon (Exhibit 40.3). Therefore, when the
contract matured in two months, J&L would end up buying heating oil at exactly
$1.4138/gallon regardless of the price of heating oil at the time. This could work for or
against J&L depending on whether prices rose or fell during the two months.
For example, if at maturity of the contract, heating oil was selling at $1.4638,
J&L would have benefited by $.05/gallon by owning the futures. If heating oil was
selling for $1.3638 at maturity, J&L would have lost $.05/gallon on the futures. In either
case, however, J&L would pay exactly $1.4138 per gallon.
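The two-month example can be checked mechanically: whatever heating oil sells for at maturity, the long futures gain or loss offsets the movement in the purchase price, pinning the effective cost at the futures price.

```python
# Verifying the July-futures example: the long futures P&L offsets the spot
# purchase, so the effective cost is locked at the futures price.
F = 1.4138  # July 2009 futures price, USD/gallon (Exhibit 40.3)

def effective_cost(spot_at_maturity: float, futures_price: float) -> float:
    futures_gain = spot_at_maturity - futures_price   # long position payoff per gallon
    return spot_at_maturity - futures_gain            # cash fuel cost net of futures P&L

for s in (1.4638, 1.3638):
    print(f"spot {s:.4f}: futures P&L {s - F:+.4f}, effective cost {effective_cost(s, F):.4f}")
```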
Fuel producers or distributors who wanted to fix their selling price would take a
short position in the fuel futures. Alternatively, the seller might be a speculator who
believed that the spot price of fuel at maturity would end up being lower than the current
futures price. In either case, a futures contract was a zero-sum game because one party’s gain exactly equaled the other party’s loss. As long as the futures price was an unbiased
estimate of the future spot price, the expected payoff at maturity was zero for both the
long and short side of the contract. Thus, although the buyer and seller were required to
pay a modest fee to the exchange to enter a futures contract, no money was exchanged
between buyers and sellers at the outset. If the futures price increased over time, the
buyer would collect, and if the futures price decreased, the seller would collect. When
the contract matured, it was rare for the buyer to request physical delivery of the commodity; rather, the vast majority of contracted futures were cash settled.
NYMEX futures created a few problems for J&L management. First, because J&L
would have to use heating-oil contracts to hedge its diesel-fuel exposure, there would
be a small amount of risk created by the imperfect match of the prices of the two
commodities. This “basis risk,” however, was minimal owing to the historically high correlation between the two price series. Of greater concern was that NYMEX
contracts were standardized with respect to size and maturity dates. Each heating-oil
futures contract was for the delivery of 42,000 gallons and matured on the last business
day of the preceding month. Thus, J&L faced a maturity mismatch because the hedge
would only work if the number of gallons being hedged was purchased specifically on
the day the futures contract matured. In addition, J&L faced a size mismatch because the
number of gallons needed in any month was unlikely to equal an exact multiple of
42,000 gallons.
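The size mismatch is easy to quantify from the case's figures of 17.5 million gallons per month and the 42,000-gallon contract unit. A one-for-one hedge ratio is assumed here; a refined hedge would scale the position by the statistical relationship between diesel and heating-oil prices.

```python
# Sketch of the contract-size mismatch: monthly diesel need vs. the standard
# 42,000-gallon heating-oil contract. A 1.0 hedge ratio (one gallon of
# heating oil per gallon of diesel) is assumed for simplicity.
CONTRACT_GALLONS = 42_000
monthly_need = 17_500_000  # gallons of diesel per month (case estimate)

contracts = round(monthly_need / CONTRACT_GALLONS)   # nearest whole contract
hedged_gallons = contracts * CONTRACT_GALLONS
mismatch = hedged_gallons - monthly_need
print(f"{contracts} contracts hedge {hedged_gallons:,} gallons "
      f"({mismatch:+,} gallons vs. the monthly need)")
```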
Some institutional features of NYMEX futures contracts had to be considered as
well. NYMEX futures were “marked to market” daily, which meant that every
investor’s position was settled daily, regardless of whether the position was closed or
kept open. Daily marking-to-market limited the credit risk of the transaction to a single
day’s movement of prices. To further reduce the credit risk, the exchange required
margin payments as collateral. When a contract was initially opened, both parties were
required to post an initial margin equal to approximately 5% or less of the contract
value. At the end of each trading day, moneys were added or subtracted from the margin
account as the futures trader’s position increased or decreased in value. If the value of
the position declined below a specified maintenance level, the trader would be required
to replenish the margin to its initial margin level. Thus, the combination of daily
marking-to-market and the use of margins effectively eliminated any credit risk for
exchange-traded futures contracts. Still, the daily settlement process created a cash-flow
risk because J&L might have to make cash payments well in advance of the maturity of a contract.
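The daily settlement mechanics can be sketched as follows. The price path, the 5% initial margin, and the 75% maintenance level are hypothetical illustrations, not NYMEX figures.

```python
# Sketch of daily marking-to-market on one long heating-oil contract. The
# settle prices, 5% initial margin, and 75% maintenance level are all
# hypothetical illustrations, not exchange figures.
GALLONS = 42_000
prices = [1.4138, 1.4038, 1.3898, 1.4200]        # hypothetical daily futures settles
initial_margin = 0.05 * prices[0] * GALLONS
maintenance = 0.75 * initial_margin

margin = initial_margin
total_calls = 0.0
for prev, today in zip(prices, prices[1:]):
    margin += (today - prev) * GALLONS           # daily gain/loss credited to margin
    call = 0.0
    if margin < maintenance:                     # margin call: top back up to initial
        call = initial_margin - margin
        margin = initial_margin
    total_calls += call
    print(f"settle {today:.4f}: margin {margin:,.2f}, variation call {call:,.2f}")
```

Note how the drop on the third day pushes the account below maintenance and triggers a cash call well before the contract matures, which is exactly the cash-flow risk described above.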
In addition to futures contracts, it was possible to buy NYMEX options on the
futures. A call option gave the buyer the right, but not the obligation, to go long on the
underlying commodity futures at a given price (the strike price) on or before
the expiration date. A put option gave the buyer the right to go short on the
futures at the strike price. The typical futures option expired a few days prior to the
expiration of the underlying futures contract to give the counterparties time to offset their
positions on the futures exchange. Options were offered at a variety of strike prices and
maturities (Exhibit 40.4). Unlike the underlying futures contract, puts and calls
commanded a market price called the premium. A call premium increased as the spread
of the futures price over the strike price increased, whereas a put premium increased as
the spread of the strike price over the futures price increased. The premiums of both
puts and calls were higher for options with more time to maturity. Thus, unlike with
futures, option buyers had to pay a premium to buy the contract, in addition to the
transaction fee paid by both buyer and seller.
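The payoff logic at expiration can be expressed compactly. The prices below are hypothetical, and the premium and transaction fees are ignored for simplicity.

```python
# Payoff at expiration for options on futures (premium and fees ignored;
# prices are hypothetical). A call pays the excess of the futures price over
# the strike; a put pays the excess of the strike over the futures price.
def call_payoff(futures_price, strike):
    return max(futures_price - strike, 0.0)

def put_payoff(futures_price, strike):
    return max(strike - futures_price, 0.0)
```

Consistent with the premium behavior described above, a call is worth more the further the futures price sits above the strike, and a put the reverse.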
The Risk-Management Group at Kansas City National Bank
Walt Bernard, vice president of the risk-management group of Kansas City National
Bank (KCNB), had recently given a presentation to J&L senior management in which he
described the wide range of risk-management products and techniques available to
protect J&L’s profit margin. Each technique used a particular financial product to hedge
by various degrees J&L’s exposure to diesel-fuel price changes. The products offered
by KCNB were completely financial in design (i.e., no actual delivery of the commodity
took place at maturity). To hedge diesel fuel, KCNB offered No. 2 heating-oil contracts,
the same commodity traded on the NYMEX. Also similar to trading on the NYMEX,
working with KCNB meant that J&L could continue to do business as usual with its
suppliers and perform its hedging activities independently.
The primary risk-management products offered by KCNB were commodity swaps,
caps, floors, and collars (see Exhibit 40.6 for cap and floor quotes). KCNB’s
instruments were designed to hedge the average price of heating oil during the contract
period. By contrast, NYMEX futures and options were contracts designed against the
spot price in effect on the last day of the contract. In a commodity swap, the bank agreed
to pay on the settlement date if the average price of heating oil was above the
agreed-upon swap price for the year. Conversely, J&L would have to pay the bank if the
average price was below the contracted swap price. Thus, a swap was essentially a
custom-fit futures contract, with KCNB rather than NYMEX carrying the credit risk.
Because the swap was priced on the average heating-oil price, settlement occurred at
the end of the swap (12 months in J&L’s case) rather than daily as with NYMEX futures.
In addition, KCNB would not require J&L to post a margin but would charge a nominal
up-front fee as compensation for accepting J&L’s credit risk. KCNB was currently
quoting the 12-month swap price for heating oil as $1.522/gallon.
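The swap settlement described above reduces to a single year-end payment. The swap price is KCNB's quote from the case; the notional volume below is a hypothetical illustration.

```python
# One-shot settlement of the 12-month commodity swap at KCNB's quoted price
# of $1.522/gallon. The notional volume used below is hypothetical.
SWAP_PRICE = 1.522   # $/gallon, from the case

def swap_settlement(average_price, gallons):
    """Positive: the bank pays J&L; negative: J&L pays the bank."""
    return (average_price - SWAP_PRICE) * gallons
```

Either way, J&L's all-in fuel cost is locked at the swap price: a settlement receipt offsets expensive spot fuel, and a settlement payment offsets cheap spot fuel.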
Page 507
KCNB also offered commodity options, referred to as caps, floors, and collars. A
cap was essentially a call option; a floor was a put option; and a collar was the
combination of a cap and a floor. For a cap, KCNB agreed to pay the excess of the
realized average fuel price over the cap’s “strike price.” If the average fuel price never
reached the strike price, KCNB would pay nothing. As for any option, J&L would need
to pay KCNB a premium for the cap. The cap premium varied according to how far the
strike price was above the expected price. If the strike was close to the
expected price implied by the futures contracts, J&L would have to pay a
relatively high premium. If J&L was willing to accept some risk by contracting for a
strike price that was significantly higher than the expected average price, the premium
would be smaller. In any case, the cap would allow J&L to take advantage of price
decreases and yet still be protected from price increases above the cap’s strike price.
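The cap works like a call on the average fuel price; a minimal sketch, where the strike and volume are assumptions for illustration:

```python
# The cap as a call on the average fuel price. Strike and volume below are
# hypothetical; KCNB pays only the excess of the average over the strike.
def cap_payout(average_price, strike, gallons):
    return max(average_price - strike, 0.0) * gallons
```

If the average never reaches the strike, the payout is zero and the up-front premium is a sunk cost.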
A commodity collar was used to limit the movement of prices within the range of the
cap and floor strike prices. By choosing a collar, J&L would be selling a floor while
simultaneously buying a cap. KCNB agreed to pay the excess, if any, of the average
heating-oil price over the cap strike price. Conversely, J&L would have to pay if the
average price fell below the floor strike price. Collars could be designed to have a
minimal up-front cost by setting the cap and floor strike prices so that the revenue
derived from selling the floor exactly offset the premium for buying the cap. If J&L
management wanted to guard against prices rising above a certain price (the cap’s strike
price) but was willing to give up the benefit of prices falling below a certain level (the
floor’s strike price), a collar could be the logical choice.
EXHIBIT 40.6 | KCNB Cap and Floor Prices (in dollars per gallon), April 24, 2009
Note: Cap and floor prices are based on the average daily closing price of heating fuel for one year.
Data Source: Company documents.
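The collar's settlement can be sketched as the combination of a purchased cap and a written floor. The strikes and volume below are assumptions for illustration.

```python
# Net settlement of a collar from J&L's side: long a cap, short a floor.
# Strikes and volume are hypothetical. Inside the band, nothing changes hands.
def collar_settlement(average_price, cap_strike, floor_strike, gallons):
    """Positive: KCNB pays J&L; negative: J&L pays KCNB."""
    if average_price > cap_strike:
        return (average_price - cap_strike) * gallons
    if average_price < floor_strike:
        return (average_price - floor_strike) * gallons
    return 0.0
```

The structure caps J&L's effective fuel price at the cap strike and floors it at the floor strike, which is why the premium from the written floor can fund the purchased cap.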
Matthews’s Choice
Jeannine Matthews had decided to recommend that J&L hedge its fuel costs for the next
12 months, at least to some extent. Her analysis revealed that despite using more
efficient equipment, the cost of fuel as a percentage of revenues had increased every
year since 2001 (Exhibit 40.7). The immediate questions to be answered were: How
much fuel should be hedged, and how should the hedge be structured?
Bernard had presented Matthews with a myriad of possibilities, each of which
provided some degree of profit protection. A commodity swap, for example, could be
used to completely fix the price of fuel for the next year. If the price of diesel fuel ended
up falling below the swap price, however, the hedge would be more of an
embarrassment than a benefit to Matthews. Defending a newly initiated hedging policy
would be difficult if J&L’s profits lagged those of other railroads because of a failure to
capture lower fuel costs.
EXHIBIT 40.7 | Fuel Costs 2001–2008
Data Source: Company documents.
Page 508
Then there was the issue of how much fuel to hedge. If the economy experienced a
slowdown, J&L would experience a drop in rail loads, which would result in using less
than the 210 million gallons currently expected. If the hedge was constructed based on
more fuel than needed, it was conceivable that J&L could end up paying to settle its
position with the bank for fuel that it could not use. At the same time, it was also
possible that the economy would pick up, and J&L would end up having to buy a
significant amount of fuel on the open market without the benefit of a hedge.
Instead of a swap, Matthews could use a cap to eliminate the risk of high fuel prices.
This would seem to alleviate the problem of over- or under-hedging because the cap
would only be exercised if it was profitable (i.e., if prices rose beyond the cap’s strike
price). At that point, J&L would prefer to have been over-hedged because the company
would get a higher payoff from the cap. The biggest concern about the cap strategy was
that the price of heating oil might not rise high enough to trigger the cap, in which case
the premium paid for the cap would have only served to reduce profits with no
offsetting benefits. Another alternative was to enter into a collar, which could
be structured to have a zero cost; however, a collar carried a hidden cost because it
gave up the savings if fuel prices happened to fall below the floor’s strike price.
Matthews knew that it was important for her to keep in mind that all of KCNB’s
products could be mimicked using NYMEX futures and options. In fact, maybe there was
a creative way to combine NYMEX securities to give J&L a better hedge than provided
by KCNB’s products. Regardless of what she recommended, Matthews realized that she
needed to devise a hedging strategy that would give J&L the maximum benefit at the
lowest cost and would not prove to be an embarrassment for her or J&L.

Page 513
WNG Capital LLC
WNG succeeds because we create value for all our stakeholders. Our model allows both airlines and investors
to achieve their financial objectives.
—Michael Gangemi, CEO WNG Capital LLC
In late 2013, Wenbo Su, an analyst at WNG Capital LLC (WNG), a U.S.-based asset
management firm, was reviewing the terms of a proposed transaction for his employer.
WNG specialized in aviation leases, and Su was evaluating the terms of a proposed
purchase and leaseback deal with a small private airline based in the United Kingdom.
The essence of the transaction would be to transform the airline from being the owner of
certain aircraft in its fleet to being the lessee of the aircraft for 12 months (through the
end of 2014). WNG would be the new owner of the equipment and would act as
the lessor in the deal. The airline would have full use of the aircraft, but would not own
the aircraft or have use of the aircraft after the end of the lease. The cash flows to all
parties were complicated, and Su planned to conduct a thorough analysis of the
proposed lease terms before making a recommendation to WNG’s CEO, Michael Gangemi.
WNG was established in 2009 as an operating lessor of used commercial aircraft
manufactured by Airbus Group and Boeing Corporation. The company had offices in
Dallas, Boston, and Dublin, Ireland, where Su worked. Since its first investment in
2011, the firm had invested approximately $805 million in 54 aircraft using
special-purpose entities (SPEs). The small firm of 15 employees had global reach, with leases
to 34 airlines operating in 22 countries around the world. At the time of the proposed
deal, the firm was managing 41 aircraft valued in excess of $735 million in four SPEs.
Page 514
In its marketing materials, WNG informed potential investors that it sought an
unlevered, pretax net-annual internal rate of return (IRR) on invested capital of 11% to 14%.
The challenge of analyzing and setting lease terms was not new to Su, who
was aware that the aircraft-leasing market was both small and competitive.
Beyond structuring a deal that was profitable for both WNG and its investors, Su
understood the importance of reputation in such a small market. A deal that proved too
costly for an airline could cost the firm future deals not only with that airline but also
with others. Protecting the firm’s reputation in the industry was as important as
protecting the firm’s capital; and structuring a deal that benefited WNG, its investors,
and the airline presented an interesting challenge given the opaque nature of older
aircraft values.
Aviation Industry
The aviation industry launched on December 17, 1903, in Kill Devil Hills, North
Carolina, when inventors Wilbur and Orville Wright successfully piloted their
heavier-than-air machine on four flights ranging from 12 to 59 seconds. Within 11 years of this
historic event, the commercial airline business had begun, and it quickly evolved into an
industry dominated by regulation. Routes and fares were controlled by governments, and
airlines competed on food and service, including frequency of flights. Fares were high
and the load factor—the percentage of seats filled—was low because the price of air
travel was beyond the reach of many.
The passage of the 1978 Airline Deregulation Act in the United States ushered in a
new age, making it possible for smaller regional economy airlines, such as Southwest
Airlines, to enter the U.S. market. Ticket prices fell and air travel increased. The
European market deregulated several years later, and new airlines such as Ryan Air and
EZJet emerged to offer travelers low-cost flights between the United Kingdom and the
European continent. Following deregulation, air travel became affordable for many and
passenger air travel grew. Exhibit 41.1 shows the historical growth in global air traffic
from 1974 through 2015. Growth in the industry, measured in revenue passenger
kilometers (RPK), was forecast to continue at an average annual rate of 4.5%
from 2011 to 2030, comparable to the 4.6% growth recorded from 1995 to 2010.
Deregulation reduced government control of routes and fares but had little impact on
the regulations governing aircraft safety. Regulations required that aircraft demonstrate
“airworthiness” through a certification process. The process included registering the
aircraft, followed by intensive physical and records inspection. Once a certificate had
been issued, the aircraft operator was required to keep detailed records for each
aircraft, documenting each flight hour and flight cycle (defined as a take-off and
EXHIBIT 41.1 | Global Air Transport: Billions of Passengers Carried 1970–2015
Source: Created by author from data provided by the World Bank from the International Civil Aviation Organization, Civil
Aviation Statistics of the World, and ICAO staff estimates (accessed Feb. 3, 2017).
Page 515
landing), as well as all maintenance performed on the aircraft, to prove continued airworthiness.
The development of widespread fatigue damage (WFD) was a major safety issue for
aircraft with high hour and cycle counts. To reduce the risk of passenger injury,
regulations required increasingly frequent airframe inspections for airframes
with high hour and cycle counts and specific service actions to preclude the onset of
WFD. Each airframe was tested for its limits of validity (LOV), defined as the period of
time (in cycles, hours, or both) up to which WFD would not occur. The LOV set the
operational limits of the airframe’s maintenance program and thus defined the airframe’s
usable life. Separate regulations governed aircraft engines, which were unaffected by WFD.
To meet the regulations, aircraft and their parts had to be tracked by both their age
and flight cycles. The records, referred to as back-to-birth traceability, or “trace,” had
to be available to the FAA, as well as the next owner/operator of the aircraft. Without
complete trace from original delivery of an aircraft and its related parts, the owner
could not prove the aircraft’s airworthiness, and the aircraft could not be operated
commercially. Parts lacking complete trace had zero residual value. Detailed
recordkeeping, therefore, was vital to maintaining the value of an aircraft.
Two manufacturers, U.S.-based Boeing and France-based Airbus, dominated the
aircraft industry. Each offered a wide range of aircraft, from small single-aisle to large
wide-body aircraft. Among the most popular for leasing were short- to medium-range,
narrow-body commercial jet aircraft, and the most popular of these was the Boeing 737.
Originally introduced in 1967, the 737 design developed into a family of 10 models,
each with the capacity to transport 85 to 215 passengers. Since its inception, Boeing had
delivered more than 7,700 of the narrow-bodied jets to airlines around the globe. More
than 4,100 remained in service, used by more than 500 airlines, servicing 1,200
Page 516
destinations in 190 countries.
In 1981, Boeing introduced the 757, a slightly larger aircraft. The mid-size,
narrow-body twin-engine jet aircraft was intended for short- to medium-range routes and could
carry up to 295 passengers for a maximum of 4,100 nautical miles. The larger capacity,
however, came at the expense of fuel efficiency, and only 1,049 of the 757 aircraft were
built before production ended in 2004.
Airbus introduced the A320 in 1984. A close competitor of the 737, the A320 also
developed into a family of multiple models, accommodating as many as 220 passengers.
Since its introduction, Airbus had built more than 7,100 of the A320 family. Together,
the 737 and the A320 numbered more than 11,600 aircraft in service, representing
approximately 58% of the worldwide fleet. The retirement age for narrow-body aircraft
averaged approximately 25 years. Prices for new aircraft ranged from $32 million to
$114 million, and each manufacturer had a backlog of orders in the thousands.
Aircraft Financing
For an airline to buy and own an aircraft required a significant capital investment.
Leasing aircraft, however, improved an airline’s financial flexibility by improving its
liquidity position and balance sheet. In addition, leasing aircraft improved an airline’s
operating flexibility by allowing it to respond to short- and medium-term fluctuations in
demand, as well as changes in technology and route structures, without capital-intensive
investments. Over the course of its 25-year economic life, an aircraft could be leased
multiple times, with the owner/lessor retaining the residual-value risk until the aircraft
was either sold or retired and converted to parts. Overall, operating leases were
attractive to airlines because of the low capital outlay, flexibility for fleet planning,
increased access to new or improved technology, shortened delivery times, and the
elimination of residual-value risk.
As illustrated in Exhibit 41.2, aircraft leasing gained momentum following
deregulation. In the face of increasing competition, many airlines pursued leasing
aircraft to maintain as much liquidity as possible. For smaller start-up airlines, leasing
offered an additional benefit: established leasing companies were able to access bank
lines of credit and the capital markets at lower costs than the start-ups could.
Capital versus Operating Leases
WNG followed U.S. accounting rules and Su was familiar with the existing rules
regarding both capital and operating leases. Operating leases were generally perceived
to have a number of financial advantages for a lessee, but to qualify as an operating
lease meant that it could not meet any of the criteria of a capital lease. According to the
Financial Accounting Standards Board (FASB) Statement No. 13, a lease was
considered a capital lease if any of the following four criteria were true.
1. Ownership of the asset transferred to the lessee by the end of the lease term.
2. The lease contained a bargain-purchase option, whereby the lessee paid below fair market value for the property at the end of the lease.
3. The lease term was equal to 75% or more of the economic life of the property.
4. The present value of the lease payments over the lease term was equal to or greater than 90% of the fair market value of the leased property at the beginning of the lease.
EXHIBIT 41.2 | Growth of Leased Aircraft 1970–2012
Sources: Created by author using Boeing Corporation data from Avolon Holdings Limited, Form F-1 Registration Statement, December 1, 2014, 90 (accessed Feb. 3, 2017), and “Aircraft Leasing—A Promising Investment Market for Institutional Investors,” KGAL Group, 3 (accessed Feb. 3, 2017).
Page 517
In a capital lease, the FASB required that the lessee include both the asset (the
property) and the corresponding liability (the lease) on its balance sheet. At the end of
the lease term, the lessee retained ownership of the property. Importantly,
capital-lease payments were not tax-deductible expenses, but depreciation
expenses associated with the asset could be deducted by the lessee and, as the owner,
the lessee bore the risk of any changes in the asset’s value, including depreciation.
Operating leases were treated significantly differently in terms of ownership and
thus balance-sheet impact. Under an operating lease, the lessor retained ownership of
the leased property and included the property as an asset on its balance sheet. The
lessee enjoyed the use of the property for the term of the lease without having reported
the asset on its balance sheet. The lessee recorded lease payments as ordinary business
expenses, deductible from taxable income. At the conclusion of the lease term, physical
control of the leased property returned to the lessor.
Under the existing rules, long-term leases—more than 12 months—were not
reported as liabilities on the balance sheet. Earlier in the year, the International
Accounting Standards Board (IASB) and the FASB had published an exposure draft
outlining proposed changes to the accounting for leases, including a requirement that
lessees would recognize assets and liabilities for leases of more than 12 months. The
accounting boards believed that the proposed changes would provide investors with
“greater transparency about . . . exposure to credit risk and asset risk.” If approved, Su
Page 518
recognized that the proposed rules would have a significant impact on the financial
statements of airlines and could affect WNG’s business model. Su also knew that such
changes would take years to implement, since the IASB and FASB had been studying the
issue since 2006.
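The four FASB Statement No. 13 tests can be expressed as a simple classifier; a sketch, where the sample inputs in the comment are illustrative rather than case figures:

```python
# A sketch of the four FASB Statement No. 13 tests: meeting any one of
# them makes the lease a capital lease for the lessee.
def is_capital_lease(ownership_transfers, has_bargain_purchase_option,
                     lease_term_years, economic_life_years,
                     pv_of_lease_payments, fair_market_value):
    return bool(ownership_transfers
                or has_bargain_purchase_option
                or lease_term_years >= 0.75 * economic_life_years
                or pv_of_lease_payments >= 0.90 * fair_market_value)

# Illustrative (not case figures): a 1-year lease on equipment with roughly
# 3 years of economic life left, and rents worth far less than 90% of fair
# value, fails all four tests and would be an operating lease.
```

A short lease on old equipment with modest rents relative to fair value fails all four tests, which is why a 12-month leaseback like WNG's could stay off the lessee's balance sheet under the existing rules.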
Special-Purpose Entities (SPEs)
Financial institutions often created legal entities known as SPEs to meet specific or
temporary objectives. Often structured as a limited liability company (LLC) or a limited
liability partnership (LLP), an SPE isolated the parent company from the financial and
reputational risk of a large project. SPEs could also be used to hide debt, which could
strengthen the balance sheet, or ownership, which could obscure relationships between
entities. When used to hold a single asset with permits and contract rights (e.g., a power
plant or an aircraft), SPEs simplified transfer of the asset. When registered in low-tax
jurisdictions, such as Ireland, SPEs offered tax advantages. Ireland had become popular
for aircraft finance and leasing activities because of its favorable tax legislation and
treaties. Under Irish tax legislation, SPEs could be liable for a corporate tax rate of up
to 25%, but with appropriate structuring the SPE’s taxable profit would be minimal. In
addition, Ireland had negotiated bilateral tax treaties with the majority of countries
where aircraft were operated. Under these treaties, lease payments made to Irish
registered owners were not subject to the host-country withholding taxes that otherwise
applied to lease income.
The Lease Proposal
The deal being reviewed by Su was the purchase and leaseback of three Boeing
757-200 aircraft, each with two Rolls-Royce engines and all related maintenance and
technical records (Exhibit 41.3 lists the equipment). The deal also included two spare
engines and the related maintenance and technical records. The two engines would be
sold as is without a full QEC (Quick Engine Change). Under the terms of the initial
agreement as detailed in the Letter of Intent (LOI), WNG would establish an SPE as the
purchaser and lessor of the equipment. On the delivery date, December 15, 2013, the
SPE would make payment of $15 million for the three aircraft and two spare engines.
Under the proposed terms of the lease, the lessee would make rent payments of
$325,000 at the beginning of each of the 12 months, and the lessee would be responsible
for all maintenance, insurance, and taxes on the aircraft during the lease.
The deal involved some potential wrinkles. First, Su needed to consider the
adequacy of the maintenance and technical records associated with the equipment. Su
was already aware that some of the equipment lacked complete back-to-birth
traceability based on information provided by the seller. Prior to signing the LOI, the
seller had disclosed complete back-to-birth traceability on one complete set of landing
gear, partial traceability on another set, and no back-to-birth traceability on a third. The
purchase price of $15 million reflected the lack of full traceability for the landing gear.
The LOI specified that if the airline were to provide proof of full traceability for the
landing gear prior to closing, the purchase price would be increased to $15.5 million.
Given that the seller had proactively disclosed the missing trace for the landing gear, Su
was hoping that the full inspection results would confirm that the remaining equipment
EXHIBIT 41.3 | Equipment to be Purchased and Leased Back
Source: Created by author.
had full traceability. In the event that this proved too optimistic, the LOI provided an
opportunity to renegotiate terms. The LOI specified that both the purchase price and
rental payments were subject to further negotiation if inspections revealed missing trace
for any equipment other than the landing gear.
Another potential wrinkle was that the deal was for 757s rather than the more
common 737s, which could affect WNG’s options for the aircraft at the end of the lease.
The 757 was a versatile aircraft that was popular with pilots because of its more
powerful engines and ability to fly in any weather. In addition to its larger capacity, the
757 had a longer range and could be used for trans-Atlantic flights. Even so, the 757
was not as popular among airlines as the 737, primarily because the larger size made it
much more expensive to operate. The higher operating costs suppressed overall demand
for the aircraft, making the used 757 market much smaller and much less active than the
737 market. Exhibit 41.4 shows the capacity and operating costs of narrow-body jets in
the short-haul sector.
EXHIBIT 41.4 | Aircraft Capacity and Trip Costs of Selected Aircraft in the Short-Haul Sector, 1,000 nautical miles (1,900 km)
*USG = U.S. gallons.
**FH = flight hour.
Data source: “Analysing the Options for 757 Replacement,” Aircraft Commerce, no. 42, August/September 2005, 29.
Page 519
At the end of an initial lease, WNG was usually able to re-lease rather than part out
its aircraft. Historically, more than 80% of WNG’s deals resulted in re-leasing the
aircraft, either to the current lessee or another airline. Ultimately WNG would sell the
aircraft, or, if the airframe were near the end of its operating life based on the
LOV, WNG might sell the airframe and lease or sell the engines and other
major components, including the landing gear and the auxiliary power unit. Another, less
likely option was a freighter conversion. Aircraft that did not justify further investment
to meet the airworthiness requirements for passenger transport were sometimes
converted to freight-carrying aircraft. Such a conversion required extensive airframe
investment. Federal Express operated 70 converted 757 freighters at the time WNG was
considering this investment.
WNG had used published aircraft-appraisal valuations to determine the proposed
$15 million purchase price for the 757s and engines. To estimate the residual value—
the market value of the equipment at the end of the lease—Su had used comparable
appraisal valuations and engine values based upon expected engine life at lease expiry.
Those valuations suggested a residual value of approximately $14 million. However, Su
knew that the market for 757s was far less robust than the market for more popular
aircraft and that finding a buyer in a timely fashion was often very difficult, unless the
seller was willing to reduce the asking price by up to 20%. Also, when he considered
the age of the 757s and their LOV, Su doubted that selling the aircraft to another operator
would be a viable option when the lease expired.
The most likely option in Su’s view was to re-lease the equipment to another
operator, and for this there was a reasonably healthy market. He expected that when the
lease expired at the end of 2014, the aircraft and engines would have up to three years
Page 520
of remaining operating life, and he surmised that the equipment could be re-leased at the
same monthly rental rate. Su estimated that the most likely outcome for WNG would be
that the equipment would be on lease for 80% of the time during the last three years of
operating life, after which WNG would realize $3 million for each aircraft, including
parts and engines.
Finally, Su considered the lessee. The airline was far less creditworthy than WNG’s
typical client. It had suffered significant losses in recent years and was heavily in debt,
which made Su wonder about the risks of entering into a deal with a heavily indebted
and financially challenged counterparty. Would the airline be able to meet its financial
obligations to WNG? On the plus side, the airline had completed several
sale-and-leaseback deals in the past 18 months, which had substantially improved its balance
sheet. The question for Su was whether WNG’s usual IRR of 11% to 14% would be
sufficient compensation for either WNG or WNG’s investors. Therefore, to reflect the
higher risk, Su had chosen to use a required annual return of 20% to evaluate the deal.
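A simplified way to check the deal against Su's 20% hurdle is to discount the LOI cash flows monthly. This is a stylized sketch, not the case's full model: it uses the original $15 million price, twelve $325,000 rents with the first due at closing, and the $14 million appraisal-based residual, and it ignores re-leasing scenarios and any later adjustments to the terms.

```python
# A stylized NPV check of the original LOI terms against Su's 20% hurdle.
# Assumptions (simplifications, not the case's full model): $15 million paid
# at closing, twelve monthly rents of $325,000 with the first due at closing,
# and the $14 million appraisal-based residual realized at the end of month 12.
monthly_rate = 1.20 ** (1 / 12) - 1      # 20% effective annual rate

cash_flows = [-15_000_000 + 325_000]     # t = 0: purchase price plus first rent
cash_flows += [325_000] * 11             # t = 1..11: rent at start of each month
cash_flows += [14_000_000]               # t = 12: residual value

npv = sum(cf / (1 + monthly_rate) ** t for t, cf in enumerate(cash_flows))
```

Under these stylized inputs the NPV at 20% is modestly positive; the residual-value assumption clearly dominates the result, which is why the thin 757 resale market worried Su.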
The Side Letter
As Su began reviewing all the documents of the deal, he learned that the final
inspections had just been completed and WNG had discovered more problems with the
condition of the equipment. At the time of the LOI, WNG had assumed that all the
engines were serviceable and the trace would be intact for all the equipment except the
landing gear. To Su’s chagrin, the most recent inspections revealed that one of the two
spare engines was not serviceable, and a long list of additional equipment lacked
back-to-birth traceability. To deal with these revelations, WNG would draft and send a “side
letter” to the airline, detailing the inspection results and specifying revisions to
the terms in the LOI. The first item in the side letter would be a reduction of the
purchase price by $750,000 as compensation for the unserviceable engine.
The inspection team had estimated the value of the items missing trace
documentation as “at least US$1.4 million.” To compensate for the missing traces,
WNG was seeking two adjustments to the LOI terms: a reduction in the purchase price
by $1.4 million and an increase in the monthly rent by $140,000 (10% of the $1.4
million). The additional rent would serve as an incentive for the seller/lessee to locate
and provide as many of the records as possible and as quickly as possible. If and when
the missing documentation was located and provided, the additional 10% in rent for
those items would no longer apply. Also, if the airline were to agree to re-lease the
equipment beyond 2014, the additional 10% in rent would no longer apply.
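The side-letter arithmetic is straightforward, using the figures described above:

```python
# The side-letter adjustments to the LOI terms (figures from the case).
loi_purchase_price = 15_000_000
loi_monthly_rent = 325_000

engine_credit = 750_000            # compensation for the unserviceable engine
missing_trace_value = 1_400_000    # inspection estimate: "at least US$1.4 million"

revised_purchase_price = loi_purchase_price - engine_credit - missing_trace_value
extra_monthly_rent = 0.10 * missing_trace_value   # 10% of the missing-trace value
revised_monthly_rent = loi_monthly_rent + extra_monthly_rent
```

The price falls to $12.85 million while rent rises to $465,000 per month, with the extra $140,000 dropping away as missing trace documents are produced.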
As Su reviewed the proposed terms of the side letter, it was clear to him that the
revisions to the purchase price and the additional rent would have a substantial impact
on the value of the lease to WNG. He wondered about the value of the unserviceable
engine. Su was aware that a used modern jet engine contained precious metals such as
cadmium and palladium that would have some small value in the scrap market. If the
engine could be sold for parts, it could be worth in the neighborhood of $50,000, but
since the engine was unserviceable, Su was assuming a residual/salvage value of zero
for the analysis.
Like the purchase price, Su’s estimate of the residual value would have to be
adjusted for the unserviceable engine and the missing trace. Su estimated that a
reduction of the appraisal value by $1.4 million would approximate the impact of the
missing trace. If anything, however, the missing trace would make finding a buyer even
more difficult than he had originally thought. The prospect of searching for a buyer for
months and months made it all the more likely to Su that WNG would follow a
re-leasing strategy to stretch out the cash flows for the equipment.
With these issues in mind, Su had begun to create a model for computing the net
present value (NPV) and IRR of the cash flows. Because of the tax advantages of the
SPE structure, Su conducted his analyses with a zero marginal tax rate. Included in the
specific cash flows of the deal (Exhibit 41.5) were the purchase price and rental rates
agreed upon in the LOI, plus the adjustments to the purchase price and the additional
rent demanded in the side letter. Su wondered whether the airline would be able to
locate the missing records, and if not, whether the extra rent would become
unaffordable. Maybe it would be better to propose a larger reduction of the purchase
price and smaller additional rent payments. Most importantly, however, Su wondered
whether this deal would be profitable for WNG.
EXHIBIT 41.5 | Summary of Cash-Flow Assumptions
*Rent is due at the beginning of each month with the first payment due at closing
Source: Author estimates.
Page 525
MoGen, Inc.
On January 10, 2006, the managing director of Merrill Lynch’s Equity-Linked Capital
Markets Group, Dar Maanavi, was reviewing the final drafts of a proposal for a
convertible debt offering by MoGen, Inc. As a leading biotechnology company in the
United States, MoGen had become an important client for Merrill Lynch over the years.
In fact, if this deal were to be approved by MoGen at $5 billion, it would represent
Merrill Lynch’s third financing for MoGen in four years with proceeds raised totaling
$10 billion. Moreover, this “convert” would be the largest such single offering in
history. The proceeds were earmarked to fund a variety of capital expenditures,
research and development (R&D) expenses, working capital needs, as well as a share
repurchase program.
The Merrill Lynch team had been working with MoGen’s senior management to find
the right tradeoff between the conversion feature and the coupon rate for the bond.
Maanavi knew from experience that there was no “free lunch,” when structuring the
pricing of a convertible. Issuing companies wanted the conversion price to be as high as
possible and the coupon rate to be as low as possible; whereas investors wanted the
opposite: a low conversion price and a high coupon rate. Thus, the challenge was to
structure the convert to make it attractive to the issuing company in terms of its cost of
capital, while at the same time selling for full price in the market. Maanavi was
confident that the right balance in the terms of the convert could be found, and he was
also confident that the convert would serve MoGen’s financing needs better than a
straight bond or equity issuance. But, he needed to make a decision about the final terms
of the issue in the next few hours, as the meeting with MoGen was scheduled for early the next morning.

Page 526
Company History
Founded in 1985 as MoGen (Molecular Genetics), the company was among the first in
the biotechnology industry to deliver on the commercial promises of emerging sciences,
such as recombinant DNA and molecular biology. After years of research, MoGen
emerged with two of the first biologically derived human therapeutic drugs, RENGEN
and MENGEN, both of which helped to offset the damaging effects from chemotherapy
for cancer patients undergoing treatment. Those two MoGen products were among the first “blockbuster” drugs to emerge from the nascent biotechnology industry.
By 2006, MoGen was one of the leading biotech companies in an industry that
included firms such as Genentech, Amgen, Gilead Sciences, Celgene, and Genzyme. The
keys to success for all biotech companies were finding new drugs through research and
then getting the drugs approved by the U.S. Food and Drug Administration (FDA).
MoGen’s strategy for drug development was to determine the best mode for attacking a patient’s issue and then to focus on creating solutions via that mode. Under that
approach, MoGen had been able to produce drugs with the highest likelihood of both
successfully treating the patient as well as making the company a competitive leader in
drug quality. In January 2006, MoGen’s extensive R&D expenditures had resulted in a
portfolio of five core products that focused on supportive cancer care. The success of
that portfolio had been strong enough to offset other R&D write-offs so that MoGen was
able to report $3.7 billion in profits in 2005 on $12.4 billion in sales. Sales had grown
at an annual rate of 29% over the previous five years, and earnings per share had
improved to $2.93 for 2005, compared with $1.81 and $1.69 for 2004 and 2003,
respectively (Exhibits 42.1 and 42.2).
EXHIBIT 42.1 | Consolidated Income Statements (in millions of dollars, except per share)
EXHIBIT 42.2 | Consolidated Balance Sheets (in millions of dollars)
The FDA served as the regulating authority to safeguard the public from dangerous
drugs and required extensive testing before it would allow a drug to enter the U.S.
marketplace. The multiple hurdles and long lead-times required by the FDA created a
constant tension with the biotech firms who wanted quick approval to maximize the
return on their large investments in R&D. Moreover, there was always the risk that a
drug would not be approved or that after it was approved, it would be pulled from the
market due to unexpected adverse reactions by patients. Over the years, the industry had
made progress in shortening the approval time and improving the predictability of the
approval process. At the same time, industry R&D expenditures had increased 12.6%
over 2003 in the continuing race to find the next big breakthrough product.
Like all biotech companies, MoGen faced uncertainty regarding new product
creation as well as challenges involved with sustaining a pipeline of future products.
A competitive threat from follow-on biologics, or “biosimilars,” had also begun to emerge. As
drugs neared the end of their patent protection, competitors would produce similar
drugs as substitutes. Competitors could not produce the drug exactly, because they did
not have access to the original manufacturer’s molecular clone or purification process.
Thus, biosimilars required their own approval to ensure they performed as safely as the
original drugs. For MoGen, this threat was particularly significant in Europe, where
several patents were approaching expiration.
Funding Needs
MoGen needed to ensure a consistent supply of cash to fund R&D and to maintain
financial flexibility in the face of uncertain challenges and opportunities. MoGen had
cited several key areas that would require approximately $10 billion in funding for 2006:

Page 527

1. Expanding manufacturing, formulation, and fill and finish capacity: Recently,
the company had not been able to scale up production to match increases in demand
for certain core products. The reason for the problem was that MoGen outsourced
most of its formulation and fill and finish manufacturing processes, and these
offshore companies had not been able to expand their operations quickly
enough. Therefore, MoGen wanted to remove such supply risks both by increasing internal manufacturing capacity at its two existing facilities in Puerto Rico and through new construction in Ireland. These projects represented a majority of MoGen’s total capital expenditures, which were projected to exceed $1 billion in 2006.
2. Expanding investment in R&D and late-stage trials: Late-stage trials were
particularly expensive, but were also critical as they represented the last big hurdle
before a drug could be approved by the FDA. With 11 late-stage “mega-site” trials
expected to commence in 2006, management knew that successful outcomes were
critical for MoGen’s ability to maintain momentum behind its new drug development
pipeline. The trials would likely cost $500 million. MoGen had also decided to
diversify its product line by significantly increasing R&D to approximately $3 billion
for 2006, which was an increase of 30% over 2005.
3. Acquisition and licensing: MoGen had completed several acquisition and licensing
deals that had helped it achieve the strong growth in revenues and earnings per share
(EPS). The company expected to continue this strategy and had projected to complete
a purchase of Genix, Inc., in 2006 for approximately $2 billion in cash. This
acquisition was designed to help MoGen capitalize on Genix’s expertise in the
discovery, development, and manufacture of human therapeutic antibodies.
4. The stock repurchase program: Due to the highly uncertain nature of its operations, MoGen had never paid dividends to shareholders but instead had chosen to pursue a
stock repurchase program. Senior management felt that this demonstrated a strong
belief in the company’s future and was an effective way to return cash to shareholders
without being held to the expectation of having a regular dividend payout. Due to
strong operational and financial performance over the past several years, MoGen had
executed several billion dollars worth of stock repurchases, and it was management’s
intent to continue repurchases over the next few years. In 2005, MoGen purchased a
total of 63.2 million shares for an aggregate $4.4 billion. As of December 31, 2005, MoGen had $6.5 billion remaining in the authorized share repurchase plan, of which management expected to spend $3.5 billion in 2006.

Page 528
With internally generated sources of funds expected to be $5 billion (net income
plus depreciation), MoGen would fall well below the $10 billion expected uses of
funds for 2006. Thus, management estimated that an offering size of about $5 billion
would cover MoGen’s needs for the coming year.
Convertible Debt
A convertible bond was considered a hybrid security, because it had attributes of both
debt and equity. From an investor’s point of view, a convert provided the safety of a
bond plus the upside potential of equity. The safety came from receiving a fixed income
stream in the form of the bond’s coupon payments plus the return of principal.
The upside potential came from the ability to convert the bond into shares of
common stock. Thus, if the stock price should rise above the conversion price, the
investor could convert and receive more than the principal amount. Because of the
potential to realize capital appreciation via the conversion feature, a convert’s coupon
rate was always set lower than what the issuing company would pay for straight debt.
Thus, when investors bought a convertible bond, they received less income than from a
comparable straight bond, but they gained the chance of receiving more than the face
value if the bond’s conversion value exceeded the face value.
To illustrate, consider a convertible bond issued by BIO, Inc., with a face value of
$1,000 and a maturity of five years. Assume that the convert carries a coupon rate of 4%
and a conversion price of $50 per share and that BIO’s stock was selling for $37.50 per
share at the time of issuance. The coupon payment gives an investor $40 per year in
interest (4% × $1,000), and the conversion feature gives investors the opportunity to
exchange the bond for 20 shares (underlying shares) of BIO’s common stock ($1,000 ÷
$50). Because BIO’s stock was selling at $37.50 at issuance, the stock price would need to appreciate by 33% (the conversion premium) to reach the conversion price of $50.
For example, if BIO’s stock price were to appreciate to $60 per share, investors could
convert each bond into 20 shares to realize the bond’s conversion value of $1,200. On
the other hand, if BIO’s stock price failed to reach $50 within the five-year life of the
bond, the investors would not convert, but rather would choose to receive the bond’s
$1,000 face value in cash.
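The arithmetic of the BIO, Inc. illustration can be checked directly; all figures below come from the example above.

```python
# Arithmetic behind the BIO, Inc. illustration; all figures come from the text.

face = 1_000.0
coupon_rate = 0.04
conversion_price = 50.0
stock_at_issue = 37.50

coupon = coupon_rate * face                      # annual interest per bond
shares = face / conversion_price                 # underlying shares per bond
premium = conversion_price / stock_at_issue - 1  # required stock appreciation

print(coupon)             # 40.0
print(shares)             # 20.0
print(round(premium, 2))  # 0.33, the 33% conversion premium

# If the stock rises to $60, conversion value exceeds the $1,000 face value:
conversion_value = shares * 60.0
print(conversion_value)   # 1200.0
```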
Because the conversion feature represented a right, rather than an obligation,
investors would postpone conversion as long as possible even if the bond was well “in
the money.” Suppose, for example, that after three years BIO’s stock had risen to $60.
Investors would then be holding a bond with a conversion value of $1,200, which is to
say, if converted they would receive the 20 underlying shares worth $60 each. With two
years left until maturity, however, investors would find that they could realize a higher
value by selling the bond on the open market, rather than converting it. For example, the bond might be selling for $1,250, or $50 higher than the conversion value. Such a premium
over conversion value is typical, because the market recognizes that convertibles have
unlimited upside potential, but protected downside. Unlike owning BIO stock directly,
the price of the convertible bond cannot fall lower than its bond value—the value of the
coupon payments and principal payment—but its conversion value could rise as high as
the stock price will take it. Thus, as long as more upside potential is possible, the
premium price will exist, and investors will have the incentive to sell their bonds,
rather than convert them prior to maturity.
Academics modeled the value of a convertible as the sum of the straight bond value
plus the value of the conversion feature. This was equivalent to valuing a convert as a
bond plus a call option or a warrant. Although MoGen did not have any warrants
outstanding, there was an active market in MoGen options (Exhibit 42.3). Over the past
five years, MoGen’s stock price had experienced modest appreciation with considerable variation (Exhibit 42.4).

Page 529
EXHIBIT 42.3 | MoGen Option Data: January 10, 2006 (MoGen closing stock price = $77.98)
EXHIBIT 42.4 | MoGen Stock Price for 2001 to 2005
MoGen’s Financial Strategy
As of December 31, 2005, the company had approximately $4 billion of long-term debt
on the books (Exhibit 42.5). About $2 billion of the debt was in the form of straight
debt with the remaining $1.8 billion as seven-year convertible notes. The combination
of industry and company-specific risks had led MoGen to keep its long-term debt at or
below 20% of total capitalization. There was a common belief that because of the
industry risks, credit-rating agencies tended to penalize biotech firms by placing a
“ceiling” on their credit ratings. MoGen’s relatively low leverage, however, allowed it
to command a Standard and Poor’s (S&P) rating of A+, which was the highest rating
within the industry. Based on discussions with S&P, MoGen management was confident
that the company would be able to maintain its rating for the $5 billion new straight debt
or convertible issuance. For the current market conditions, Merrill Lynch had estimated
a cost to MoGen of 5.75%, if it issued straight five-year bonds. (See Exhibit 42.6 for
capital market data.)
EXHIBIT 42.5 | Long-Term Debt as of December 31, 2005 (in millions of dollars)
EXHIBIT 42.6 | Capital Market Data for January 2006
MoGen’s seven-year convertible notes had been issued in 2003 and carried a
conversion price of $90.000 per share. Because the stock price was currently at $77.98
per share, the bondholders had not yet had the opportunity to exercise the conversion
option. Thus, the convertibles had proven to be a low-cost funding source for MoGen,
as it was paying a coupon of only 1.125%. If the stock price continued to remain below
the conversion price, the issue would not be converted and MoGen would simply retire
the bonds in 2010 (or earlier, if called) at an all-in annual cost of 1.125%. On the other
hand, if the stock price appreciated substantially by 2010, then the bondholders would
convert and MoGen would need to issue 11.1 shares per bond outstanding or
approximately 20 million new shares. Issuing the shares would not necessarily be a bad
outcome, because it would amount to issuing shares at $90 rather than at $61, the stock
price at the time of issuance.
Since its initial public offering (IPO), MoGen had avoided issuing new equity,
except for the small amounts of new shares issued each year as part of management’s
incentive compensation plan. The addition of these shares had been more than offset,
however, by MoGen’s share repurchase program, so that shares outstanding had fallen from 1,280 million in 2004 to 1,224 million in 2005. Repurchasing shares served two
purposes for MoGen: (1) It had a favorable impact upon EPS by reducing the shares
outstanding; and (2) It served as a method for distributing cash to shareholders.
Although MoGen could pay dividends, management preferred the flexibility of repurchasing shares. If MoGen were to institute a dividend, there was always
the risk that the dividend might need to be decreased or eliminated during hard times
which, when announced, would likely result in a significant drop of the stock price.
Merrill Lynch Equity-Linked Origination Team
The U.S. Equity-Linked Origination Team was part of Merrill Lynch’s Equity Capital
Markets Division that resided in the Investment Banking Division. The team was the
product group that focused on convertible, corporate derivative, and special equity
transaction origination for Merrill Lynch’s U.S. corporate clients. As product experts,
members worked with the industry bankers to educate clients on the benefits of utilizing
equity-linked instruments. They also worked closely with derivatives and convertible
traders, the equity and equity-linked sales teams, and institutional investors including
hedge funds, to determine the market demand for various strategies and securities.
Members had a high level of expertise in tax, accounting, and legal issues. The technical
aspects of equity-linked securities were rigorous, requiring significant financial
modeling skills, including the use of option-pricing models such as Black-Scholes, as well as proprietary variants used to price convertible bonds. Within the
equities division and investment banking, the team was considered one of the most
technically capable and had proven to be among the most profitable businesses at
Merrill Lynch.
Pricing Decision
Dar Maanavi was excited by the prospect that Merrill Lynch would be the lead book
runner of the largest convertible offering in history. At $5 billion, MoGen’s issue would
represent more than 12% of the total proceeds for convertible debt in the United States
during 2005. Although the convert market was quite liquid and the Merrill Lynch team
was confident that the issue would be well received, the unprecedented size heightened
the need to make it as marketable as possible. Maanavi knew that MoGen wanted a
maturity of five years, but was less certain as to what he should propose regarding the
conversion premium and coupon rate. These two terms needed to be satisfactory to
MoGen’s senior management team while at the same time being attractive to potential
investors in the marketplace. Exhibit 42.7 shows the terms of the offering that had
already been determined.
Most convertibles carried conversion premiums in the range of 10% to 40%. The
coupon rates for a convertible depended upon many factors, including the conversion
premium, maturity, credit rating, and the market’s perception of the volatility of the
issuing company’s stock. Issuing companies wanted low coupon rates and high
conversion premiums, whereas investors wanted the opposite: high coupons and low conversion premiums.

EXHIBIT 42.7 | Selected Terms of Convertible Senior Notes

Page 531

Companies liked a high conversion premium, because it
effectively set the price at which its shares would be issued in the future. For example,
if MoGen’s bond were issued with a conversion price of $109, it would represent a 40% conversion premium over the current stock price of $77.98. Thus, if the issue were eventually converted, the shares would effectively be sold at a price 40% above the current stock price, so MoGen would issue far fewer new shares than a sale at the current price would require. Of course, a high conversion
premium also carried with it a lower probability that the stock would ever
reach the conversion price. To compensate investors for this reduced upside
potential, MoGen would need to offer a higher coupon rate. Thus, the challenge for
Maanavi was to find the right combination of conversion premium and coupon rate that
would be acceptable to MoGen management as well as desirable to investors.
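A minimal sketch of the link between the conversion premium and the conversion price, using MoGen's $77.98 stock price from the case and the 40% upper end of the typical premium range cited below:

```python
# Conversion price implied by a conversion premium, using MoGen's $77.98
# stock price from the case and a 40% premium for illustration.

stock_price = 77.98
premium = 0.40

conversion_price = stock_price * (1 + premium)
print(round(conversion_price, 2))   # 109.17, roughly the $109 in the text

# Shares issued per $1,000 bond if converted, versus a sale at today's price:
face = 1_000.0
shares_if_converted = face / conversion_price
shares_at_market = face / stock_price
print(round(shares_if_converted, 2))   # 9.16
print(round(shares_at_market, 2))      # 12.82
# The higher the conversion premium, the fewer shares issued per bond.
```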
There were two types of investor groups for convertibles: fundamental investors
and hedge funds. Fundamental investors liked convertibles, because they viewed them
as a safer form of equity investment. Hedge fund investors viewed convertibles as an
opportunity to engage in an arbitrage trading strategy that typically involved holding
long positions of the convertible and short positions of the common stock. Companies
preferred to have fundamental investors, because they took a longer-term view of their
investment than hedge funds. If the conversion premium was set above 40%,
fundamental investors tended to lose interest because the convertible became a more
speculative investment with less upside potential. Thus, if the conversion premium were set at 40% or higher, it could be necessary to offer an abnormally high coupon rate for the convertible. In any case, Maanavi thought a high conversion premium was not
appropriate for such a large offering. It could work for a smaller, more volatile stock,
but not for MoGen and not for a $5 billion offering.
Early in his conversations with MoGen, Maanavi had discussed the accounting
treatment required for convertibles. Recently, most convertibles were being structured to use the “treasury stock method,” which was desirable because it reduced the impact
upon the reported fully diluted EPS. To qualify for the treasury stock method the
convertible needed to be structured as a net settled security. This meant that investors
would always receive cash for the principal amount of $1,000 per bond, but could
receive either cash or shares for the excess over $1,000 upon conversion. The
alternative method of accounting was the if-converted method, which would require
MoGen to compute fully diluted EPS as if investors received shares for the full amount of the bond when they converted, which is to say the new shares equaled the principal
amount divided by the conversion price per share. The treasury stock method, however,
would allow MoGen to report far fewer fully diluted shares for EPS purposes because
it only included shares representing the excess of the bond’s conversion value over the
principal amount. Because much of the issue’s proceeds would be used to fund the stock
repurchase program, MoGen’s management felt that using the treasury stock method
would be a better representation to the market of MoGen’s likely EPS, and therefore
agreed to structure the issue accordingly (see “conversion rights” in Exhibit 42.7).
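The difference between the two accounting treatments can be sketched as follows for a hypothetical net-settled bond; the conversion price and stock prices below are illustrative assumptions.

```python
# The two diluted-share treatments described above, for a hypothetical
# net-settled convertible. The conversion price and stock prices are
# illustrative assumptions.

def if_converted_shares(face, conversion_price):
    """If-converted method: shares counted for the full principal amount."""
    return face / conversion_price

def treasury_stock_shares(face, conversion_price, stock_price):
    """Treasury stock method: shares counted only for the conversion value
    in excess of the principal, since the principal is settled in cash."""
    conversion_value = (face / conversion_price) * stock_price
    excess = max(0.0, conversion_value - face)
    return excess / stock_price

face, conv_price = 1_000.0, 97.0

# Below the conversion price, the treasury stock method adds no shares:
print(treasury_stock_shares(face, conv_price, 90.0))     # 0.0
print(round(if_converted_shares(face, conv_price), 3))   # 10.309

# Above it, only the excess over principal dilutes:
print(round(treasury_stock_shares(face, conv_price, 120.0), 3))   # 1.976
```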
In light of MoGen management’s objectives, Maanavi decided to propose a conversion premium of 25%, which was equivalent to a conversion price of $97.000.
MoGen management would appreciate that the conversion premium would appeal to a
broad segment of the market, which was important for a $5 billion offering. On the other
hand, Maanavi knew that management would be disappointed that the conversion
premium was not higher. Management felt that the stock was selling at a depressed price
and represented an excellent buy. In fact, part of the rationale for having the
stock repurchase program was to take advantage of the stock price being low.
Maanavi suspected that management would express concern that a 25% premium would
be sending a bad signal to the market: a low conversion premium could be interpreted
as management’s lack of confidence in the upside potential of the stock. For a five-year
issue, the stock would only need to rise by 5% per year to reach the conversion price by
maturity. If management truly believed the stock had strong appreciation potential, then
the conversion premium should be set much higher.
If Maanavi could convince MoGen to accept the 25% conversion premium, then
choosing the coupon rate was the last piece of the pricing puzzle to solve. Because he
was proposing a mid-range conversion premium, investors would be satisfied with a
modest coupon. Based on MoGen’s bond rating, the company would be able to issue
straight five-year bonds with a 5.75% yield. Therefore, Maanavi knew that the
convertible should carry a coupon rate noticeably lower than 5.75%. The challenge was
to estimate the coupon rate that would result in the debt being issued at exactly the face
value of $1,000 per bond.
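One way to frame Maanavi's challenge is the decomposition mentioned earlier: a convert's value is the straight bond value plus the value of a call option on the underlying shares. The sketch below solves for the coupon that prices the package at the $1,000 face value. It is a simplification, not Merrill Lynch's proprietary model: the conversion feature is treated as a plain European call, and the volatility and risk-free rate are assumed values (in practice they would be calibrated to the option data in Exhibit 42.3 and the capital market data in Exhibit 42.6).

```python
# Find the coupon at which straight-bond value plus conversion-option value
# equals the $1,000 face. Simplified sketch: the conversion feature is
# modeled as a plain European call (real convert models are richer), and
# sigma and the risk-free rate are ASSUMED values, not case data.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bond_value(coupon_rate, y=0.0575, T=5, face=1_000.0):
    """PV of annual coupons plus principal at the 5.75% straight-debt yield."""
    c = coupon_rate * face
    return sum(c / (1 + y) ** t for t in range(1, T + 1)) + face / (1 + y) ** T

# Stock price and $97 conversion price from the case; sigma and r are assumptions.
S, K, T, r, sigma = 77.98, 97.0, 5, 0.0435, 0.25
shares_per_bond = 1_000.0 / K
option_value = shares_per_bond * bs_call(S, K, T, r, sigma)

# Bisect for the coupon that makes bond value + option value = face value.
lo, hi = 0.0, 0.0575
for _ in range(100):
    mid = (lo + hi) / 2
    if bond_value(mid) + option_value < 1_000.0:
        lo = mid
    else:
        hi = mid
print(round((lo + hi) / 2, 4))   # candidate coupon, well below the 5.75% straight rate
```

The more valuable the conversion option, the lower the coupon needed to bring the package up to par, which is why a convert's coupon always sits below the straight-debt rate.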
Page 537

8 Valuing the Enterprise: Acquisitions and Buyouts

Page 539

Methods of Valuation for Mergers and Acquisitions
This note addresses the methods used to value companies in a merger and acquisitions
(M&A) setting. It provides a detailed description of the discounted-cash-flow (DCF)
approach and reviews other methods of valuation, such as market multiples of peer
firms, book value, liquidation value, replacement cost, market value, and comparable
transaction multiples.
Discounted-Cash-Flow Method
The DCF approach in an M&A setting attempts to determine the enterprise value, or
value of the company, by computing the present value of cash flows over the life of the
company. Because a corporation is assumed to have infinite life, the analysis is broken
into two parts: a forecast period and a terminal value. In the forecast period, explicit
forecasts of free cash flow that incorporate the economic costs and benefits of the
transaction must be developed. Ideally, the forecast period should comprise the interval
over which the firm is in a transitional state, as when enjoying a temporary competitive
advantage (i.e., the circumstances wherein expected returns exceed required returns). In
most circumstances, a forecast period of five or ten years is used.
The terminal value of the company, derived from free cash flows occurring after the
forecast period, is estimated in the last year of the forecast period and capitalizes the
present value of all future cash flows beyond the forecast period. To estimate
the terminal value, cash flows are projected under a steady-state assumption that the
firm enjoys no opportunities for abnormal growth or that expected returns equal
required returns following the forecast period. Once a schedule of free cash flows is
developed for the enterprise, the weighted average cost of capital (WACC) is used to
discount them to determine the present value. The sum of the present values of the
forecast period and the terminal value cash flows provides an estimate of company or
enterprise value.
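The two-part DCF described above can be sketched as a short function; the cash flows, WACC, and growth rate below are illustrative assumptions.

```python
# The two-part DCF described above: PV of the forecast-period free cash flows
# plus the PV of a constant-growth terminal value. All inputs are illustrative.

def enterprise_value(fcfs, wacc, g):
    """fcfs[0] is the first forecast year's FCF; the terminal value
    capitalizes the steady-state FCF for the year after the last forecast."""
    n = len(fcfs)
    pv_forecast = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcfs, start=1))
    tv = fcfs[-1] * (1 + g) / (wacc - g)   # constant-growth terminal value
    pv_tv = tv / (1 + wacc) ** n           # discount TV back from year n
    return pv_forecast + pv_tv

fcfs = [200.0, 230.0, 255.0, 270.0, 280.0]   # five-year forecast
print(round(enterprise_value(fcfs, 0.09, 0.03), 1))
```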
Review of DCF basics
Let us briefly review the construction of free cash flows, terminal value, and the
WACC. It is important to realize that these fundamental concepts work equally well
when valuing an investment project as they do in an M&A setting.
Free cash flows: The free cash flows in an M&A analysis should be the expected
incremental operating cash flows attributable to the acquisition, before consideration of
financing charges (i.e., prefinancing cash flows). Free cash flow equals the sum of net
operating profits after taxes (NOPAT), plus depreciation and noncash charges, less
capital investment and less investment in working capital. NOPAT captures the earnings
after taxes that are available to all providers of capital. That is, NOPAT has no
deductions for financing costs. Moreover, because the tax deductibility of interest
payments is accounted for in the WACC, such financing tax effects are also excluded
from the free cash flow, which is expressed in Equation 43.1:

FCF = NOPAT + Depreciation − CAPEX − ΔNWC (43.1)

NOPAT is equal to EBIT × (1 − t), where t is the appropriate marginal (not average) cash tax rate, which should be inclusive of federal, state, local, and foreign jurisdictional taxes.
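Equation 43.1 translates directly into code; the inputs below are illustrative.

```python
# Equation 43.1 in code form; the inputs are illustrative.

def free_cash_flow(ebit, t, depreciation, capex, delta_nwc):
    nopat = ebit * (1 - t)   # earnings available to all providers of capital
    return nopat + depreciation - capex - delta_nwc

# EBIT of 500 taxed at a 35% marginal rate, plus 80 of depreciation,
# less 120 of capital spending and a 30 increase in net working capital:
print(round(free_cash_flow(500.0, 0.35, 80.0, 120.0, 30.0), 2))   # 255.0
```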
Page 541
The cash-flow forecast should be grounded in a thorough industry and company
forecast. Care should be taken to ensure that the forecast reflects consistency with firm
strategy as well as with macroeconomic and industry trends and competitive pressure.
The forecast period is normally the years during which the analyst
estimates free cash flows that are consistent with creating value. A convenient
way to think about value creation is whenever the return on net assets (RONA) exceeds the WACC. RONA can be divided into an income statement component and a balance sheet component:

RONA = NOPAT/Net Assets = (NOPAT/Sales) × (Sales/Net Assets)
In this context, value is created whenever earnings power increases (NOPAT/Sales)
or when asset efficiency is improved (Sales/Net Assets). In other words, analysts are
assuming value creation whenever they allow the profit margin to improve on the
income statement and whenever they allow sales to improve relative to the level of
assets on the balance sheet.
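The RONA decomposition can be sketched as follows, with illustrative figures:

```python
# The RONA decomposition in code; figures are illustrative.

def rona(nopat, sales, net_assets):
    margin = nopat / sales          # income statement component
    turnover = sales / net_assets   # balance sheet component
    return margin * turnover        # equals NOPAT / net assets

# An 8.125% margin and 1.6x asset turnover imply a 13% RONA:
print(round(rona(325.0, 4_000.0, 2_500.0), 3))   # 0.13
```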
In Equation 43.1, Depreciation is the noncash operating charges (including depreciation, depletion, and amortization) recognized for tax purposes; CAPEX is capital expenditures for fixed assets; and ΔNWC is the increase in net working capital, defined as current assets less the noninterest-bearing current liabilities.

Page 542

Terminal value: A terminal value in the final year of the forecast period is added to reflect the present value of all cash flows occurring thereafter. Because it capitalizes all future cash flows beyond the final year, the terminal value can be a large component of the value of a company, and therefore deserves careful attention. This can be of particular importance when cash flows over the forecast period are close to zero (or even negative) as the result of aggressive investment for growth.
A standard estimator of the terminal value (TV) in the final year of the cash-flow forecast is the constant growth valuation formula (Equation 43.2):

TV = FCFSteadyState / (WACC − g) (43.2)
The free-cash-flow value used in the constant growth valuation formula should
reflect the steady-state cash flow for the year after the forecast period. The assumption
of the formula is that in steady state, this cash flow will grow in perpetuity at the steady-state growth rate. A convenient approach is to assume that RONA remains constant in
perpetuity; that is, both profit margin and asset turnover remain constant in perpetuity.
Under this assumption, the analyst grows all financial statement line items (i.e., revenue,
costs, assets) at the expected steady-state growth rate. In perpetuity, this assumption
makes logical sense in that if a firm is truly in steady state, the financial statements
should be growing, by definition, at the same rate.
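Equation 43.2 in code form, with illustrative inputs:

```python
# Equation 43.2 in code; inputs are illustrative and require WACC > g.

def terminal_value(fcf_steady_state, wacc, g):
    assert wacc > g, "constant-growth formula requires WACC > g"
    return fcf_steady_state / (wacc - g)

# Steady-state FCF of 255 growing at 3% in perpetuity, discounted at a 9% WACC:
print(round(terminal_value(255.0, 0.09, 0.03), 1))   # 4250.0
```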
Discount rate: The discount rate should reflect the weighted average of
investors’ opportunity cost (WACC) on comparable investments. The WACC
matches the business risk, expected inflation, and currency of the cash flows to be
discounted. In order to avoid penalizing the investment opportunity, the WACC also
must incorporate the appropriate target weights of financing going forward. Recall that
the appropriate rate is a blend of the required rates of return on debt and equity,
weighted by the proportion of the firm’s market value they make up (Equation 43.3).
In Equation 43.2, FCFSteadyState is the steady-state expected free cash flow for the year after the final year of the cash-flow forecast; WACC is the weighted average cost of capital; and g is the expected steady-state growth rate of FCFSteadyState in perpetuity.
The costs of debt and equity should be going-forward market rates of return. For
debt securities, this is often the yield to maturity that would be demanded on new
instruments of the same credit rating and maturity. The cost of equity can be obtained
from the Capital Asset Pricing Model (CAPM) (Equation 43.4).
Equation 43.3 takes the form:

WACC = Wd × kd × (1 − t) + We × ke (43.3)

kd is the required yield to maturity on new debt; ke is the cost of equity capital; Wd and We are the target percentages of debt and equity (using market values of debt and equity); and t is the marginal tax rate.

The CAPM estimate of the cost of equity is:

ke = Rf + β(Rm − Rf) (43.4)

Rf is the expected return on risk-free securities over a time horizon consistent with the investment horizon. Most firm valuations are best served by using a long-maturity government bond yield. Rm − Rf is the expected market risk premium. This value is commonly estimated as the average historical difference between the returns on common stocks and long-term government bonds. For example, Ibbotson Associates estimated that the geometric mean return for large-capitalization U.S. equities between 1926 and 2007 was 10.4%, while the geometric mean return on long-term government bonds over the same period was 5.5%. The difference between the two implies a historical market risk premium of about 5.0%. In practice, one observes estimates of the market risk premium that commonly range from 5% to 8%.
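A short sketch combining Equations 43.3 and 43.4; the inputs are illustrative, with the market risk premium taken from the range cited above.

```python
# Equations 43.3 and 43.4 combined; illustrative inputs, with the market
# risk premium drawn from the 5%-8% range cited above.

def cost_of_equity(rf, beta, market_risk_premium):
    """CAPM (Equation 43.4): ke = Rf + beta * (Rm - Rf)."""
    return rf + beta * market_risk_premium

def wacc(wd, kd, t, we, ke):
    """Equation 43.3: after-tax cost of debt plus cost of equity, at target weights."""
    assert abs(wd + we - 1.0) < 1e-9   # target weights must sum to one
    return wd * kd * (1 - t) + we * ke

ke = cost_of_equity(rf=0.05, beta=1.2, market_risk_premium=0.06)
print(round(ke, 3))                                       # 0.122
print(round(wacc(wd=0.3, kd=0.07, t=0.35, we=0.7, ke=ke), 4))
```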
The M&A Setting
No doubt, many of these concepts look familiar. Now we must consider how they are
altered by the evaluation of a company in an M&A setting. First, we should recognize
that there are two parties (sometimes more) in the transaction: an acquirer (buyer or
bidder) and a target firm (seller or acquired). Suppose a bidder is considering the
potential purchase of a target firm and we must assess whether the target would be a
good investment. Some important questions arise in applying our fundamental concepts:
Page 543
In Equation 43.4, β (beta) is a measure of the systematic risk of a firm’s common stock. The beta of common stock includes compensation for both business and financial risk.
1. What are the potential sources of value from the combination? Does the acquirer
have particular skills or capabilities that can be used to enhance the value of the
target firm? Does the target have critical technology or other strengths that can
bring value to the acquirer?
Potential sources of gain or cost savings achieved through the combination are called
synergies. Baseline cash-flow projections for the target firm may or may not include
synergies or cost savings gained from merging the operations of the target into those of
the acquirer. If the base-case cash flows do not include any of the economic benefits
an acquirer might bring to a target, they are referred to as stand-alone cash flows.
Examining the value of a target on a stand-alone basis can be valuable for several
reasons. First, it can provide a view of what the target firm is capable of achieving on
its own. This may help establish a floor with respect to value for negotiating purposes.
Second, a stand-alone DCF valuation can be compared with the
target's current market value. This can be useful in assessing whether the target is
under- or overvalued in the marketplace. Given the general efficiency of markets,
however, it is unlikely that a target will be significantly over- or undervalued relative
to the market. Hence, a stand-alone DCF valuation allows analysts to calibrate model
assumptions to those of investors. By testing key assumptions relative to this important
benchmark, analysts can gain confidence that the model provides a reasonable guide to
investors’ perception of the situation.
2. What is the proper discount rate to use?
The discount rate used to value the cash flows of the target should compensate the
investor/acquiring firm for the risk of the cash flows. Commonly, the cost of capital of
the target firm provides a suitable discount rate for the stand-alone and merger cash
flows. The cost of capital of the target firm is generally more appropriate as a
discount rate than the cost of capital of the acquiring firm because the target cost of
capital generally better captures the risk premium associated with bearing the risk of
the target cash flows than does the cost of capital of the acquiring firm. If the target
and acquirer are in the same industry, they likely have similar business risk. Because
in principle the business risk is similar for the target and the acquirer, either one’s
WACC may be justifiably used. The use of the target’s cost of capital also assumes
that the target firm is financed with the optimal proportions of debt and equity and that
these proportions will continue after the merger.
Additional information on the appropriate discount rate can be obtained by computing
the WACCs of firms in the target’s industry. These estimates can be summarized by
taking the average or median WACC. By using the betas and financial structures of
firms engaged in this line of business, a reliable estimate of the business risk
and optimal financing can be established going forward.
Sometimes an acquirer may intend to increase or decrease the debt level of the target
significantly after the merger—perhaps because it believes the target’s current
financing mix is not optimal. The WACC still must reflect the business risk of the
target. A proxy for this can be obtained from the unlevered beta of the target firm’s
equity or an average unlevered beta for firms with similar business risk. The target’s
premerger unlevered beta must then be relevered to reflect the acquirer’s intended
postmerger capital structure.
To unlever a firm's beta, one uses the prevailing tax rate (T) and the predeal debt-to-equity ratio (D/E) of the firm associated with the levered beta estimate (β_L) to solve Equation 43.5:

β_U = β_L ÷ [1 + (1 − T)(D/E)]

Next, one uses the unlevered beta estimate (β_U), or the average unlevered beta estimate if using multiple firms, to relever the beta to the new intended debt-to-equity ratio, (D/E)_new (Equation 43.6):

β′_L = β_U × [1 + (1 − T)(D/E)_new]

The result is a relevered beta estimate (β′_L) that captures both the business risk and the financial risk of the target cash flows.
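The two-step procedure in Equations 43.5 and 43.6 can be sketched in Python. The tax rate, beta, and debt-to-equity ratios below are hypothetical illustrations, not figures from the chapter.

```python
def unlever_beta(beta_levered, tax_rate, debt_to_equity):
    # Equation 43.5: strip out the financial risk implied by the predeal
    # capital structure, leaving an estimate of pure business risk.
    return beta_levered / (1 + (1 - tax_rate) * debt_to_equity)

def relever_beta(beta_unlevered, tax_rate, new_debt_to_equity):
    # Equation 43.6: add financial risk back at the intended postmerger
    # debt-to-equity ratio.
    return beta_unlevered * (1 + (1 - tax_rate) * new_debt_to_equity)

# Hypothetical target: levered beta 1.20 at D/E = 0.50 and a 40% tax rate,
# relevered to an intended postmerger D/E of 1.00.
bu = unlever_beta(1.20, 0.40, 0.50)     # ≈ 0.92
bl_new = relever_beta(bu, 0.40, 1.00)   # ≈ 1.48
```

Relevering at a higher intended D/E produces a higher beta, and hence a higher cost of equity, reflecting the added financial risk.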
The circumstances of each transaction will dictate which of these approaches is most
reasonable. Of course, if the target’s business risk somehow changes because of the
merger, some adjustments must be made to all of these approaches on a judgment
basis. The key concept is to find the discount rate that best reflects the business and
financial risks of the target’s cash flows.
3. After determining the enterprise value, how is the value of the equity computed?
This is a straightforward calculation that relies upon the definition of enterprise value
as the value of cash flows available to all providers of capital. Because debt and
equity are the sources of capital, it follows that enterprise value (V) equals the sum of
debt (D) and equity (E) values (Equation 43.7):

V = D + E

Therefore, the value of equity is simply enterprise value less the value of existing debt
(Equation 43.8):

E = V − D

where debt is the market value of all interest-bearing debt outstanding at the time of
the acquisition. For publicly traded targets, the per-share value of the equity can be
computed by simply dividing the equity value by the number of shares of stock outstanding.
4. How does one incorporate the value of synergies in a DCF analysis?
Operating synergies are reflected in enterprise value by altering the stand-alone cash
flows to incorporate the benefits and costs of the combination. Free cash flows that
include the value an acquirer and target can achieve through combination are referred
to as combined or merger cash flows.
If the acquirer plans to run the acquired company as a stand-alone entity, as in
the case of Berkshire Hathaway purchasing a company unrelated to its existing
holdings (e.g., Dairy Queen), there may be little difference between the stand-alone
and merger cash flows. In many strategic acquisitions, however, such as the
Pfizer/Wyeth and InBev/Fujian Sedrin Brewery mergers, there can be sizable synergies.
How the value of these synergies is split among the parties through the determination
of the final bid price or premium paid is a major issue for negotiation. If the bidder
pays a premium equal to the value of the synergies, all the benefits will accrue to
target shareholders, and the merger will be a zero net-present-value investment for the
shareholders of the acquirer.

Example of the DCF Method
Suppose Company A has learned that Company B (a firm in a different industry but in a
business that is strategically attractive to Company A) has retained an investment bank
to auction the company and all of its assets. In considering how much to bid for
Company B, Company A starts with the cash-flow forecast of the stand-alone business
drawn up by Company B’s investment bankers shown in Table 43.1. The discount rate
used to value the cash flows is Company B’s WACC of 10.9%. The inputs to WACC,
with a market risk premium of 6%, are shown in Table 43.2.
TABLE 43.1 | Valuation of Company B as a stand-alone unit. (assume that Company A will allow
Company B to run as a stand-alone unit with no synergies)
TABLE 43.2 | Inputs to WACC.
On a stand-alone basis, the analysis in Table 43.1 suggests that Company B’s
enterprise value is $9.4 million.
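As a sketch of the mechanics behind a valuation like Table 43.1 (which is not reproduced here), the function below discounts a set of explicit-period free cash flows and a constant-growth terminal value. The five cash-flow figures are hypothetical; the 10.9% WACC and 5.9% terminal growth rate are the chapter's inputs.

```python
def enterprise_value(fcfs, wacc, terminal_growth):
    # Present value of the explicit-period free cash flows...
    pv_explicit = sum(fcf / (1 + wacc) ** t
                      for t, fcf in enumerate(fcfs, start=1))
    # ...plus the constant-growth terminal value at the end of the
    # forecast period, discounted back from the final year.
    terminal_value = fcfs[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal_value / (1 + wacc) ** len(fcfs)
    return pv_explicit + pv_terminal

# Hypothetical Year 1-5 free cash flows (in $ thousands).
fcfs = [420, 450, 480, 510, 535]
ev = enterprise_value(fcfs, wacc=0.109, terminal_growth=0.059)
```

Because (WACC − g) is small here, the terminal value dominates the total, which previews the chapter's later point about the weight of the terminal value in enterprise value.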
Now suppose Company A believes it can make Company B’s operations more
efficient and improve its marketing and distribution capabilities. In Table 43.3, we
incorporate these effects into the cash-flow model, thereby estimating a higher range of
values that Company A can bid and still realize a positive net present value (NPV) for
its shareholders. In the merger cash-flow model of the two firms in Table 43.3,
Company B has added two percentage points of revenue growth, subtracted two
percentage points from the COGS/Sales ratio, and subtracted one percentage point
from the SG&A/Sales ratio relative to the stand-alone model. We assume that all of the
merger synergies will be realized immediately and therefore should fall well within the
five-year forecast period. The inputs to target and acquirer WACCs are summarized in
Table 43.3.
TABLE 43.3 | Valuation of Company B with synergies.
(assume that Company B merges with Company A and realizes operational synergies)
Because Company A and Company B are in different industries, it is not appropriate
to use Company A’s WACC of 10.6% in discounting the expected cash flows. Despite
the fact that after the merger, Company B will become part of Company A, we do not
use Company A’s WACC because it does not reflect the risk associated with the merger
cash flows. In this case, one is better advised to focus on “where the money is going,
rather than where the money comes from” in determining the risk associated with the
transaction. In other words, the analyst should focus on the target’s risk and financing
(not the buyer’s risk and financing) in determining the appropriate discount rate. The
discount rate should reflect the expected risk of the cash flows being priced and not
necessarily the source of the capital.
Notice that the value with synergies, $15.1 million, exceeds the value as a
stand-alone entity by $5.7 million. In devising its bidding strategy, Company A
would not want to offer the full $15.1 million and concede all the value of the synergies
to Company B. At this price, the NPV of the acquisition to Company A is zero. The
existence of synergies, however, allows Company A leeway to increase its bid
above $9.4 million and enhance its chances of winning the auction.
Considerations for Terminal Value Estimation
In the valuation of both the stand-alone and merger cash flows, the terminal value
contributes the bulk of the total cash-flow value (if the terminal value is eliminated, the
enterprise value drops by about 75%). This relationship between terminal value and
enterprise value is typical of firm valuation because of the ongoing nature of the life of a
business. Because of the importance of the terminal value in firm valuation, the
assumptions that define the terminal value deserve particular attention.
In the stand-alone Company B valuation in Table 43.1, we estimated the terminal
value using the constant-growth valuation model. This formula assumes that the business
has reached some level of steady-state growth such that the free cash flows can be
modeled to infinity with the simple assumption of a constant growth rate. Because of
this assumption, it is important that the firm’s forecast period be extended until such a
steady state is truly expected. The terminal-value growth rate used 9 in the valuation is
Page 548
Page 549
5.9%. In this model, the analyst assumes that the steady-state growth rate can be
approximated by the long-term risk-free rate (i.e., the long-term Treasury bond yield).
Using the risk-free rate to proxy for the steady-state growth rate is equivalent to
assuming that the expected long-term cash flows of the business grow with the overall
economy (i.e., nominal expected growth rate of GDP). Nominal economic
growth contains a real growth component plus an inflation rate component,
which are also reflected in long-term government bond yields. For example, the
Treasury bond yield can be decomposed into a real rate of return (typically between 2%
and 3%) and expected long-term inflation. Because the Treasury yield for our example
is 5.9%, the implied inflation is between 3.9% and 2.9%. Over the long term,
companies should experience the same real growth and inflationary growth as
the economy on average, which justifies using the risk-free rate as a reasonable
proxy for the expected long-term growth of the economy.
Another important assumption is estimating steady-state free cash flow that properly
incorporates the investment required to sustain the steady-state growth expectation. The
steady-state free-cash-flow estimate used in the merger valuation in Table 43.3 is
$974,000. To obtain the steady-state cash flow, we start by estimating sales in
Equation 43.9:

Sales(Year 6) = Sales(Year 5) × (1 + g) = Sales(Year 5) × 1.059
Steady state demands that all the financial statement items grow with sales at the
same steady-state rate of 5.9%. This assumption is reasonable because in steady state,
the enterprise should be growing at a constant rate. If the financial statements did not
grow at the same rate, the implied financial ratios (e.g., operating margins or RONA)
would eventually deviate widely from reasonable industry norms.
The steady-state cash flow can be constructed by simply growing all relevant line
items at the steady-state growth rate as summarized in Tables 43.1 and 43.3. To estimate
free cash flow, we need to estimate the steady-state values for NOPAT, net working
capital, and net property, plant, and equipment. By simply multiplying the Year 5 value
for each line item by the steady-state growth factor of 1.059, we obtain the steady-state
Year 6 values. Therefore, to estimate the steady-state change in NWC, we use the
difference in the values for the last two years (Equation 43.10):

ΔNWC = NWC(Year 6) − NWC(Year 5) = 3,170 − 2,993 = 177
This leaves depreciation and capital expenditure as the last two components of cash
flow. These can be more easily handled together by looking at the relation between
sales and net property, plant, and equipment where NPPE is the accumulation of capital
expenditures less depreciation. Table 43.3 shows that in the steady-state year, NPPE
has increased to 11,914. The difference in NPPE gives us the net of capital expenditures
and depreciation for the steady state (Equation 43.11):

Capital Expenditures − Depreciation = NPPE(Year 6) − NPPE(Year 5) = 11,914 − 11,250 = 664
Summing the components gives us the steady-state free cash flow (Equation 43.12):

FCF(Year 6) = NOPAT − ΔNWC − (Capital Expenditures − Depreciation) = 1,815 − 177 − 664 = 974
Therefore, by maintaining steady-state growth across the firm, we have
estimated the numerator of the terminal value formula that gives us the value of
all future cash flows beyond Year 5 (Equation 43.13):

Terminal Value = FCF(Year 6) ÷ (WACC − g) = 974 ÷ (0.109 − 0.059) = 19,480
Note that we can demonstrate that the cash-flow estimation process is consistent with the steady-state growth. If we
were to do these same calculations using the same growth rate for one more year, the resulting FCF would be 5.9%
higher (i.e., 974 × 1.059 = 1,031).
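The steady-state arithmetic of Equations 43.10 through 43.13 can be reproduced from the figures quoted in the text (values in $ thousands). The Year 5 balances are backed out by dividing the Year 6 values by the 1.059 growth factor, an inference from the steady-state assumption rather than numbers stated directly in the text.

```python
g, wacc = 0.059, 0.109

nopat_6 = 1815                 # steady-state (Year 6) NOPAT
nwc_6, nppe_6 = 3170, 11914    # steady-state NWC and net PP&E

# Year 5 balances implied by steady-state growth at 5.9%.
nwc_5 = nwc_6 / (1 + g)
nppe_5 = nppe_6 / (1 + g)

delta_nwc = nwc_6 - nwc_5                      # Equation 43.10: ≈ 177
capex_less_dep = nppe_6 - nppe_5               # Equation 43.11: ≈ 664
fcf_6 = nopat_6 - delta_nwc - capex_less_dep   # Equation 43.12: ≈ 974

terminal_value = fcf_6 / (wacc - g)            # Equation 43.13: FCF ÷ (WACC − g)
```

The computed free cash flow matches the $974,000 steady-state figure quoted in the text (to rounding).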
The expression used to estimate steady-state free cash flow can be used for
alternative assumptions regarding expected growth. For example, one might also assume
that the firm does not continue to build new capacity but that merger cash flows grow
only with expected inflation (e.g., 3.9%). In this scenario, the calculations are similar
but the growth rate is replaced with the expected inflation. Even if capacity is not
expanded, investment must keep up with growth in profits to maintain a constant
expected rate of operating returns.
Finally, it is important to acknowledge that the terminal value estimate embeds
assumptions about the long-term profitability of the target firm. In the example in
Table 43.3, the implied steady-state RONA can be calculated by dividing the steady-state
NOPAT by the steady-state net assets (NWC + NPPE). In this case, the return on
net assets is equal to 12.0% [1,815 ÷ (3,170 + 11,914)]. Because in steady state the
profits and the assets will grow at the same rate, this ratio is expected to hold in
perpetuity. The discount rate of 10.9% provides a benchmark for the steady-state
RONA. Because of the threat of competitive pressure, it is difficult to justify in most
cases a firm valuation wherein the steady-state RONA is substantially higher than the
WACC. Alternatively, if the steady-state RONA is lower than the WACC, one should
question the justification for maintaining the business in steady state if the assets are not
earning the cost of capital.
Market Multiples as Alternative Estimators of Terminal Value
Given the importance attached to terminal value, analysts are wise to use several
approaches when estimating it. A common approach is to estimate terminal value using
market multiples derived from information based on publicly traded companies similar
to the target company (in our example, Company B). The logic behind a market multiple
is to see how the market is currently valuing an entity based on certain benchmarks
related to value rather than attempting to determine an entity’s inherent value. The
benchmark used as the basis of valuation should be something that is commonly valued
by the market and highly correlated with market value. For example, in the real estate
market, dwellings are frequently priced based on the prevailing price per square foot of
comparable properties. The assumption made is that the size of the house is correlated
with its market value. If comparable houses are selling at $100 per square foot, the
market value for a 2,000-square-foot house is estimated to be $200,000. For firm
valuation, current or expected profits are frequently used as the basis for relative market
multiple approaches.
Suppose, as shown in Table 43.4, that there are three publicly traded businesses that
are in the same industry as Company B: Company C, Company D, and Company E. The
respective financial and market data that apply to these companies are shown in
Table 43.4. The enterprise value for each comparable firm is estimated as the
current share price multiplied by the number of shares outstanding (equity
value) plus the book value of debt. Taking a ratio of the enterprise value divided by the
operating profit (EBIT), we obtain an EBIT multiple. In the case of Company C, the
EBIT multiple is 5.3 times, meaning that for every $1 in current operating profit
generated by Company C, investors are willing to pay $5.3 of firm value. If Company C
is similar today to the expected steady state of Company B in Year 5, the 5.3-times-
EBIT multiple could be used to estimate the expected value of Company B at the end of
Year 5, the terminal value.
TABLE 43.4 | Comparable companies to target company.
To reduce the effect of outliers on the EBIT multiple estimate, we can use the
information provided from a sample of comparable multiples. In sampling additional
comparables, we are best served by selecting multiples from only those firms that are
comparable to the business of interest on the basis of business risk, economic outlook,
profitability, and growth expectations. We note that Company E’s EBIT multiple of
8.7 times is substantially higher than the others in Table 43.4. Why should investors be
willing to pay so much more for a dollar of Company E’s operating profit than for a
dollar of Company C’s operating profit? We know that Company E is in a higher growth
stage than Company C and Company D. If Company E profits are expected to grow at a
higher rate, the valuation or capitalization of these profits will occur at a higher level or
multiple. Investors anticipate higher future profits for Company E and consequently bid
up the value of the respective capital.
Because of Company E’s abnormally strong expected growth, we decide that
Company E is not a good proxy for the way we expect Company B to be in Year 5. We
choose, consequently, not to use the 8.7-times-EBIT multiple in our terminal
value estimate. We conclude instead that investors are more likely to
value Company B’s operating profits at approximately 5.7 times (the average of 5.3 and
6.0 times). The logic is that if investors are willing to pay 5.7 times EBIT today for
operating profit of firms similar to what we expect Company B to be in Year 5, this
valuation multiple will be appropriate in the future. To estimate Company B’s terminal
value based on our average EBIT multiple, we multiply the Year 5 stand-alone EBIT of
$2.156 million by the average comparable multiple of 5.7 times. This process provides
a multiple-based estimate of Company B's terminal value of $12.2 million. This
estimate is somewhat above the constant-growth-based terminal value estimate of $11.3 million.
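The multiple-based terminal value is easy to verify (EBIT in $ millions; multiples as quoted in the text):

```python
comparable_multiples = [5.3, 6.0]   # Companies C and D; Company E (8.7x) excluded
avg_multiple = sum(comparable_multiples) / len(comparable_multiples)  # 5.65x

ebit_year5 = 2.156                  # Company B's Year 5 stand-alone EBIT
terminal_value = avg_multiple * ebit_year5   # ≈ $12.2 million
```

Excluding Company E keeps the sample matched on growth expectations, which is the point made in the preceding paragraph.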
Although the importance of terminal value motivates the use of several estimation
methods, sometimes these methods yield widely varying values. The variation in
estimated values should prompt questions on the appropriateness of the underlying
assumptions of each approach. For example, the differences in terminal value estimates
could be due to:
• A forecast period that is too short to have resulted in steady-state performance.
• The use of comparable multiples that fail to match the expected risk, expected growth, or macroeconomic conditions of the target company in the terminal year.
• An assumed constant growth rate that is lower or higher than that expected by the market.
The potential discrepancies motivate further investigation of the assumptions and
information contained in the various approaches so that the analyst can "triangulate" to
the most appropriate terminal-value estimate.
In identifying an appropriate valuation multiple, one must be careful to choose a
multiple that is consistent with the underlying earnings stream of the entity one is
valuing. For example, one commonly used multiple based on net earnings is called the
price–earnings, or P/E, multiple. This multiple compares the value of the equity to the
value of net income. In a valuation model based on free cash flow, it is typically
inappropriate to use multiples based on net income because these value only the equity
portion of the firm and assume a certain capital structure. Other commonly used
multiples that are appropriate for free-cash-flow valuation include multiples of EBITDA
(earnings before interest, taxes, depreciation, and amortization), free cash flow, and
total capital.
Although the market-multiple valuation approach provides a convenient, market-based
way of valuing businesses, there are a number of cautions worth noting:
1. Multiples can be deceptively simple. Multiples should provide an alternative way to
triangulate toward an appropriate long-term growth rate and not a way to avoid
thinking about the long-term economics of a business.
2. Market multiples are subject to distortions due to market misvaluations and
accounting policy. Accounting numbers further down the income statement (such as
net earnings) are typically subject to greater distortion than items higher on the
income statement. Because market valuations tend to be affected by business
cycles less than annual profit figures are, multiples can exhibit some business-cycle
effects. Moreover, when business profits are negative, the multiples constructed from
negative earnings are not meaningful.
3. Identifying closely comparable firms is challenging. Firms within the same industry
may differ greatly in business risk, cost and revenue structure, and growth prospects.
4. Multiples can be computed using different timing conventions. Consider a firm with
a December 31 fiscal year (FY) end that is being valued in January 2005. A trailing
EBIT multiple for the firm would reflect the January 2005 firm value divided by the
2004 FY EBIT. In contrast, a current-year EBIT multiple (a leading or forward EBIT
multiple) is computed as the January 2005 firm value divided by the expected
end-of-year 2005 EBIT. Because leading multiples are based on
expected values, they tend to be less volatile than trailing multiples. Moreover,
leading and trailing multiples will be systematically different for growing businesses.
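The timing distinction can be made concrete with a small sketch; the firm value and EBIT figures below are hypothetical.

```python
# Hypothetical January 2005 data for a December fiscal-year-end firm.
firm_value = 1000.0            # enterprise value observed in January 2005
ebit_fy2004 = 100.0            # realized FY2004 EBIT
ebit_fy2005_expected = 115.0   # consensus forecast of FY2005 EBIT

trailing_multiple = firm_value / ebit_fy2004          # 10.0x, backward-looking
leading_multiple = firm_value / ebit_fy2005_expected  # ≈ 8.7x, forward-looking

# For a growing firm, the leading multiple is systematically lower than the
# trailing multiple: the same firm value is divided by a larger profit base.
```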
Transaction multiples for comparable deals
In an M&A setting, analysts look to comparable transactions as an additional benchmark
against which to assess the target firm. The chief difference between transaction
multiples and peer multiples is that the former reflects a “control premium,” typically
30% to 50%, that is not present in the ordinary trading multiples. If one is examining the
price paid for the target equity, transactions multiples might include the Per-Share Offer
Price ÷ Target Book Value of Equity per Share, or Per-Share Offer Price ÷ Target
Earnings per Share. If one is examining the total consideration paid in recent deals, one
can use Enterprise Value ÷ EBIT. The more similarly situated the target and the more
recent the deal, the better the comparison will be. Ideally, there would be several
similar deals in the past year or two from which to calculate median and average
transaction multiples. If there are, one can glean valuable information about how the
market has valued assets of this type.
Analysts also look at premiums for comparable transactions by comparing the offer
price to the target’s price before the merger announcement at selected dates, such as 1
day or 30 days before the announcement. A negotiator might point to premiums in
previous deals for similarly situated sellers and demand that shareholders receive
“what the market is paying.” One must look closely, however, at the details of each
transaction before agreeing with this premise. How much the target share price moves
upon the announcement of a takeover depends on what the market had anticipated before
the announcement. If the share price of the target had been driven up in the days or
weeks before the announcement on rumors that a deal was forthcoming, the control
premium may appear low. To adjust for the “anticipation,” one must examine
the premium at some point before the market learns of (or begins to anticipate
the announcement of) the deal. It could also be that the buyer and seller in previous
deals are not in similar situations compared with the current deal. For example, some of
the acquirers may have been financial buyers (leveraged buyout [LBO] or private equity
firms) while others in the sample were strategic buyers (companies expanding in the
same industry as the target). Depending on the synergies involved, the premiums need
not be the same for strategic and financial buyers.
Other Valuation Methods
Although we have focused on the DCF method, other methods provide useful
complementary information in assessing the value of a target. Here, we briefly review
some of the most commonly used techniques.
Book value
Book-value valuation may be appropriate for firms with commodity-type assets valued
at market, stable operations, and no intangible assets. Caveats are the following:
• This method depends on accounting practices that vary across firms.
• It ignores intangible assets such as brand names, patents, technical know-how, and managerial competence.
• It ignores price appreciation due, for instance, to inflation.
• It invites disputes about types of liabilities. For instance, are deferred taxes equity or debt?
• The book-value method is backward-looking. It ignores the positive or negative operating prospects of the firm and is often a poor proxy for market value.
Liquidation value
Liquidation value considers the sale of assets at a point in time. This may be
appropriate for firms in financial distress or, more generally, for firms whose operating
prospects are highly uncertain. Liquidation value generally provides a conservative
lower bound to the business valuation. Liquidation value will depend on the recovery
value of the assets (e.g., collections from receivables) and the extent of viable
alternative uses for the assets. Caveats are the following:
• It is difficult to get a consensus valuation. Liquidation values tend to be highly subjective.
• It relies on a key judgment: How finely might one break up the company? Group? Division? Product line? Region? Plant? Machines?
• Physical condition, not age, will affect values. There can be no substitute for an onsite assessment of a company's real assets.
• It may ignore valuable intangible assets.
Replacement-cost value
In the 1970s and early 1980s, an era of high inflation in the United States, the U.S.
Securities and Exchange Commission required public corporations to estimate
replacement values in their 10-K reports. This is no longer the case, making this method
less useful for U.S. firms, but it is still useful for international firms for which the
requirement continues. Caveats are the following:
• Comparisons of replacement costs and stock market values ignore the possible reasons for the disparity: overcapacity, high interest rates, oil shocks, inflation, and so on.
• Replacement-cost estimates are not highly reliable and are often drawn from simplistic rules of thumb. Estimators themselves (operating managers) frequently dismiss the estimates.
Market value of traded securities
Most often, this method is used to value the equity of the firm (E) as Stock Price ×
Outstanding Shares. It can also be used to value the enterprise (V) by adding the market
value of debt (D) as the Price per Bond × Number of Bonds Outstanding. This method
is helpful if the stock is actively traded, followed by professional securities analysts,
and if the market efficiently impounds all public information about the company and its
industry. It is worth noting the following:
• Rarely do merger negotiations settle at a price below the market price of the target. On average, mergers and tender offers command a 30% to 50% premium over the price one day before the merger announcement. Premiums have been as high as 100% in some instances. Often the price increase is attributed to a "control premium." The premium will depend on the rarity of the assets sought after and also on the extent to which there are close substitutes for the technology, expertise, or capability in question; the distribution of financial resources between the bidder and target; the egos of the CEOs involved (the hubris hypothesis); or the possibility that the ex ante target price was unduly inflated by market rumors.
• This method is less helpful for less well-known companies that have thinly or intermittently traded stock. It is not available for privately held companies.
• The method ignores private information known only to insiders or acquirers who may see a special economic opportunity in the target company. Remember, the market can efficiently impound only public information.
Summary Comments
The DCF method of valuation is superior for company valuation in an M&A setting
because it:
• Is not tied to historical accounting values. It is forward-looking.
• Focuses on cash flow, not profits. It reflects noncash charges and investment inflows and outflows.
• Separates the investment and financing effects into discrete variables.
• Recognizes the time value of money.
• Allows private information or special insights to be incorporated explicitly.
• Allows expected operating strategy to be incorporated explicitly.
• Embodies the operating costs and benefits of intangible assets.
A few closing thoughts are in order:
• No valuation is "right" in any absolute sense. Virtually every number used in valuation is measured with error, either because of flawed methods to describe the past or because of uncertainty about the future.
• It is appropriate to use several scenarios about the future and even several valuation methods to limit the target's value.
• Adapt to diversity: It may be easier and more accurate to value the divisions or product lines of a target rather than to value the company as a whole. Recognize that different valuation methods may be appropriate for different components.
• Avoid analysis paralysis: Limit the value quickly. Then, if the target still looks attractive, try some sensitivity analysis.
• Beyond the initial buy/no buy decision, the purpose of most valuation analysis is to support negotiators. Knowing value boundaries and conducting sensitivity analysis enhances one's flexibility to respond to new ideas that may appear at the negotiating table.
Methods of Valuation for Mergers and Acquisitions
Description of Relationship between Multiples of Operating Profit and Constant Growth
One can show that cash-flow multiples such as EBIT and EBITDA are economically
related to the constant growth model. For example, the constant growth model can be
expressed as follows:

V = FCF ÷ (WACC − g)

Rearranging this expression gives a free-cash-flow multiple expressed in a constant
growth model:

V ÷ FCF = 1 ÷ (WACC − g)

This expression suggests that cash-flow multiples are increasing in the growth rate
and decreasing in the WACC. In the following table, one can vary the WACC and
growth rate to produce the implied multiple.
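Since the table itself is not reproduced here, a short sketch can generate an illustrative grid of implied multiples from the rearranged constant growth model; the WACC and growth-rate grid values are illustrative choices.

```python
def implied_fcf_multiple(wacc, growth):
    # Rearranged constant growth model: V / FCF = 1 / (WACC - g)
    return 1 / (wacc - growth)

waccs = [0.09, 0.10, 0.11, 0.12]
growth_rates = [0.03, 0.04, 0.05]

# Print a small grid: rows are growth rates, columns are WACCs.
print("g\\WACC  " + "  ".join(f"{w:4.0%}" for w in waccs))
for g in growth_rates:
    row = "  ".join(f"{implied_fcf_multiple(w, g):4.1f}" for w in waccs)
    print(f"{g:4.0%}    " + row)
```

The grid makes the comparative statics visible: moving right (higher WACC) shrinks the multiple, while moving down (higher growth) expands it.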
44 Medfield Pharmaceuticals
Susan Johnson, founder and CEO of Medfield Pharmaceuticals, had planned to spend
the first few weeks of 2011 sorting out conflicting recommendations for extending the
patent life of the company’s flagship product, Fleximat, which was scheduled to go off
patent in two years. With only three other products in Medfield’s lineup of medications,
one of which had only just received U.S. Food and Drug Administration (FDA)
approval, strategic management of the company’s product pipeline was of paramount
importance. But a recent $750 million offer to purchase the company had entirely shifted
her focus.
The offer was not a complete surprise. The pharmaceutical industry landscape
had changed considerably since Johnson, formerly a research scientist, had founded
Medfield 20 years earlier. Development costs were rising, patents were running out,
and new breakthroughs seemed ever more difficult to obtain. The industry was now
focused on mergers and acquisitions, restructuring, and other strategies for cost-cutting
and survival. Smaller firms like Medfield were being gobbled up by the major players
all the time. Companies with approved products or products in the later stages of
development, such as Medfield, were especially likely targets.
While she no longer owned a controlling interest in the firm and could not force a
particular decision, Johnson recognized that as CEO, founder, and largest single
investor, she would be expected to offer an opinion and that her opinion would be
extremely influential. It was also clear that determining the value of the company, and
therefore whether the offer was reasonable, would necessitate a careful review of the
company’s existing and potential future products, and no one understood these as well
as Johnson.
Of course, for Johnson, this was more than simply a financial decision. She believed
strongly, as did other employees, particularly among the research staff, that Medfield
was engaged in work that was important, and she took great pride in the firm’s
accomplishments. Medfield’s corporate culture was explicitly oriented toward the end
goal of improving patients’ health, as evidenced by its slogan: “We Bring Wellness.”
This was an important value that Johnson had consciously and specifically
built into the firm’s culture. Both of Johnson’s parents were doctors and ran a
small family-oriented practice that they had taken over from Johnson’s maternal
grandfather in the town where Johnson was raised. The idea of bettering lives through
medicine was one Johnson had grown up with.
Current Product Lines
The company had experienced excellent growth over the years and in 2009 had
290 employees, total sales of $329 million (primarily in the United States), and a net
income of $58 million. See Exhibits 44.1 and 44.2 for financial information. The
company manufactured and sold three primary drugs; all but one had substantial patent
life remaining. Two were for pain management and the third was for autoimmune
diseases. A fourth drug, also for pain management, had been approved and was ready
for distribution. Due to its strong marketing and sales force, Medfield enjoyed an
excellent reputation with both physicians and hospitals.
EXHIBIT 44.1 | Medfield Pharmaceuticals Annual Income Statement1 (in thousands of dollars)
The company’s leading seller—responsible for 64% of its revenues—was Fleximat.
Fleximat was a drug used to treat pain and swelling in patients with ulcerative colitis,
rheumatoid arthritis, and Crohn’s disease, an ongoing disorder that caused painful
inflammation of the digestive tract. Fleximat had proved to be much more effective than
competing sulfa-based drugs (such as sulfasalazine) in treating those patients—
Note: The company has negligible depreciation and amortization.
1. All exhibits were created by the case writer.
EXHIBIT 44.2 | Medfield Pharmaceuticals Balance Sheet (in thousands of dollars)