Thursday, August 27, 2009

Mergers and Acquisitions

I've been away for two months and this blog has suffered as a result.

I've realized that it isn't practical for me to maintain posts across two blogs, so I've merged them both into my amgstr.blogspot.com blog and will continue from there.

Slowly, I'll deprecate this blog.

Friday, June 26, 2009

Game Theory Foundations: Production-Possibility Frontiers

The next concept relating to efficiency is the idea of production-possibility frontiers. This is the idea that, given a limited amount of resources and accounting for systematic diminishing returns, there is some limit where you cannot create more of one product without sacrificing another. In the simplest case, this is described as a curve on a graph with two axes, each reflecting one product (a more complicated model with more products can be represented with more dimensions in a multi-dimensional model, but is beyond the scope of this discussion).

The most popular forms of this model include the Guns or Butter model as well as the efficient frontier in market portfolio theory.

In the Guns or Butter model, a government has the choice of spending its resources on foreign or defense projects (Guns) versus domestic or civilian projects (Butter). With a limited budget, there is some limit where, in order to get more of one, it has to sacrifice production of the other. Plotting the points along this curve produces the production-possibility frontier.
Excusing the violent association with the "Guns" component, it is possible to instead substitute the idea of exportable goods and globalization, which affect the balance of trade between countries (a topic for another time).
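As a rough illustration, here's a minimal Python sketch that traces a frontier for a made-up budget. The square-root output function is just a stand-in for diminishing returns (not a real production function), and all the numbers are purely hypothetical:

  import math

  RESOURCES = 100.0  # total budget, arbitrary units

  def output(allocated):
      # diminishing returns: output grows only with the square root of the input
      return math.sqrt(allocated)

  # trace the production-possibility frontier for Guns vs. Butter
  for guns_share in range(0, 101, 20):
      guns_input = RESOURCES * guns_share / 100.0
      butter_input = RESOURCES - guns_input
      print(f"guns={output(guns_input):5.2f}  butter={output(butter_input):5.2f}")

Every point printed sits on the frontier: once the whole budget is allocated, producing more Guns necessarily means producing less Butter.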

An efficient frontier in market portfolio theory describes a securities portfolio whose security weights and correlations are so efficient that the portfolio exhibits no unsystematic risk and is composed entirely of non-diversifiable risk (the capital market line, CML). Here the frontier is presented a little differently: the resource is simply the set of combinations of security betas versus their expected returns, which together form the efficient frontier (and which intersects with the investor's marginal utility curve to determine the efficient market portfolio).

It seems like everywhere we go we naturally run into these frontier limits. How can we describe them in game theory? As it turns out, there are some models which describe behaviour in different stages of game theory.

Positions on the production-possibility frontier cannot produce more of one benefit without sacrificing another. In game theory, this manifests as a zero-sum game. That is to say, for one participant to gain (shown on the production-possibility frontier as one axis), another has to lose out.

Points contained within the curve (but not on the curve) allow for gains of one benefit or the other without having to make any sacrifices. This is represented in game theory as a non-zero sum game. That is to say, that at the end of the day, it is possible for both parties to benefit positively without the expense of the other.

You'll notice that the theme here is one of efficiency. In an environment where no more benefit can be extracted from the system without additional resources (maximum efficiency), it becomes a zero-sum model. However, if production is inefficient, there is potential for a non-zero-sum model (recouping the inefficiency as non-zero-sum gains - getting more of A without compromising B).

This is also an indicator of efficiency and competition. In an industry with a maturing life cycle, transactions, deals and strategy will slowly begin to involve fewer non-zero-sum opportunities.

Game Theory Foundations: Economies of Scale or Diminishing Returns?

Before we start our foray into game theory, I'd like to do a quick review of economic efficiency framed as economies of scale versus diminishing returns for growing enterprises. In the examples provided by economics textbooks, diminishing returns are often expressed as using less efficient resources to perform the same task (which reflects a decreasing marginal utility / benefit / revenue).

However, in the abstract, we are also told about economies of scale, where adding staff and capacity can result in productivity gains beyond what is expected (Specialization of tasks and the Model T assembly line strategy). In evaluating a scenario, how can we tell which stage we are in?

The best way to understand your efficiencies is by understanding the production bottlenecks of your current factors of production. That is to say, you can fight off diminishing returns if you add resources to the weakest link of your system (similar to the idea of the critical path in a PERT chart). While adding resources indiscriminately will run into diminishing returns on any one production line, targeting the bottleneck should create synergies that recoup the drop in efficiency.

Now, if you are sensitive to seemingly meaningless buzzwords like "synergy", which tend to be overused, you'll probably recoil a little as I did upon hearing that word, so let's break it down into less abstract terms with an example.

Example: A factory has 200 workers working on 20 machines. The optimal ratio of workers to machines is 12 to 1 for the purposes of scheduling and capacity planning. Although adding the 21st machine will bring some benefit (versus not having the machine at all), it is clearly diminishing versus adding the 3rd machine. In this scenario, adding another 40 workers would help bring the worker-to-machine ratio up to its target concentration. Note that beyond that, adding another worker begins to show diminishing benefit, and the value of adding a machine starts to increase again.
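To make the bottleneck idea concrete, here's a minimal Python sketch. The output function is invented purely for illustration (a hard cutoff at the 12:1 ratio, which is cruder than reality, where the 21st machine would still add a small but diminishing benefit); only the headcounts and the 12:1 target come from the example above:

  def daily_output(workers, machines, optimal_ratio=12):
      # a machine only produces at full rate when staffed at the optimal ratio;
      # understaffed machines are effectively idle capacity
      effective_machines = min(machines, workers / optimal_ratio)
      return effective_machines  # output in "machine-equivalents", illustrative only

  base = daily_output(200, 20)
  print(daily_output(200, 21) - base)                   # 21st machine: ~no gain, workers are the bottleneck
  print(daily_output(240, 20) - base)                   # +40 workers: ratio reaches 12:1, output jumps
  print(daily_output(241, 20) - daily_output(240, 20))  # the 241st worker adds nothing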

Now assume that you have a limited number of resources to apply to your production. What mix of workers to machines will you have?
This introduces the idea of production-possibility frontiers which we will discuss in the next post.

Game Theory - An "Emerging" Field of Study and Interest

While game theory, the study of strategic interactions, has been a field of study in social psychology for some time, it has become an increasingly important field on which management science has recently begun to focus, particularly the logical analysis of decision-making processes to optimize success as defined within the model.

While the CFA Institute has introduced the idea in the Economics portion of its Level I curriculum, I thought that it would be valuable to look beyond what is presented there (including the "optional" sections) and at how game theory can be used to predict strategic-level behaviours.

First, the basics: we are familiar with the Hawk-Dove model (a game of chicken), which I used in my investment blog to describe the benefits and pitfalls awaiting investors hoping to gain first-mover advantage in the market. I've also used the Prisoner's Dilemma to describe OPEC and oil.

However, what oversimplifies these two examples of game theory is the assumption that they occur only once. That is to say, these models only look at a particular snapshot in time, whereas we know that the game continues to be played and is better modeled as a series of events rather than one discrete iteration. Also, the models used to describe these scenarios are simple 2x2 matrices. As we start to remove some of the assumptions, we will also begin to look at more flexible (and therefore complex) models.
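To see why the one-shot assumption matters, here's a minimal Python sketch of a 2x2 Prisoner's Dilemma. The payoff numbers are the conventional textbook values, chosen only for illustration (they aren't taken from the OPEC post):

  # Payoffs to (row, column) players; "C" = cooperate, "D" = defect.
  PAYOFFS = {
      ("C", "C"): (3, 3),
      ("C", "D"): (0, 5),
      ("D", "C"): (5, 0),
      ("D", "D"): (1, 1),
  }

  def best_response(opponent_move):
      # pick the row player's move with the highest payoff against a fixed opponent move
      return max("CD", key=lambda move: PAYOFFS[(move, opponent_move)][0])

  # In a single iteration, defecting dominates no matter what the opponent does...
  print(best_response("C"), best_response("D"))  # -> D D
  # ...which is exactly what a one-shot 2x2 snapshot captures, and why it misses the
  # cooperation that can emerge when the same game is repeated over many iterations.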

In the next couple of posts and into next week, I will focus on a series of posts aiming to look at more involved cases of game theory and how they can be used to model strategic business behaviour.

Thursday, June 11, 2009

Demand Elasticity - Who pays?

In high school, through university and even in the CFA, economics is an important field of study. One of my favourite topics is the concept of elasticity.

Elasticity is literally defined as the percentage change in quantity over the percentage change in price, and has several flavours (negative own-price elasticity, positive cross elasticity for substitutes, negative cross elasticity for complements, etc.)
Elasticity = %ΔQuantity / %ΔPrice
%ΔPrice = ΔPrice / Pavg = (P2 - P1) / Pavg
%ΔQuantity = ΔQuantity / Qavg = (Q2 - Q1) / Qavg
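As a quick numeric sketch of the midpoint (arc) formula above, with purely hypothetical prices and quantities:

  def arc_elasticity(p1, q1, p2, q2):
      # percentage changes are measured against the average (midpoint) price and quantity
      pct_dq = (q2 - q1) / ((q1 + q2) / 2)
      pct_dp = (p2 - p1) / ((p1 + p2) / 2)
      return pct_dq / pct_dp

  # hypothetical example: price rises from $4 to $6, quantity demanded falls from 120 to 80
  print(arc_elasticity(4, 120, 6, 80))  # -> -1.0 (unit elastic over this range)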

Mathematically, note that as you move up and down the curve, the elasticity changes because each percentage is affected by the absolute value of the average price and quantity at that point. If the average price falls, the same absolute price change becomes a larger percentage change, so the measured elasticity decreases (demand becomes less elastic as you move down the curve).

Another way of looking at elasticity is flexibility or bargaining power. If you review Michael Porter's five forces, you can see that elasticity for suppliers or customers increases their bargaining power. That is to say, the more competition and choices available, the more bargaining power a party has.

Let's look at a good which exhibits perfect inelasticity. This means that regardless of the price, consumers will always consume a constant amount (they set the quantity demanded). This manifests as a vertical demand curve.

Note that the determining factor of the price is the supply curve. If there are more suppliers, the supply curve shifts right and the price drops. If there are fewer suppliers, the supply curve shifts left and the price rises. This is similar to what happens with oil and the "prisoner's dilemma" in OPEC's oligopoly.

Next, look at a perfectly elastic curve. This means that given the slightest change in price, the consumers will dramatically change their spending habits (that is to say, that consumers set the price). This manifests as a horizontal demand curve (at the price they set).
The only power suppliers have here is to set the quantity sold (they are price takers). If there are more suppliers, the supply curve shifts right and more quantity is sold. Vice versa, if there are fewer suppliers, the supply curve shifts left and less quantity is sold.

This is important when determining how price changes will affect measures like total revenue, quantity consumed etc. This also applies regardless of whether you are talking about goods sold, wages paid, taxes paid etc.

[Example] The government is thinking of applying a tax on a good which exhibits perfectly elastic demand. Who bears the cost?
  1. The supplier
  2. The customer
  3. The supplier and the customer share the tax burden
[Solution] One way to look at this is that if the good exhibits perfectly elastic demand, then the customers have all the bargaining power. This means that if any supplier were to simply "pass along the tax" and make the consumer pay, the consumer would just go to a different supplier. This means the supplier is forced to take on all the tax. The solution is 1. Notice this also means it eats into the producer surplus.

If demand for the good were perfectly inelastic, the suppliers would have all the bargaining power, the customer would bear all the tax, and the solution would be 2. This would eat into the consumer surplus.

If demand for the good were neither perfectly elastic nor perfectly inelastic, the supplier and customer would split the tax in fractions based on who had more relative bargaining power. Both producer and consumer surplus would diminish. The solution would be 3.
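For the in-between case, a standard approximation is that the burden splits in proportion to the other side's elasticity: the less flexible party absorbs more of the tax. Here's a minimal sketch with hypothetical elasticity numbers:

  def tax_shares(demand_elasticity, supply_elasticity):
      # the relatively inelastic (less flexible) party bears more of the burden
      ed = abs(demand_elasticity)
      es = abs(supply_elasticity)
      consumer_share = es / (es + ed)
      supplier_share = ed / (es + ed)
      return consumer_share, supplier_share

  print(tax_shares(-0.5, 1.5))  # inelastic demand: consumers bear 75% of the tax
  print(tax_shares(-2.0, 0.5))  # elastic demand: suppliers bear 80% of the tax

In the limit of perfectly elastic demand the consumer share goes to zero (answer 1 above), and with perfectly inelastic demand it goes to one (answer 2).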

Tuesday, June 9, 2009

Cash Flow and Operating Cycle

I've written about cash flow with queuing theory as the lifeblood of business in my blog, as well as activity (operations) ratios in my financial profitability analysis series on my investment blog, but I wanted to review an interesting concept in the CFA Level I regarding the cash conversion cycle.

First let's do a review of the tools and topic. What affects operations from a cash flow perspective? The operating item which affects cash flow is the change in working capital, and the three accounts that drive it are Accounts Payable (AP), Accounts Receivable (AR) and Inventory (Inv). Now let's look at a standard process for how changes in each affect the operating cycle.

Order of Operations (like the BEDMAS of elementary arithmetic):
  1. Purchase supplies from vendor on credit (AP up, Inv up)
  2. Process supplies into goods for sale
  3. Sell products on credit (Inv down, AR up)
  4. Pay back supplier (AP down, cash down)
  5. Receive payment from customers (AR down, cash up)
Notice that you don't actually receive any cash until step 5, but you have to pay it back in step 4. This means that you have a negative cash flow until you complete the cycle.

Recall that in the indirect method (calculating CFO from NI):
  • If more inventory is made than sold, some "cash value" is retained in Inventory (Inv up, cash down)
  • Alternately, if more inventory is sold than made, then you are liquidating your inventory (Inv down, cash up)
  • An increase in AP means that you owe your supplier more money. This means that instead of paying with cash, you paid with credit so your cash flow goes up
  • A decrease in AP therefore means you paid back your debts
  • An increase in AR means that your customers paid you with credit so your cash flow goes down
  • A decrease in AR therefore means you were paid back (collected on sales on account)
The relationship between the Operating Cycle, DOH, DSO, Days Payables and the Cash Conversion Cycle works as follows:
Operating cycle is simply the time it takes from when you purchase supplies to when you collect the cash and is composed of two components, Days Inventory on Hand (DOH) and Days Sales Outstanding (DSO).
  • Days Inventory on Hand includes the manufacturing process as well as storage. In accounting terms, this means work-in-progress (WIP), finished goods, and the sales cycle.
  • Days Sales Outstanding is the time between sales on credit and the collection of cash.
  • Cash conversion cycle is the time between when you pay your vendor to when you yourself collect cash. It is the difference between operating cycle and Days Payables.
In looking at which company is more likely to have cash flow problems, ceteris paribus, it would be the company with the largest cash conversion cycle. That is to say, it has a low Days Payables (bills due sooner - cash out), but a long Operating Cycle (it takes a long time to produce and sell goods as well as collect on credit - cash in). So the larger the cash conversion cycle, the worse the operational and implicit structural liquidity.
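Here's a minimal Python sketch of the calculation using hypothetical balances (purchases are often approximated by COGS when a purchases figure isn't available):

  def cash_conversion_cycle(inventory, receivables, payables, cogs, revenue, purchases, days=365):
      doh = days * inventory / cogs        # Days Inventory on Hand
      dso = days * receivables / revenue   # Days Sales Outstanding
      dpo = days * payables / purchases    # Days Payables
      operating_cycle = doh + dso
      return operating_cycle, operating_cycle - dpo  # CCC = Operating Cycle - Days Payables

  # hypothetical company, figures in $ thousands
  oc, ccc = cash_conversion_cycle(inventory=400, receivables=300, payables=250,
                                  cogs=2_400, revenue=3_650, purchases=2_500)
  print(f"operating cycle = {oc:.0f} days, cash conversion cycle = {ccc:.0f} days")

The longer the cash conversion cycle, the longer the company is out of pocket between paying vendors and collecting from customers.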

Friday, June 5, 2009

Meet the Dean - Roger Martin and Integrative Thinking

I figure I'll take a short CFA study break to write about an encounter I had at the Meet the Dean session at Rotman early last week. I thought it was an invitational event for those of us who were accepted, but it turns out there were some people who were still applying, waiting for acceptances or deciding.

Dean Martin spoke about how Rotman is different from other MBA programs and for once, I was actually impressed with the Rotman presentation. This may seem kind of odd, coming from someone who has already "sampled" the proverbial kool-aid so to speak by accepting my offer letter to start in September, but the truth of the matter is that I was more sold on Rotman by my colleagues and friends currently enrolled (or graduated) than I was from the Faculty administration. I'm quite embarrassed to say that the admin simply made it seem like just an MBA program whereas my friends were raving about their experiences.

The reason I bring up this point is that Dean Roger Martin brought it up and addressed it as well. Now from ANY MBA program, you would expect some pomp and circumstance regarding why their program is so fantastic. One of the major issues facing MBA programs today is their incremental value add. For instance, there are some top schools for which recruiting companies have stated they would rather hire students who were accepted, rather than students who had graduated. The reason? Top schools which accept good candidates are simply validating their position as top performers, whereas the marginal benefit of actually attending a top school doesn't necessarily justify the exorbitant increase in salary its graduates command.

Dean Martin reframed this postulate as top schools resting on their laurels and not effecting the changes required by society in light of the financial crisis in the markets. He put up a rather simple diagram of a three-dimensional box with the dimensions described as depth, breadth and flexibility. He called the current state of MBA education shallow, narrow and static where it should be deep, broad and dynamic. I can't remember who he was quoting offhand, but he mentioned: "There aren't marketing or finance problems. Only business problems" (reflecting the interdisciplinary relationships).

He used the example of the Black-Scholes model for derivatives valuation, noting that the stated limitations of the model make it inappropriate for use in many circumstances. However, the model is widely used in ALL manner of derivatives valuations, which leaves those valuations with large vulnerabilities in their assumptions.

The punch line?

Integrative thinking is a framework which systematically creates people who ask the right questions to make the right decisions.

Monday, May 25, 2009

2 Week Hiatus

I will be going on a two week hiatus in preparation for the CFA Examination on Saturday June 6th, 2009.

However, I will continue to post on Amongst the Stars, my Investment Blog, with a focus on CFA level I examination related topics and concepts.

Wish me luck!

Friday, May 22, 2009

Business Continuity Series, pt 5 - Implementing the Plan

Once all the homework has been done understanding the relationships and inter-dependencies and once the plan has been put together, it's time to test and implement the plan.

At this stage there is a tricky conundrum. On one hand, in order to do a realistic test, planned outages are required for live services to ensure that systems will be resilient in the manner anticipated (reducing the shock of discovering additional failures during an actual disaster). However, deliberately causing outages is the last resort of any service provider.

Even in the best conditions, where service consumers are notified in advance with long lead times and everything goes according to plan, it is usually a heavily orchestrated event that consumes many non-revenue-generating resources.

Testing the plan should attempt to avoid being disruptive. As with any change management procedure, downtime should be kept to a minimum and attempts should be made to reduce the impact on live customers (usually translating into "off-peak testing", coming in late on a Saturday night or early Sunday morning).

The shutdown of highly technical and regulated services like nuclear power plants usually requires all hands on deck at the most ungodly hours of the night (colleagues of mine working in nuclear power remind me that their credo is "Never forget that you work in a very unforgiving industry").

At this stage, managers and professionals often discover more inter-dependencies, implying that their plans are either incomplete or not as robust as they had anticipated. This is where gap analysis comes into play to further develop the plans.

Even in the event of an ideal and perfect implementation of a BCP plan, there is still the requirement of ongoing vigilance. This is because, as the environment changes, assumptions that have become obsolete suddenly cause vulnerabilities to appear in the system. At this point, BCP projects evolve into ongoing BCP maintenance programs.

Thursday, May 21, 2009

Business Continuity Series, pt 4 - Building a BCP Plan

Before you can build a plan you need to understand what value you are deriving out of the system. Unfortunately, in the real world, Business Continuity Program planning is constrained by resource allocation like any other project, so understanding the value derived from the program is essential. It is possible to quantify the problem by understanding:
  • Frequency of outages
  • Average duration of outage
  • Time value of outage
  • Value of data lost
  • Opportunity cost of capital investment in plan
Total cost of outages = Frequency x Duration x Time Value

This basic consideration will give you a foundation for justifying budgeting more or less funds into your Business Continuity Program.
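As a minimal sketch of that quantification (all figures are hypothetical, and the data-loss term is simply added on top of the frequency x duration x time value formula above):

  def annual_outage_cost(outages_per_year, avg_duration_hours, value_per_hour, data_loss_value=0):
      # total cost of outages = frequency x duration x time value (+ value of data lost)
      return outages_per_year * avg_duration_hours * value_per_hour + data_loss_value

  # hypothetical exposure: 4 outages a year, 6 hours each, $20k of business value per hour,
  # plus an estimated $50k of data lost per year
  exposure = annual_outage_cost(4, 6, 20_000, 50_000)
  print(f"annual exposure: ${exposure:,.0f}")  # compare this against the cost of the BCP program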

When you've arrived at the stage where you need to choose a strategy, there are several categories of recovery strategies, each with an escalating financial and resource commitment and a proportional recovery / resiliency benefit:
  • Passive-Passive - Cold solution. New equipment may need to be ordered at the time of the event. Capital on-hand 'just-in-case'. Can be improved with planning (better use of capital). Essentially a "do nothing" solution. Probably manifests as a paper plan only with no physically available resources.
  • Active-Passive - Warm redundant systems - Literally: Turn-key or push button solutions. There is equipment ready but it is not currently in use. It is on hand and can be activated on short notice. This is usually because of technology or financial limitations.
  • Active-Active - Traffic is load balanced across multiple systems. Disrupted systems are by-passed and traffic is routed to different machines. Usually minor disruptions pass unnoticed. Only catastrophic events knocking out the entire system are noticed by users. The main concerns of an Active-Active system are costs and capacity. Problems generally only become visible when enough modules are knocked out such that the system is over capacity.
As usual, better plans usually cost more resources; however, sometimes there are non-zero-sum gains to be had. For instance, a Passive-Passive solution might be to have $5M allocated in the budget as "contingency" in the event of a disaster. Perhaps rather than have $5M budgeted as "contingency", you can employ $1M in capital expenditures to build resiliency into your processes. Although this investment will depreciate over time, it could potentially be better than keeping the capital idle and forgoing the return (internal rate of return, IRR) that the $5M could otherwise earn.

Also, systems which are heavily used or mission critical will require more active plans. For instance, if Google or 911 suffered any downtime, people would notice.

When putting together a plan there are other important considerations. For instance, is there a correlation between risk factors and support infrastructure? What is the geographical distance between my redundant systems and what is the possibility of a single event knocking out both of my systems? Understanding and process mapping all interdependencies is paramount in any BCP endeavour.

Before you think it is too unlikely, recall the power outage in the summer of 2003 which knocked out power across Ontario and the northeastern USA. If your redundant system for Toronto was located in New York (or vice versa), on the thinking that locating in a different country was enough insulation and redundancy, this event showed that it sometimes isn't enough.

Wednesday, May 20, 2009

Business Continuity Series, pt 3 - Service metrics - What are your goals?

Although we try our best to avoid failures with methodologies and goals like Six Sigma (the idea that output from processes should be contained within six standard deviations, or approximately 3.4 defects per million opportunities), there are still some failures which need to be dealt with.

In the event of a system failure, there are two key metrics which are a good indicator of resiliency: Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

RPO refers to the maximum amount of data that can be lost in a failure, measured as an age or window of time. For instance, an RPO of 24 hours for a database server means that if there is a failure (server crash, hard drive failure, building burns down), then the data that is restored is at most 24 hours old (or in other words, all data created in the last 24 hours is lost as a worst case scenario). RPO describes how current the information restored from your backups and auxiliary sources will be.

RTO refers to the length of time a process or service is unavailable (the time until service resumes). An RTO of 48 hours for cable television means that if a cable TV signal is disrupted (damaged line, transmitter failure, etc.), it will take the cable company 48 hours to restore service to your house.

The counterbalance to achieving excellent RPOs and RTOs is cost. Generally speaking, the less latency required for RPO and the less delay required for RTO, the more costly the solution, with costs rising steeply (roughly exponentially) as the targets tighten.

Using a project management framework, the RTO of system recovery is based on the critical path of recovering services (which in turn is heavily dependent on the system module with the longest RTO). And since most data is useless without a proper context, the weakest RPO in the system usually reflects the RPO of the system in general (a series relationship).
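As a minimal sketch with assumed module values (whether the restore times add up along a chain or the longest one dominates depends on the actual recovery dependencies; here I assume the services must be restored in sequence):

  # hypothetical recovery times (hours) for services restored one after another,
  # and backup ages (hours) for the data each service depends on
  module_rtos = {"network": 2, "database": 12, "application": 4}
  module_rpos = {"network": 0, "database": 24, "application": 1}

  system_rto = sum(module_rtos.values())  # sequential restore: times add up along the critical path
  system_rpo = max(module_rpos.values())  # data is only as current as the weakest (oldest) backup
  print(f"system RTO ~ {system_rto} h, system RPO ~ {system_rpo} h")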

Email Example: A consultant backs up their email every month locally on their laptop, and their office mail server experiences an outage for 3 hours. The RPO in this scenario is one month (the local backup could be up to a month old) and the RTO is however long it takes the IT staff to restore email service (3 hours).

Tuesday, May 19, 2009

Business Continuity Series, pt 2 - Parallel versus Serial Failure and Resiliency

Before we can delve into the world of business continuity, we need to understand the underlying logic of systems design and the probability mechanics of describing failure. Taking a systems approach to redundancy planning, let's look at the mathematics behind failure probabilities of parallel systems and systems in series.

First let's look at a system in series:
The system above contains three modules in series, each with an 80% success rate. Each is independent of the others. The success rate of the system is the joint probability that all three modules succeed; in other words, in order for this system to work, you must traverse all three modules successfully. The probability of success is as follows:

Success = 80% x 80% x 80% = 51.2%

Look familiar? It should. This is the exact same model I used for my post about the failure of communication between organizational levels and why smart people say stupid things, with CEOs being on the left and mid-level managers on the right.

Note that even though each individual module has a fairly high success rate (80%), each incremental potential failure compounds to reduce the overall success rate of the system. In series, all modules have to work in order for the system to work. This means that a system in series is vulnerable to single points of failure. If there is one point which goes down in the process, the whole system shuts down.

In human resources planning or even individual career development, being irreplaceable is identical to being a single point of failure.

Next let's look at a system in parallel:
The assumption here is that each module is interchangeable with any other. That is to say, if one module fails, the other modules will pick up the slack. Here each module is fairly mediocre with a 60% success rate (or a 40% failure rate). However, for the system to fail, all three modules have to fail simultaneously. The probability of that happening is the joint probability of all three failures:

Failure = 40% x 40% x 40% = 6.4%
Success = 1 - Failure = 93.6%
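Here's a quick Python sketch that reproduces both calculations, for anyone who wants to experiment with different module counts or success rates:

  from math import prod

  def series_success(success_rates):
      # in series, all modules must work: multiply the individual success rates
      return prod(success_rates)

  def parallel_success(success_rates):
      # in parallel, the system fails only if every module fails simultaneously
      return 1 - prod(1 - r for r in success_rates)

  print(series_success([0.8, 0.8, 0.8]))    # -> 0.512 (51.2%)
  print(parallel_success([0.6, 0.6, 0.6]))  # -> 0.936 (93.6%)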

Notice that even while each individual component is not of particularly good quality, when they work together to ensure success they collectively cover for each other in the event of individual failures.

This model is analogous to electrical circuits (and the idea of resistance and conductance):
  • Modules are equivalent to resistors (from the perspective of conductance), where conductance is a process channel.
  • Electrical current is work done.
  • Voltage (potential difference) is potential work waiting to be done.
Remember that the formulas for electrical circuits are analogous to those of fluid mechanics (if you come from a chemical or mechanical engineering background and feel more comfortable with those terms).
  • Modules are pipes
  • Water flow is work done
  • Pressure is potential work
With all these analogies, there are also problems associated with capacity. Although an individual failure might not disrupt a system with parallel components, if the system as a whole is operating at 90% capacity, the loss of one third of its capacity is also a serious problem (the system goes over capacity), and this will manifest in a variety of ways:
  • Unstable queue growth (work is coming in faster than you can process it)
  • Large (and growing) delay times (backlog)
  • Mechanical failures / server crashes / employee sickness (overworked)
In the next section, we will look at the goals of continuity planning, how to set goals and understand how to measure performance in an environment where an anticipated failure has occurred.

Monday, May 18, 2009

Business Continuity Series, pt 1 - Overview

Business continuity was an extremely hot topic after 9/11, as well as with current worries about avian and swine flu. The question posed is this: "How resilient are your business processes to disruptions?" Whether that be a building fire, a crashed server or the loss of key personnel due to illness, companies need to know the inter-dependencies of related systems as well as the redundancies (or lack thereof).

The next series will look at the math and mechanics of business resiliency planning:

  • Part 1. Overview (This post)
  • Part 2. Parallel versus Serial Failure and Resiliency
  • Part 3. Service metrics - What are your goals?
  • Part 4. Building a BCP Plan
  • Part 5. Implementing the Plan

What is important is to differentiate between fear mongering and understanding real business risks associated with the operating environment and taking appropriate steps to mitigate them efficiently and effectively.

The material that will be covered in this series is a combination of engineering statistics principles coupled with business continuity planning as described by the Disaster Recovery Institute (DRI) as part of the Associate and Certified Business Continuity Professional level certifications (ABCP and CBCP respectively).

Thursday, May 14, 2009

Gap Analysis and Integrative Thinking

Gap analysis is a framework enabling a management team to compare actual performance with its potential. At its core are two questions: "Where are we?" and "Where do we want to be?". In game theory, this manifests as potential non-zero-sum gains, and in economics it manifests as operating inside the production-possibility frontier (both describing inefficient processes).

Gap analysis is a logical step following common size analysis and other forms of benchmarking. It helps identify the causes of performance shortfalls and areas for improvement.

Many managers who are familiar with ISO 9000 for quality management (which implements a Plan, Do, Check, Act (PDCA) framework) will notice it has many characteristics and goals similar to gap analysis applied recursively.

Sometimes gaps are easy to quantify: our competitor's computer model has 20% more computing power. Other times they are not: our brand equity is weak relative to comparable fashion designers.

Understanding the factors that produce these distances is the first step in bridging them. The initial steps of an integrative thinking framework also have similar characteristics in using Salience, Causality, Architecture and Resolution to move from problem to solution.

Wednesday, May 13, 2009

Changing the Model - Newspapers as Not-for-profit

The death of newspapers has been an interesting topic in the news lately (notice that I couldn't find a newspaper with the article... Only another blog), and I included them in a previous post on my investment blog about the decline and bailout of the financial and automotive industries.

However, we have heard some interesting discussion about possible solutions. One in particular which I thought was rather insightful was the idea of making newspapers into not-for-profit entities. This idea comes from Jefferson's fundamental idea:
"The basis of our governments being the opinion of the people, the very first object should be to keep that right; and were it left to me to decide whether we should have a government without newspapers or newspapers without a government, I should not hesitate a moment to prefer the latter." ~Thomas Jefferson
While a good idea, there are some serious considerations:
  • Government bailout (and part ownership) of media has some potentially devastating Orwellian consequences (in the extreme, think 1984) due to conflict of interest.
  • Newspapers and media should be considered a public service as the dissemination of information is of paramount importance to the operation of a free society. (Even right wing conservative supply side economists must agree that free flowing information is a major component of the assumptions in economic theory).
  • Not-for-profit does not imply no revenue streams. Depending on the government definitions and regulations regarding NPOs, certain fees (to a cap) are excluded from taxation (sales revenue below a certain number, membership dues, etc).
  • A broken business model is a broken business model. There will be no tax to pay if there is no profit to begin with. And there will soon be no profit with declining revenue.
  • Using Profitability Analysis, there needs to be fundamental cost cutting in the way of distribution methods as well as a look at the advertising revenue streams (the bulk of the revenue is from advertising rather than subscription fees).
While a bit of fantasy, this strip from LICD highlights the drastic change in models required:

Sohmer has already received a great deal of criticism for the "practicality" of his idea, but I think what is highlighted is the dramatic nature needed to implement the change. As previously discussed, the newspaper industry finds itself in a crisis change situation.

Tuesday, May 12, 2009

PERT Charts - Project Management

Often large projects with many inter-dependencies can seem quite challenging to plan and execute. It can seem as if there is so much work to be done. While we often hear the advice to "prioritize tasks", what does that actually mean? Experienced project managers (especially those with PMP certifications from the Project Management Institute) will be familiar with Gantt and PERT charts.

Let's create a sample model to explain the methodology of project management:
  • Assume that you have five milestones to achieve: A, B, C, D and E.
  • Milestone A is your starting point
  • Milestone B is dependent on A and requires 2 hours of work to achieve
  • Milestone C is also dependent on A and requires 4 hours of work
  • Milestone D is dependent on Milestone B and takes 5 hours of work
  • Milestone E is dependent on Milestone B and requires 2 hours of work, requires 1 hour of work after Milestone C and finally requires 3 hours of work after Milestone D.
The critical path is the path that determines the minimum time required to finish the job. Assuming that you have enough resources that can work in parallel (and that you cannot add resources to reduce job times), the critical path (highlighted in red in the chart) is:
  • A --> B (2 hours)
  • B --> D (5 hours)
  • D --> E (3 hours)
That gives a total critical path time of 10 hours; there is no other path from A to E which requires more time. Note that A --> B --> E has a slack time (or float) of 6 hours (10 hours - 4 hours). This means that this path can be delayed by as much as 6 hours without an adverse effect on the project as a whole. Similarly, A --> C --> E has a slack time of 5 hours (10 hours - 5 hours).
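Here's a minimal Python sketch of the same calculation, using the durations from the example above (the paths are enumerated by hand, which is fine for a graph this small; a real tool would enumerate them from the dependency graph):

  # task durations (hours) for the milestone graph above
  durations = {("A", "B"): 2, ("A", "C"): 4, ("B", "D"): 5,
               ("B", "E"): 2, ("C", "E"): 1, ("D", "E"): 3}

  paths = [["A", "B", "D", "E"], ["A", "B", "E"], ["A", "C", "E"]]

  def path_time(path):
      return sum(durations[(a, b)] for a, b in zip(path, path[1:]))

  critical_time = max(path_time(p) for p in paths)
  for p in paths:
      t = path_time(p)
      label = "critical path" if t == critical_time else f"slack = {critical_time - t} h"
      print(" -> ".join(p), f"({t} h, {label})")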

However, this becomes more complicated if resources can be moved from one task to another to influence (shorten) the job time. In this case it is possible to use graph theory and Dijkstra's algorithm to determine which paths have relative spare capacity (the shorter paths) so that they can contribute resources to the critical path to reduce overall project time.

In the example above, resources could be taken from the following tasks and reallocated to the slowest task (B --> D) with the goal being to reduce systematic slack time across any path to 0.
  • A --> C
  • C --> E
  • B --> E
In this case, it becomes more appropriate to describe the work in each task as a quantity of man-hours.

SAT Based Example: It takes 4 people 1 hour to paint a house. How many man hours are required for 16 people to paint 8 houses?

Answer: 1 house takes 4 people one hour so each house is 4 man hours of work. 8 houses would be 32 man hours (4 man hours per house x 8 houses) of work.

32 man hours divided amongst 16 people evenly is 2 hours.

Obviously, the examples provided in this post assume that workers are interchangeable, that there are no diminishing returns of labour, etc. However, even without these assumptions, if there are large imbalances in the project paths, there is an obvious benefit to moving resources from one area to another to improve the overall project completion time.

Monday, May 11, 2009

Being Irreplaceable - The Good and The Bad

In my work with career development, I often hear stories of professionals who are quite happy with the fact that they are irreplaceable. They take it as a sure sign that they have job security. While in this economy there are few who would look a gift horse in the mouth with regards to that view, in the long run, being too irreplaceable has its downsides too.

For those who are upwardly mobile, being irreplaceable should have an upward bias. That is to say, if you are looking for a promotion, you should be looking to add value outside of your position in ways which apply to broader scopes.

Being irreplaceable at lower levels is not healthy for individuals or organizations. To have a foundation which rests on one point is extremely unstable and does no one any good. Also, if you are irreplaceable, there is a strong bias NOT to promote you. Not only is it detrimental but also selfish, as it prevents those below you from organic and professional growth as well.

[Case Study] An administrator for an NPO was promoted for her work with one of the organization's leading programs, where she was the program head. She moved into an acting director position for all similar programs while continuing to act as program head for her previous team. She had always been proud of her work, and the team celebrated the fact that there was no one on her staff who could replace her. She had years of experience and knew all the ins and outs of past and running projects, fundraising and soliciting contributions from members.

However, after she received her promotion and new responsibilities across a broader field, her previous program began to suffer. She was repeatedly called back to deal with issues and ended up spending more time at her old position than the new one, to the detriment of both. After much effort, she trained a junior team lead to take the position of program head and was finally able to focus on her new position.

[Case Study] A software programmer was developing a module for communication infrastructure. He was absolutely indispensable, as he was the only one who was able to do maintenance on the code due to legacy technology issues. He was a talented programmer whose skills could have been transferred to a bigger, more profitable project; however, because he could not be replaced, he was passed over.

Finally, when he understood the situation, he went to his manager and put forth a proposal: "If I can find a suitable replacement, will you authorize a transfer?" Upon approval from the manager, and mentoring a junior developer, the programmer was able to successfully transition to a new position.

Friday, May 8, 2009

Profitability Analysis Framework, pt 5 - Price: Elasticity and Differentiation

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Price is probably one of the most universally important characteristics of a product or service, and usually acts as its primary (and fundamentally important) defining attribute. There are different ways to structure fees and payments.

From a strictly economic point of view, price will influence other factors such as quantity supplied and demanded (your standard micro-economic curves).

Your customers' price elasticity will also affect what price you can charge, depending on their propensity to consume additional (or fewer) increments of your products.

One way to capture more consumer surplus is to differentiate your product (assuming that it is non-transferable). There are several strategies for accomplishing this task, including product differentiation and consumer segmentation. Another consideration related to differentiation is whether customers would benefit more from our product if we could customize certain characteristics. Can we achieve economies of scope in developing new product lines (leveraging technology and skill sets) and use these product lines to further refine our sales practices?

Where economics has a more difficult time modeling pricing is in luxury goods and brand equity (and this is probably more interesting as well). Looking at similar competing products in the market, some products sell at multiples of competitors' products with similar features. For instance, stores like Banana Republic will sell khakis at multiples of what is available at Gap or Old Navy (all owned under Gap Inc). Here the product line differentiation is coupled with very strong brand identities based on design and style to command higher price points for similar products. To be able to understand what the public wants and to determine the best way to appeal to your customers is the ultimate goal of sales and marketing.

For products or services which are large outlays versus the customer's income (houses, cars, etc.), the high price and required cash outlay may make the purchases inaccessible. However, with financing plans at reasonable interest, potential customers with good credit still have access to purchase these goods, whether the financing is arranged through a bank or through the company itself.

Also, for frequent purchases where there is some negotiation, it is important to look at the discounts being offered to close deals. Compensation models and sales commission structures are an important motivator for your sales staff, but they should not come at the expense of the sales team as a whole. Predatory pricing can be just as market-inefficient as collusion.

[Case Study] Airlines need to maximize the capacity of an airplane in order to make a profit, however, they can also begin to differentiate between customers as their product is generally non-transferable.

For instance, a family going on vacation or a student knows that he or she is coming home for the holidays and can therefore plan ahead and book a ticket in advance. However, a business consultant only finds out at the last minute that they need to travel to a client site the next day.

Airlines can differentiate between these two groups by charging the first group a lower rate for booking in advance while charging the consultant a premium for last minute bookings. Also, a business class passenger gets a differentiated product.

Along with the additional lead time before a purchasing decision is made, there is also more flexibility (elasticity) in the first group than the second.

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Thursday, May 7, 2009

Profitability Analysis Framework, pt 4 - Sales: Volume, Brand Equity and Positioning

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

A rather important theme that has recurred in the last few posts about fixed and variable costs is the idea of quantity sold (sales volume).

At any given price, quantity sold is directly proportional to the total revenue stream for any given product or service.
What are potential explanations for movement in your sales volume? If you find yourself losing market share, it could be either because of substitution to another product (entry by a new competitor) or general decline of the industry (less use of buggy whips). Cross elasticity of substitutes can result in lost sales if you are being undercut by a competitor. Another explanation could be a change in social trends (less hamburger consumption and more salads).

Positioning based on the questions above is of the utmost importance and is often based on the following dimensions:
  • Price as explained above
  • Quality - With different dimensions as defined by the specific product (style for clothing, processing power for computers, horse power for cars etc)
  • Availability / accessibility (consumption of cola generally goes up the more convenient it is, hence more vending machines)
  • Consumption of complementary and paired products (consuming more cola with an increase in consumption of pizza slices)
In growth opportunities, an important consideration is the geographic distribution channels and opportunistic sales. Are your customers able to get your product or service when they need it? Or are they going to your competitors? Do you have adequate points of sale to service your customers' needs? What are the hottest geographic areas in which to locate more sales capacity?

[Case Study] Malcolm Gladwell talks about Airwalk as being a company which became famous for being unconventional and targeted directly towards the skateboarding subculture of Southern California. Their advertising reflected a lifestyle which was uniquely different and had a special perceived brand equity. This allowed Airwalk to sell their shoes in boutique stores at prices that were much higher than their "competitors".

However, upon growth and expansion, when Airwalk started putting their shoes in more conventional locations (department stores, etc.), their brand quickly became diluted as being too "common" and they lost their luster of being unconventional. What had originally been ironic and trendy had become rather blasé.

Suddenly, with their brand equity diluted, customers became uninterested, and their sales numbers suffered.

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Wednesday, May 6, 2009

Profitability Analysis Framework, pt 3 - Variable Costs, Cost of Goods Sold

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Continuing the Profitability Analysis - Framework and Practice series, part 3 will look at variable costs.
Now our graph gets a little more interesting. Variable costs increase with the quantity sold (more resources such as raw material and labour are necessary for each additional unit). Also, we now have the two pieces required to put together a total cost curve (fixed cost + variable cost). Notice now that the fixed cost is the y-intercept and that the derivative (slope) of the variable cost curve (marginal cost) is the same as that of the total cost curve. These are two critical concepts in understanding how to graph and model cost curves.

Variable costs are directly related to costs of goods sold (COGS) including factors such as labour and raw materials. For any individual product line and for any given factor of production, the total product curve takes an S shape. This also implies that the variable cost curve doesn't have to be (and often isn't) perfectly linear (which also emphasizes the points made above about the slope of variable cost being identical to the slope of total cost).

Assuming managers can select workers for the job from highest margin to lowest (most useful to least), there is an accelerated growth pattern due to economies of scale. However, eventually, the law of diminishing returns is such that marginal utility approaches zero for each additional dollar of value put into the system. If you flip the x and y axes, you now have a basis for adding your variable costs against your fixed costs.

Regarding specific details for variable costs, this includes many factors of production such as materials, resources and labour. Common themes that affect this area include:
  • Cost of materials. There may be a change in the price of raw materials required for production, or there may be inventory management issues which cause losses in the form of lost productivity. Also, if numbers are too high, this could be an indication of wastage, theft or some other form of inefficiency.
  • Cost of resources such as computing power, electricity or oil. Consumption of resources at peak demand often results in inefficient and expensive hidden costs. Any ability to offset or time shift demand could dramatically cut costs and reduce systematic stress and dependency. Also if costs are seasonal or otherwise predictable, financial hedges such as oil futures can insulate the company from short term volatility (although not long term changes).
  • Increased costs for overtime pay. This could be an indication that labour capacity is too low and that there needs to be more hires. This is directly related to the seasonality of your organization and could require you to outsource or hire more workers.
[Case Study] A factory is manufacturing cars and has a policy to pay its workers 1.5x for overtime worked above 40 hours. With its current workforce and equipment, a particular factory line is able to produce a maximum of 20 cars per day. However, with a recent spike in demand for the car model produced by this factory line, production needs to increase to 25 cars per day.

While the automated equipment does not need to be augmented to meet demand (lines can simply run longer), there needs to be more human labour to satisfy demand. Should the factory authorize overtime or hire more staff?

[Solution] Looking at the solution, authorizing overtime would pay the additional labour component of cost of goods sold at a 50% premium (1.5x pay for overtime). For 5 additional cars (or an increase of 25% in output), this would result in an increase of 37.5% in the total cost of labour (25% more hours at 1.5x labour cost).

In order to get the same increase in output from hiring, they would simply need to hire 25% more labour, which (assuming no diminishing returns) would cost 25% more.

Using this logic, it's obvious that it would be better to hire more people and pay them at the stated rate than pay 1.5x for overtime.
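The arithmetic from the case, as a minimal sketch (the 20- and 25-car figures and the 1.5x premium come from the case; everything else follows from them):

  base_output = 20        # cars per day with the current workforce
  required_output = 25    # cars per day of demand
  overtime_premium = 1.5  # pay multiplier above 40 hours

  extra_fraction = (required_output - base_output) / base_output  # 25% more labour hours needed

  overtime_cost_increase = extra_fraction * overtime_premium  # 0.25 x 1.5 = 37.5%
  hiring_cost_increase = extra_fraction                       # 25%, assuming no diminishing returns

  print(f"overtime: labour cost up {overtime_cost_increase:.1%}")
  print(f"hiring:   labour cost up {hiring_cost_increase:.1%}")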

So why not just always hire more people rather than pay 'expensive' overtime? If this demand is seasonal rather than permanent, having extra capacity will be detrimental if the demand doesn't last (if the demand for these cars evaporates). In this case, you are left with having to pay the salary for employees who are idle.

The premium for overtime is justified if (1) the incremental revenue is higher than the marginal cost of overtime labour (the profitability of marginal units is positive), and (2) the demand is temporary and doesn't justify increasing the permanent labour force.

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Tuesday, May 5, 2009

Profitability Analysis Framework, pt 2 - Fixed Costs: Capacity and Investment Decisions

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Total costs are composed of fixed costs and variable costs. In this post, we will decompose total costs and focus on fixed costs. Fixed costs are composed of expenditures for which increased production does not influence total cost, and include items such as administration, sales and marketing, land and equipment. Fixed costs usually involve some form of previous investment decision (such as having financed the purchase of a factory), or other ongoing costs such as management and administration as well as sales and marketing for building brand equity.
While the diagram above is probably the most boring graph you have ever seen, we will use it as a foundation for building the rest of our framework. Note that the fixed cost is constant regardless of quantity. Investment decisions will affect how this line moves up or down on the graph.

However, looking at the individual performance of fixed costs as its own class of expenditure, the key metric with regards to fixed costs is actually not total fixed cost, but rather average fixed cost.

Average Fixed Cost = Total Fixed Cost / Quantity Produced

Since fixed cost does not change with quantity produced / sold, the only way to improve the operational advantage of a fixed cost outlay is to ensure that the resource cost (and benefit) is spread over as much of the good or service produced as possible.
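A quick numeric sketch with a hypothetical fixed cost outlay shows how quickly the average falls as volume spreads the cost:

  FIXED_COST = 1_000_000  # hypothetical annual fixed cost outlay

  for quantity in (10_000, 50_000, 100_000, 500_000):
      avg_fixed_cost = FIXED_COST / quantity
      print(f"quantity={quantity:>7}  average fixed cost=${avg_fixed_cost:,.2f}")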

The following are specific examples of how this applies to different classes of fixed cost allocations.

Land and Equipment
Usually looking at this area is a result of cost cutting measures.
  • The company is thinking of opening another plant or reducing capacity.
  • Analyzing these performance metrics (such as output versus equipment) should tell a story regarding the operational capacity of equipment. If your relative cost of equipment is too high, you might have too much capacity and you can consider divesting equipment, or leasing your capacity.
Sales and Marketing
Investigating this area is usually a result of discovering a weak profit margin or product line. Cost savings from reducing sales and marketing budgets are generally a bad idea (you can't shrink yourself to greatness).
  • Sales and marketing might be a place to focus as brand equity is dramatically inter-related to many other interesting aspects of products (such as pricing and service).
  • A high sales and marketing budget might allow for higher sales margins (or sales numbers) in the revenue side of the equation.
Management and Administration
  • If a company has a very high cost in this area versus competitors, it might be a sign of operational inefficiencies (bloated management layers, practices or compensation).
Now that we have taken a quick peek at fixed cost, tomorrow we will look at variable costs.

[Case Study] One prime example of fighting fixed costs is in the semi-conductor fabrication industry where there are only a few major players (Intel, AMD, Texas Instruments etc). Fabrication facilities have exorbitant and prohibitive capital requirements.

They also have incredibly low variable costs (each individual product of these fabrication plants is worth relatively little compared to the cost of the plant). In order to keep a good profit margin, the capacity of the plants must be running at near 99%.

While each of these three major players has high demand, their own demand alone cannot fully utilize their facilities (resulting in a high average fixed cost per product).

However, by opening their facilities to other semi-conductor designers, they are able to increase the volume and quantity coming out of their facilities (lowering their average fixed cost). They also have a schedule of production queuing to ensure that the facilities always have work to do (to ensure 99+% of capacity is always utilized).

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Monday, May 4, 2009

Profitability Analysis Framework, pt 1 - High Level Overview

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

As public corporations are profit seeking entities, one of the most common frameworks in use is the profitability analysis. Over the next week, I'll be posting about the different components that make up this analysis as well as some of the common challenges that arise and some solutions based on case studies of previous companies.

While the underlying math and mechanics of this discussion will usually be quite simple, what is more intriguing is the surprising relationships that surface as a result of an integrated and systematic analysis of the case studies. Most companies are unique (providing them with their own competitive advantages and challenges) however there are some common themes to learn from the cases which are applicable to any business environment.

We will review them as follows:

Part 1 - High Level Overview (This post)
Part 2 - Fixed Costs: Capacity and Investment Decisions
Part 3 - Variable Costs: Cost of Goods Sold
Part 4 - Sales: Volume, Brand Equity and Positioning
Part 5 - Price: Elasticity and Differentiation
Everyone is familiar with the basic formula for profit:

Profit = Revenue - Costs
(individual product lines)

Shareholders Equity = Assets - Liabilities
(symmetrical for companies at large)

Costs can be further subdivided into two categories:

Total Costs = Fixed Costs + Variable Costs

And similarly for revenue:

Revenue = Price x Quantity Demanded
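Putting these identities together, here is a minimal sketch (with hypothetical numbers) of the basic decomposition we will keep returning to in this series:

# Hypothetical numbers illustrating Profit = (Price x Quantity) - (Fixed + Variable x Quantity)
price = 25.0            # selling price per unit
quantity = 1_000        # units demanded
fixed_costs = 8_000.0   # capacity, equipment, administration
variable_cost = 12.0    # cost of goods sold per unit

revenue = price * quantity                              # Revenue = Price x Quantity Demanded
total_costs = fixed_costs + variable_cost * quantity    # Total Costs = Fixed Costs + Variable Costs
profit = revenue - total_costs                          # Profit = Revenue - Costs

print(f"Revenue: {revenue:,.0f}  Costs: {total_costs:,.0f}  Profit: {profit:,.0f}")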

While we have hardly made any groundbreaking discoveries here, what should be highlighted at this point is the ability to ask the right questions when confronted with a declining profit scenario. It is important to understand what is happening in the competitive landscape that is causing profits to decline. By quickly identifying which area should receive attention, we can systematically analyze the company's fundamentals to determine where changes are needed.

In Part 2, we will look at fixed costs and how they affect profitability through the need for further investment or for more efficient allocation of capacity.

Profitability Analysis Framework Series
[ 1. Overview, 2. Fixed Costs, 3. Variable Costs, 4. Sales, 5. Price ]

Friday, May 1, 2009

Change Management - The Subtle Difference Between Being Inert and Fickle

With the current problems faced by various companies in different sectors (automotive, newspaper, media etc), I thought it might be a good idea to look at change management and how urgency affects decision making as well as overall performance.

Change management is often an issue of hot debate amongst well meaning managers. On one side, there are those who argue "If it ain't broke, don't fix it" while others would warn that "change is needed to compete / differentiate / survive".

Often, there are good arguments to be made on both sides. This is particularly true for live systems that operate 24 hours a day. In any change management scenario involving downtime, taking live systems with large traffic loads down is best done during off-peak hours (maintenance on a subway system, telephone network, power plant etc) when the demand from those affected is at a minimum.

However, before that decision is even reached, managers will do a cost-benefit analysis and evaluate whether a proposed change should be accepted. You'll notice that, generally speaking, as time progresses the readiness for change (sense of urgency) increases, but the capability to perform the change (strategic capability) precipitously declines ("is it too late?").
The sweet spot of change management is an anticipatory change right before it becomes reactive, or in other words, making a change at the last minute so that it is ready right when it's needed (the metaphorical holy grail of change management). Why is the performance of change management so low at the beginning? Why don't we invoke changes (assumed to be improvements) right away?

Think about it in terms of discounting the required time and resource "investment" from when the change is implemented to when it becomes optimally useful. In the same vein as disruptive technologies (a term coined by Clayton M. Christensen, with Joseph L. Bower, in the 1995 article Disruptive Technologies: Catching the Wave), if you provide too much utility too fast, the capacity to efficiently utilize the resource diminishes until your capabilities and needs catch up. In this scenario, the "If it ain't broke, don't fix it" philosophy wins out.

For instance, think of fax machines. When the fax network was "young", owning an expensive fax machine didn't make sense as you could only be connected to a small network. However, as the network grew, having a fax machine connected to this larger network became critical, to the point where not having one was a detriment to your business. The same can be said of email and many other technology changes. There is usually a hefty premium to be paid for first mover advantage when it comes to technological improvements.

It is no secret that change management is essentially looking out for the long term at the expense of the short term (which is why it can often be a very difficult decision to make). And while short-term results are critical for demonstrating solvency and profitability, ignoring the horizon for too long can have disastrous effects (as we are seeing now as weak companies struggle in this economic recession).

Thursday, April 30, 2009

PEST Analysis

Continuing with different analysis frameworks such as SWOT and Porter's Five Forces, another good framework to use is PEST (which stands for Political, Economic, Social and Technological factors). Using this framework helps identify the macro-environmental factors which influence strategic management. Since PEST takes a higher-level view, it can also provide a longer lead time for understanding evolution and coming changes; however, that is counter-balanced by the fact that the farther you look into the future, the fuzzier the picture gets.

Political factors examine how and to what degree a government participates and intervenes in the economy. Specifically, political factors include areas such as tax / subsidy policy, labour laws (minimum wage, safety regulations etc), environmental regulation, trade barriers and tariffs, and political stability. Furthermore, governments have great influence on the health, education, and infrastructure of a nation.

Economic factors include macro-level economic factors such as economic growth (GDP), interest rates, exchange rates, the inflation rate and unemployment rate. For example, interest rates affect a firm's cost of capital and therefore to what extent a business grows and expands. Exchange rates affect the costs of exporting goods and the supply and price of imported goods in an economy.

Social factors include the culture, education level, health, population growth rate, and age distribution. Trends in social factors affect both internal and external factors in the company including the demand for a company's products or services (what the public in that geography demand) and how that company operates (who is available to be hired from the pool of workers).

Technological factors include R&D activity, automation, the rate of technological change as well as the current state of technology (i.e. communications infrastructure). They can determine barriers to entry (patent law) or provide strategic leverage.

By looking at all these factors, an analyst can determine the relative attractiveness of different geographies from a macro perspective. In a top-down approach to strategic thinking, a PEST analysis is a rudimentary starting point for decision making.
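As a rough sketch, the factors above can be kept in a simple checklist so that each geography under consideration gets examined consistently (the entries below are illustrative, not exhaustive):

# Illustrative PEST checklist; the factor lists are examples drawn from the discussion above.
pest_checklist = {
    "Political": ["tax / subsidy policy", "labour laws", "trade barriers and tariffs", "political stability"],
    "Economic": ["GDP growth", "interest rates", "exchange rates", "inflation", "unemployment"],
    "Social": ["culture", "education level", "population growth rate", "age distribution"],
    "Technological": ["R&D activity", "automation", "rate of technological change", "infrastructure"],
}

for category, factors in pest_checklist.items():
    print(f"{category}:")
    for factor in factors:
        print(f"  - {factor}")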

Wednesday, April 29, 2009

Urban Planning and the Irony of Mass Transit

With the focus on how Obama's administration wants to stimulate the economy by starting shovel-ready government projects with an emphasis on sustainability (combined with my experiences commuting by public transit), I thought it might be timely to look at urban planning, specifically as it relates to mass transit.

Particularly, with a mildly satirical tone, I wanted to look into the phenomenon of clustering. In other words, I wanted to answer two questions:
  1. "Why do I always seem to miss buses in pairs?", and
  2. "Every time I try to ride the bus, why do I always get the full one?"
It turns out that there are many circumstances in life in which starting earlier (or being closer to the finish) doesn't necessarily mean finishing earlier. Let's build a simple model to help us understand how fundamental mass transit capacity planning works. To see what I mean, let's assume:
  • A bus route to a main station has five equally distanced stops A, B, C, D and E.
  • The distance between stops (measured as the time to travel from one stop to the next) is 2 minutes, regardless of traffic and other factors.
  • It takes 2 minutes to load a bus at each stop regardless of the number of passengers, unless there are no passengers waiting (or the bus is full), in which case the bus travels in "express mode" and doesn't stop at all.
  • A bus can hold 50 people maximum.
  • Each stop has 15 people waiting (75 in total), so it will take 2 buses to pick up all the passengers.
Scenario i: The first bus will pick up 15 from A, 15 from B, 15 from C and 5 from D (50 total). The second bus will pick up the remaining 10 from D and the 15 from E.

Notice that whatever the interval between buses is (say 15 minutes), that interval is the minimum additional time the passengers at D and E have to wait for the second bus (on top of the normal travel time they would have had if they could have boarded Bus 1).

The travel time for each group is as follows:
Bus 1 (carrying the passengers from A, B, C and 5 from D) arrives at the terminal after 18 minutes:
Time = 2 min loading x 4 stops
+ 2 min driving x 5 segments (from A to the terminal)

Bus 2 (carrying the remaining passengers from D and E) arrives at the terminal after 29 minutes:
Time = 2 min loading x 2 stops
+ 2 min driving x 5 segments (from A to the terminal)
+ 15 minute delay between Bus 1 and Bus 2

Generally,

Travel time for any given bus = time spent picking up passengers (loading delay per stop x number of stops served)
+ time spent driving (travel time per segment x number of segments to the terminal)
+ delay between buses (the anticipated wait time for a passenger who 'just missed the bus')
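Here is a minimal sketch of that formula (the parameter names are my own) which reproduces the Scenario i numbers above:

# Travel time = loading time + driving time + delay behind the previous bus (all in minutes).
def bus_travel_time(stops_served, segments, stop_delay=2, segment_time=2, headway=0):
    pickup_time = stop_delay * stops_served    # time spent loading passengers
    driving_time = segment_time * segments     # time spent driving between stops
    return pickup_time + driving_time + headway

# Scenario i: 5 driving segments from stop A to the terminal.
print(bus_travel_time(stops_served=4, segments=5))               # Bus 1 -> 18 minutes
print(bus_travel_time(stops_served=2, segments=5, headway=15))   # Bus 2 -> 29 minutes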

Notice that in this model, a bus that follows another has a more "efficient route" once you exclude the delay between the buses (currently set at 15 minutes): Bus 2 needs only 14 minutes of loading and driving versus Bus 1's 18, so if the delay were less than 4 minutes, Bus 2 would arrive before Bus 1! This is because Bus 1 (assumed to have "first dibs" on the passengers) is held up in "transactions" picking up passengers.

Scenario ii: What happens in an incremental, step-by-step analysis (if the two buses leave at the same time) is as follows:

  1. Bus 1 picks up all passengers at A (2 min) while at the same time
    Bus 2 travels to stop B (4 min).
  2. Bus 2 picks up all passengers at B (2 min) while at the same time
    Bus 1 travels to stop C from A (4 min).
  3. Bus 1 picks up all passengers at C (2 min) while at the same time
    Bus 2 travels to stop D from B (4 min).
  4. Bus 2 picks up all passengers at D (2 min) while at the same time
    Bus 1 travels to stop E (4 min).
  5. Both buses run "express" to the terminal

Both Bus 1 and Bus 2 arrive after about 12 minutes (they share the load equally). This is what happens during non-rush hours and is what I would describe as "clustering": the phenomenon where buses (even when they start at different times) end up travelling together.

As you can tell, this is a horrible situation when it comes to urban planning. For most lines, it means that even if you deliberately stagger buses so that they are 15 minutes apart (assuming that this is also the minimum amount of time someone would have to wait between buses), the truth is that with clustering during non-rush hours the wait is more likely to be double that (because one bus will naturally catch up with the other if there isn't enough traffic to hold it back). Hence the answer to "Why do I always seem to miss buses in pairs?" is that buses have a natural tendency to cluster.

Also, borrowing from queuing and network traffic theory, you can use the analogy that each bus stop is a server node and each bus is an arrival waiting to be serviced.

As the first scenario (Scenario i) shows, the buses that lead are full. Occasionally a few people get off at later stops (rather than riding all the way to the terminal); this is the only circumstance in which a bus frees up capacity to take on more passengers (which is also why riders are asked to exit from the rear and board at the front). Hence the answer to "Every time I try to ride the bus, why do I always get the full one?" is that during rush hour most buses are full to capacity, and only buses with marginal capacity available (almost full) stop to pick up more passengers.

Now, what is described here is only an oversimplified single-line system. Imagine multiple inter-related lines with time-sensitive, daily cyclical traveller arrival patterns, complicated by traffic congestion, traffic lights, construction and other "features" interacting on the road. You certainly can't just throw more buses into the system and expect performance to improve. And we can certainly sympathize with both the traffic engineer and the person in the car in this xkcd comic:

Monday, April 27, 2009

Michael E. Porter's Five Forces - Industry Competitive Analysis

While an index like the Herfindahl-Hirschman Index (HHI) might give you a nice quantitative number describing the level of competitiveness in a given industry, a framework such as Porter's Five Forces will start to explain why this is the case.

Porter's five forces analysis looks at: the threat of substitute products, the threat of new entrants, the bargaining power of customers, the bargaining power of suppliers, and the intensity of competitive rivalry.
Another way of looking at this is as a 360-degree view around your company's position in an industry. This includes your supply chain (a vertical view of suppliers and customers) as well as your market (a horizontal view of new entrants and substitutes). Each of Porter's four other forces contributes to the overall competitive rivalry in an industry.

This helps you answer the question, "Should we start a new venture in this industry?"

Let's have a closer look at each category:

The threat of substitute products: The greater the number of substitute products, and the closer those substitutes are, the greater the propensity of customers to switch between alternatives (high elasticity of demand).
  • buyer propensity to substitute
  • relative price performance of substitutes
  • buyer switching costs
  • perceived level of product differentiation
Example: Coke and Pepsi ("brand loyalty" and propensity to substitute aside) cost about the same. In a convenience store, there is no cost to switch from one to the other, and there may be only small differentiation between the brands. The threat of substitution is high. Test this by going into a restaurant that only serves Pepsi and asking for a Coke. Chances are your server will ask "Is Pepsi OK?" (if they ask at all).

The threat of the entry of new competitors: Inefficient or overly profitable markets will attract more firms and capacity investment. More capacity (in previously under-served markets) results in decreasing profitability. Markets will always seek equilibrium, even if that equilibrium is artificially imposed by barriers.
  • the existence of barriers to entry (patents, rights, etc.) - note that the expiry of patents can trigger a new equilibrium and shifts in competitive rivalry in the industry
  • size - capital requirements and economies of scope
  • brand equity
  • access to distribution
  • learning curve advantages - required skill
  • government policies, regulations and licensing requirements
Example: Although the mass production of juice might require specialized equipment for economies of scale, individual producers (a lemonade stand) are not prevented from entering the market with smaller equipment investments. The threat of new competitors is high. Other than standard food and health regulations (FDA), there are no licenses required to produce juice.

The bargaining power of customers: Also described as the market for outputs. This is the ability of customers to put the firm under pressure, and it is also reflected in customers' sensitivity to price changes.
  • buyer concentration to firm concentration ratio
  • degree of dependency upon existing channels of distribution
  • bargaining leverage, particularly in industries with high fixed costs
  • buyer volume
  • buyer switching costs relative to firm switching costs
  • ability to backward integrate - can customers do this themselves?
  • availability of existing substitute products
  • buyer price sensitivity
  • differential advantage (inimitable characteristics) of industry products
The bargaining power of suppliers: Also described as the market for inputs. Suppliers of raw materials, components, labor, and services (such as expertise) to the firm can be a source of power over the firm. Suppliers may refuse to work with the firm or, for example, charge excessively high prices for unique resources.
  • supplier switching costs relative to firm switching costs
  • degree of differentiation of inputs
  • presence of substitute inputs
  • supplier concentration to firm concentration ratio
  • employee solidarity (e.g. labor unions)
  • threat of forward integration by suppliers relative to backward integration by firms
  • cost of inputs relative to selling price of the product (profit margins)
Example: Bread inputs include flour, eggs, etc. (highly fungible and cheap base commodities). There are many suppliers of these inputs relative to the number of firms buying them. Individual suppliers do not dominate the market and will probably not forward integrate (an egg distributor or farmer will generally have no interest in making and selling bread). The bargaining power of suppliers is low.

The intensity of competitive rivalry: For most industries, this is the major determinant of the competitiveness of the industry. Sometimes rivals compete aggressively on price, and sometimes they compete in non-price dimensions such as innovation, marketing, etc.
  • number of competitors
  • rate of industry growth
  • intermittent industry overcapacity (like the service industry)
  • exit barriers
  • diversity of competitors
  • informational complexity and asymmetry
  • fixed cost allocation per value added
Example: Cellular carriers (Canada: Rogers, Bell, Telus; US: Verizon, Sprint, AT&T) require large economies of scale for infrastructure. The industry suffers from overcapacity during off-peak hours. There are also high exit barriers (it is hard to sell off cellular infrastructure). Competitors are not particularly diverse and informational complexity is fairly low. Cellular billing (cost per minute) is fairly fixed. Competitiveness is generally high.

Each of these sections is scored and collectively analyzed to understand the competitive forces in a given industry. This framework highlights the key factors which determine an industry's overall competitive rivalry (and attractiveness). Industries which are not very competitive may be attractive for other companies to enter (or increase investment in), while industries which are overly competitive may force out weaker companies and would generally be unattractive for new ventures.
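As a minimal sketch of that scoring step (the 1-to-5 scale and the scores below are entirely illustrative), the five forces might be rolled up as follows:

# Illustrative scoring of the five forces on a 1 (weak) to 5 (strong) scale.
forces = {
    "threat of substitutes": 4,
    "threat of new entrants": 3,
    "bargaining power of customers": 2,
    "bargaining power of suppliers": 2,
    "intensity of competitive rivalry": 5,
}

overall = sum(forces.values()) / len(forces)   # simple unweighted average
for force, score in forces.items():
    print(f"{force}: {score}/5")
print(f"Overall competitive intensity: {overall:.1f}/5")
# Higher overall scores suggest a less attractive industry for a new venture.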

Sunday, April 26, 2009

How We Decide, by Jonah Lehrer

For those who have read Malcolm Gladwell's Blink, this book may seem to get off to a bit of a slow start (Lehrer also refers to the same strawberry jam experiments and makes the same points regarding how the brain works). About a fifth to a third of the book sounds familiar from the start, as it needs to set up the same framework for understanding how the mind works at unconscious levels.

However, Jonah eventually starts to venture into new ground when he begins discussing moral decision making.

He highlights the key takeaways of his book, the most important being to "think about thinking":
Whenever you make a decision, be aware of the kind of decision you are making and the kind of thought process it requires.
Specifically:
  • Simple problems require reason.
  • Complex problems (ironically) benefit from emotional decisions.
  • Novel problems require reason - analyze underlying patterns to find solutions.
  • Embrace uncertainty - deliberately entertain contrarian hypotheses to avoid discounting uncomfortable yet material facts.
  • You know more than you know (paradoxically) - emotions may often be hard to analyze, but they can provide a wealth of information if you know how to use them and when to trust them (and what their limitations are).
  • The best decision making requires both analysis and emotion, and the best decision makers take a mixed approach, knowing when to use which.
