Challenges of Bringing the Business Intelligence Tiers Together

The four core business intelligence (BI) components (Data Feeds, the Extract-Transform-Load Process, the Data Warehouse, and the Presentation Layer) come together to form a complete BI solution. Each tier plays an important role in keeping the system current and running. As you can probably guess, implementing and maintaining a system like this is fairly complex.
Very simple errors in the first tiers can ripple through the entire system, making pieces of the implementation meaningless. Developing on an existing piece of BI software is not trivial. The BI system has four complex tiers that need to communicate with each other effectively, and a business requirement to add another piece of data changes the logic in all four tiers of the implementation. The figure below shows the four tiers that make up the full BI implementation.


The BI Presentation Layer (Presentation of Knowledge)

The presentation layer is a logical tier in the architecture where business intelligence client software is used by the business users. The responsibility of these visual tools is to surface the data cleanly from a data warehouse or data mart to the user. This tier is sometimes referred to as the presentation of knowledge, as it is responsible for presenting not just data but insight in an easy-to-consume format.

In a typical BI implementation, there usually isn’t just one type of presentation software used. BI client software includes specific tools for different audiences. For example, a company executive may be interested in a high-level overview of the business and prefer looking at the data in a highly visual format such as a dashboard or a report. Conversely, a financial analyst who is very familiar with the data might prefer the power of a spreadsheet-like format, forgoing some of the simplicity of charts and graphs. This is why most BI implementations provide a mix of tools tailored not only to specific functionality but also to the audience.

Presentation tools can take many different forms, including web, desktop, and mobile.
Furthermore, they can be homegrown, custom-developed pieces of software or third-party software that sits on top of data warehouse structures. For example, Microsoft PerformancePoint Server is a product that exposes multidimensional data found in Analysis Services cubes.

The Data Warehouse

The data warehouse is a storage repository for data that is used in business intelligence (BI) software. The end result of the ETL process is a data repository that is highly optimized for analysis and querying. Data warehouses tend to hold a great deal of historical information and tend to have large storage requirements. Therefore, they are usually stored in enterprise database software (such as Microsoft SQL Server) that allows for optimal use of the server hardware.

The data warehouse can be the primary repository that communicates with BI tools in the
presentation layer or it can be used as a staging area for further data transformations. For example, from our data warehouse, we could create a set of Analysis Services cubes for multidimensional analysis or create secondary smaller data marts for reporting or querying.
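As a small illustration of the warehouse acting as a staging area, the sketch below builds a fact table in an in-memory SQLite database and derives a smaller, aggregated data mart from it. The table and column names are invented for this example; a real warehouse would live in enterprise database software.

```python
import sqlite3

# Hypothetical warehouse fact table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fact_hours (
        consultant TEXT, project TEXT, work_date TEXT, hours REAL
    )
""")
conn.executemany(
    "INSERT INTO fact_hours VALUES (?, ?, ?, ?)",
    [("Ann", "Alpha", "2010-01-04", 8.0),
     ("Ann", "Alpha", "2010-01-05", 6.5),
     ("Bob", "Beta",  "2010-01-04", 7.0)],
)

# A secondary "data mart": a smaller, pre-aggregated table for reporting.
conn.execute("""
    CREATE TABLE mart_hours_by_consultant AS
    SELECT consultant, SUM(hours) AS total_hours
    FROM fact_hours
    GROUP BY consultant
""")
rows = conn.execute(
    "SELECT consultant, total_hours FROM mart_hours_by_consultant "
    "ORDER BY consultant"
).fetchall()
print(rows)  # [('Ann', 14.5), ('Bob', 7.0)]
```

The same pattern scales up: the warehouse holds detailed history, while downstream marts or cubes hold the summarized slices that reporting tools actually query.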

Extract-Transform-Load Process

Now that we have isolated the data we want to expose in our BI system, we need a process to move it into our BI platform. This process can be implemented using a multitude of different methodologies. I will focus on a couple of them. The three data feeds make up our global source in this example. We need a process to transform the data and a destination for that transformed data.

The process of converting the data into something usable by BI software is called an extract-transform-load (ETL) process. The ETL process has a source and a destination. The data feeds are the source and the data warehouse (which I’ll talk about in detail in the next section) is the destination. The name itself gives away the three main components of an ETL process:

Extract: This refers to the action that performs the extraction of the raw data from
the data feed. For example, for a database, this could be a select statement on a
table. If the data source is an API, this could call a method that extracts all your
contractor names.

Transform: This refers to the action of transforming the data into the required layout in the data warehouse or data mart. This is where the heavy lifting of the ETL process takes place and is usually the part that takes the most time to complete. The data source is rarely in the format that we want for making BI operations easy. Therefore, it is advantageous to perform different types of transforms to prepare the structure of the data in such a way that it can be consumed inside a BI visualization without the need for these complex structural manipulations. Typically, the transform portion of ETL focuses on several main
tasks: vertical partitioning, horizontal partitioning, aggregations, and other less time-consuming tasks like sorting or splitting up tables.

Horizontal partitioning refers to filtering the data sets and stripping off unwanted rows from the data. For example, if we had information in our data feed that spanned the years 1950 to 2010 and only the last decade were relevant, we could simply avoid copying the older years to the destination.

Vertical partitioning is similar to horizontal partitioning. However, vertical partitioning strips off unwanted columns or attributes from the data. For example, if we had address information (city, state, and ZIP) for our consultants in the data feed and this was deemed not relevant to our BI solution, we could simply ignore those columns. The benefit would be that less space would be taken up in our data warehouse.

Aggregation is essentially taking related data for input and returning a single scalar result (e.g., if we wanted to sum up all the hours our consultants worked in a given time period).

Load: This refers to taking the output of the transformation step and placing it into the appropriate location in the data warehouse, which could be a database or an in-memory data structure. The transform step “massages” the data structure so that it will easily fit into the destination tables.
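The three steps can be sketched end to end in a few lines of Python. The feed rows, the column layout (modeled on the consultant example), and the in-memory "warehouse" are all hypothetical stand-ins for a real source and destination.

```python
# Hypothetical source feed: each row is (consultant, city, is_employed, hours).
feed = [
    ("Ann", "Boston",  "Yes", 40),
    ("Bob", "Chicago", "No",  35),
    ("Cat", "Denver",  "Yes", 32),
]

def extract(source):
    """Extract: pull raw rows from the feed (here, simply iterate over it)."""
    return list(source)

def transform(rows):
    """Transform: horizontally partition (drop IsEmployed == 'No' rows),
    vertically partition (drop the City column), and aggregate hours."""
    kept = [r for r in rows if r[2] == "Yes"]                    # horizontal: filter rows
    slim = [(name, hours) for name, _city, _emp, hours in kept]  # vertical: drop City
    total_hours = sum(h for _name, h in slim)                    # aggregation
    return slim, total_hours

def load(slim, total, warehouse):
    """Load: place the transformed output into the destination structure."""
    warehouse["consultant_hours"] = slim
    warehouse["total_hours"] = total

warehouse = {}
slim, total = transform(extract(feed))
load(slim, total, warehouse)
print(warehouse)
# {'consultant_hours': [('Ann', 40), ('Cat', 32)], 'total_hours': 72}
```

A production ETL tool wraps the same three stages in visual designers, logging, and restartability, but the data flow is conceptually identical.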

Figure above: Note that the example consultant entity is being horizontally partitioned (by removing the No rows from the IsEmployed column) and vertically partitioned (by removing the City column) before being transferred into the BI data warehouse.

There are many enterprise ETL tools on the market, such as SQL Server Integration Services (included in SQL Server 2005 and 2008), that provide a visual way of designing, debugging, deploying, and managing data management processes.

Data Feeds

A business intelligence (BI) system is nothing without a valid data source. When designing a BI system, we first need to determine what data we want to consume for analysis. Most organizations have various information systems that aid them in their day-to-day operations. Internal data from a system that supports the everyday operations of an organization is usually a good candidate for a BI project. Data can also come from external or even public data sources. These data sources that provide the information driving a BI implementation are referred to as data feeds.

Data feed sources can be anything that provides the required data in a well-structured format. They can be exposed in a variety of formats, such as databases, XML files, CSV files, and even API (application programming interface) service calls. There is no one-size-fits-all format type for a data feed. For example, XML files are good sources for smaller data that doesn’t change much. However, data that changes rapidly and is large might be better sourced from a database.
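To make the format variety concrete, the sketch below parses the same tiny consultant feed from CSV and from XML using only the standard library. The field names and values are invented for illustration.

```python
import csv
import io
import xml.etree.ElementTree as ET

# The same hypothetical feed delivered in two different formats.
csv_feed = "name,rate\nAnn,95\nBob,80\n"
xml_feed = "<consultants><c name='Ann' rate='95'/><c name='Bob' rate='80'/></consultants>"

# Parse each format into a common in-memory shape: (name, rate) tuples.
from_csv = [(r["name"], int(r["rate"]))
            for r in csv.DictReader(io.StringIO(csv_feed))]
from_xml = [(c.get("name"), int(c.get("rate")))
            for c in ET.fromstring(xml_feed)]

assert from_csv == from_xml == [("Ann", 95), ("Bob", 80)]
```

Normalizing every feed into one common shape early keeps the downstream ETL logic independent of how each source happens to deliver its data.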

In some cases, the BI architect might not have a choice when consuming external data. For example, if you want to consume public data from a web site that provides guidance on salary information, you may have to use that site’s API. A vendor that provides data as a service is unlikely to make available a backup of its entire database nightly; however, it is more likely to provide an external-facing API as a service.

Figure below shows three separate feeds going into our BI system. Two feeds are internal and are sourced via database repositories. In addition, we are consuming a web service API data feed on salary information for consultants. We don’t know how the data is getting there, but we have architecturally defined what data we want to consume.


Time entry and HR systems are highly transactional, with multiple updates happening every minute. Using the underlying databases directly as the source for data is not a good idea. The data feed sources need to pull data that is transactionally accurate, and pulling from a live system does not guarantee that. Furthermore, large-scale data pulls can adversely affect the performance of the underlying system.
 
Most BI implementations use a snapshot or a backup of the data taken at a given point in time. Snapshots can take the form of continuous synchronization, giving a near real-time feed of the data; alternatively, they can be taken at fixed intervals, such as monthly. This insulates the data feed from the live operations and transactions of the system.
 

Case Study: 5 Categories of BI Identified for McCormick

To represent the Business Intelligence (BI) projects of McCormick, we use the five categories of BI identified for McCormick. In actual practice, there might well be more than five projects, but five will suffice here. There are several ways to go about creating the BI opportunity map, one of which is to have one person array the projects within the opportunity map quadrants as the starting point for discussions with knowledgeable business and IT leaders and managers. If this approach is used, the initial BI opportunity map for McCormick might look like the one below:

  • All projects were judged to have a very positive business impact owing to strong alignment with company strategies and core business processes. Manufacturing BI was considered to have relatively less business impact because supply chain costs are a much higher proportion of total finished goods costs than are manufacturing costs. Financial planning and control BI was felt to provide lagging indicators, whereas product development BI, SCM BI, and customer service BI were judged to have more direct impacts on McCormick’s ability to execute its business strategies and value disciplines.
  • The projects were judged to have different risk characteristics based on the relative technical difficulty of acquiring and integrating the data needed to deliver the information from the source systems that contain the data, the availability and quality of the underlying data needed to deliver the information, and a number of organizational readiness factors.

Business and IT leaders and managers can use the initial BI opportunity map as the starting point for discussions addressing the underlying assumptions of the initial project placements, and then they can potentially adjust those placements, as illustrated below:

In the example above, the discussion of risk-reward tradeoffs resulted in a group consensus that:

  • Manufacturing BI had greater business impact and greater risk than originally perceived, as indicated by the manufacturing BI box.
  • SCM BI had greater business impact and less risk than originally perceived, as indicated by the SCM BI box.

Based on these discussions and the relative placement of the projects within the BI opportunity map, McCormick might then prioritize its BI opportunities as follows:
  1. SCM BI
  2. Product development BI
  3. Customer service BI
  4. Financial planning and control BI
  5. Manufacturing BI
These priorities would then establish the order in which the specific BI development projects would be undertaken.

The Need to Make Better Decisions

If only one term had to be used to describe the competitive business environment, it would be cutthroat. No matter the industry or its size, every company is constantly trying to gain a competitive advantage over its competitors; businesses large and small are always looking to one-up each other. One way an organization can attain an edge over its competition is by making decisions that have an increased positive impact and reduced risk.

Making the proper decision on any difficult task, such as selecting the best among alternatives, can be hard. This is amplified in business, where any decision could lead to great success or massive failure. Not having, or not being able to understand, a key piece of information could easily affect the case for selecting one decision path. Not too long ago, tough business decisions were made by long-time industry experts who had intimate knowledge of the business. These decisions were largely based on past historical or financial situations and rarely took data models into account. This led to high levels of failure, and some successful decisions could be attributed more to luck than to effective decision-making techniques.

Processes for making decisions started to involve computers in the ’60s and ’70s. As the computer revolution started making its way from academia and government projects to mainstream businesses, people started leveraging computers to do continuous number crunching. Computers could process more data, and this eliminated some of the human error factors involved with complex statistics. This is where computers have an empirical advantage over humans, as they are tailored for mathematical computations and can be harnessed to run almost 24 hours per day. However, even enterprise-level computers in those days were not even close to the power of what we are used to today. Most of them couldn’t do much more than today’s programmable scientific calculator. The early horsepower of computer systems had to be specifically tailored for basic mathematical computations on data, as anything as complex as artificial intelligence (AI) was completely out of the question.

Organizations quickly saw the benefit of having computer systems aid them in their everyday business processes. Even though the early computers weren’t that powerful, they could be used to garner vast amounts of data and perform complex business algorithms on it. The resultant data could then be used in the boardroom to shape corporate strategies via actionable decisions from executive information systems (EISs), group decision support systems (GDSSs), organizational decision support systems (ODSSs), and so on.

Decision Support Systems

The need for company executives to make better decisions and the rapid evolution of computing power led to the birth of decision support systems (DSSs). A DSS is a type of computer information system whose purpose is to support decision making processes. A well-designed DSS is an interactive software system that helps decision makers aggregate useful information from raw data, documents, and business models to solve problems and make decisions.

While these systems were first implemented in executive circles, they quickly grew to be used by trained professionals as well. Various remnants of DSS software implementations can be found everywhere from the Internet to your local bank branch. For example, when you go to a bank and apply for a loan, complex DSS software is used to determine the risk to the bank based on your financial history. This information aids the loan officer in deciding whether the bank should loan you money.
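A real loan-risk DSS is far more sophisticated, but a toy rules-based sketch conveys the idea of turning raw applicant data into decision support. The score formula, weights, and threshold here are entirely invented for illustration.

```python
def loan_risk_score(income, debt, years_of_history):
    """Toy risk score: lower is safer. Weights are illustrative, not real."""
    debt_ratio = debt / income if income else 1.0
    # Penalize high debt load; reward a longer financial history (capped).
    return debt_ratio * 100 - min(years_of_history, 10) * 2

def recommend(score, threshold=25):
    """The DSS only supports the decision; the loan officer still makes it."""
    return "refer to officer" if score >= threshold else "low risk"

s = loan_risk_score(income=80_000, debt=20_000, years_of_history=5)
print(round(s, 1), recommend(s))  # 15.0 low risk
```

The key DSS idea is visible even in this sketch: raw data is condensed into a comparable figure plus a recommendation, and the human decision maker consumes that output rather than the raw records.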

DSSs gained tremendous popularity in the late ’80s and early ’90s. The first systems that were deployed targeted large-scale organizations that needed help with large amounts of data which included the government, and the automobile and health care industries. These systems were very successful and delivered tremendous return on investment.

Early DSS projects, while largely successful, did have some challenges:
  • Customizability: DSS software did not exist in the way it does today. A vendor couldn’t simply download a tool or customize a preexisting system. Usually, these tools had to be designed and programmed from scratch.
  • Multiple vendors: Implementations of early DSSs were a mix of software, hardware, servers, networking, and back-end services. In the ’80s and early ’90s, there wasn’t a single company that could provide all of the necessary components of complex systems at once. Multiple vendors usually worked on a single project together on a single DSS implementation.
  • Uniqueness: Early DSS software was unique and often the first of its kind. This usually meant that a great deal of planning had to be done to get concepts moved from theory into a working information system. Architects and programmers in the early days of DSS couldn’t rely on how-to guides to implement a unique custom system.
  • Long deployments: Projects that included custom software and hardware from multiple vendors obviously led to implementations that took a long time to complete.
  • Expense: DSSs in the ’80s and ’90s were very expensive and easily carried budgets of tens of millions of dollars.
DSSs allowed entire organizations to function more effectively, as the underlying software powering those organizations provided insights from large amounts of data. This helped human decision makers apply data models in their own decision-making processes.

DSS software at its start was considered a luxury, as only the largest of organizations could afford its power. Since the software was custom and worked with the cooperation of multiple vendors, it was hard to apply these systems as reusable and resalable deployments. Tens of thousands of hours were invested in making these systems come to life. In the process of designing these complex systems, many innovations and great strides were made in the young software industry. These innovations were screaming to be let out into the wild and used in conjunction with other pieces of software.

The demand for DSS software was ripe and the vendors were beginning to taste the huge amounts of potential profits. If only they could make the software a little more generic and resalable, they could start selling smaller DSS implementations to a much larger audience. This idea led to applying the core innovations of complex DSS software into many smaller principles like data mining, data aggregation, enterprise reporting, and dimensional analysis. Enterprise software vendors started delivering pieces of DSS as separate application packages, and the early seeds of BI were sown.


Case Study: McCormick Driven Business Intelligence Value Creation Opportunities

Based on McCormick’s industry environment, business drivers, strategies, goals, and business design, the following Business Intelligence (BI) opportunities can be identified. Each would help McCormick improve profit and performance.

  • Product development BI. Examples include sales trends by consumer end product categories such as beverages and baked goods, sales trends by McCormick customer and by McCormick ingredient product, and gross margin and volume trends for McCormick products. 
  • Customer service BI. Examples include customer profitability trends by customer and by consumer end-product category, such as dairy products and baked goods, and customer-specific order history, including order line volumes, frequency of orders, frequency of order changes, and order fulfillment metrics. 
  • SCM BI. Examples include demand history by McCormick product and by customer, supplier scorecards for McCormick suppliers, inventory levels by McCormick product and by customer, and performance metrics such as order-tocash cycle time, order-to-ship cycle time, and percentage of perfect orders. 
  • Manufacturing BI. Examples include batch yield history by McCormick product and plant, batch cost history by McCormick product and plant, quality trends by McCormick product and plant, and batch setup and changeover time trends by McCormick product and plant. 
  • Financial planning and control BI. Examples include forecast versus actual order volume, prices, and mix by McCormick product and by customer; forecast versus actual revenues by McCormick region, product, customer, and salesperson; and forecast versus actual gross margin by McCormick product and plant.
By systematically working through the BI opportunity analysis framework, we have identified specific BI opportunities for McCormick. By investing in one or more of these BI opportunities, McCormick would have better business information and analytical tools to inform key business decisions that drive increased profits. For example, industry consolidation puts pressure on profit margins. 

McCormick has chosen to respond to this challenge by adopting a strategy of supply chain collaboration, which seeks to drive costs down by using IT and business process improvements to improve operational efficiency. Toward that end, having SCM BI and customer service BI would allow McCormick to manage the key variables and processes that determine supply chain costs, time, asset utilization, service, and quality—all of which contribute to the ability to maintain or improve gross margins in the face of margin pressures.

The McCormick BI opportunity analysis case study illustrates how your company could go about identifying actionable BI opportunities. The process does not stop there, however, as you then need to prioritize those opportunities based on business impact, risk, and project interdependencies. The next part of this BI opportunity analysis overview describes a straightforward method for prioritizing your BI opportunities.


Figure above shows the continuation of the BI opportunity analysis from the point of having identified opportunities of business-driven BI value creation to the point of having used a portfolio of BI opportunities to create a BI opportunity map. The BI opportunity map is a conceptual framework aimed at prioritizing BI opportunities based on what amounts to a risk-reward tradeoff. 

The opportunity map should not be thought of as a deterministic model, although opportunities are present to use multi-factor quantitative and/or qualitative analyses to support project placement on the business impact scale and/or the risk scale. Rather, the BI opportunity map serves as a basis for risk-reward tradeoff discussions between the business and IT leaders and managers who collectively have to sponsor, execute, and leverage the contemplated BI investments so that business value is created. To illustrate the use of the BI opportunity map, let’s continue the analysis of the McCormick case.

What Does Business Analytics Mean?

It’s quite easy to imagine a bank that runs all its customer processes and dialogue programs entirely without using IT—and what really hard work that would be. The point here is, of course, that you can have business analytics (BA) without deploying software and IT solutions. At a basic level, that has been done for centuries, but today, it just wouldn’t stack up. In this book, we look at BA as information systems, consisting of three elements:
  1. The information systems contain a technological element, which will typically be IT-based, but which in principle could be anything from papyrus scrolls and yellow sticky notes to clever heads with good memories. A characteristic of the technological element is that it can be used to collect, store, and deliver information. In the real world, we’re almost always talking about electronic data, which can be collected, merged, and stored for analysts, or delivered through so-called front-end systems to end users. A front-end is the visual presentation of information and data to a user. This can be a sales report in HTML format or graphs in a spreadsheet. A front-end system is thus a whole system of visual presentations and data.
  2. Human competencies form part of the information systems, too. Someone must be able to retrieve data and deliver it as information in, for instance, a front-end system, and analysts must know how to generate knowledge targeted toward specific decision processes. Even more important, those who make the decisions, those who potentially should change their behavior based on the decision support, are people who must be able to grasp the decision support handed to them.
  3. Finally, the information systems must contain some specific business processes that make use of the information or the new knowledge. A business process could be how you optimize inventory or how you price your products. After all, if the organization is not going to make use of the created information, there’s no reason to invest in a data warehouse, a central storage facility that combines and optimizes the organization’s data for business use.
The considerable investment required to establish a data warehouse must render a positive return for the organization through improved organization-wide decision making. If this doesn’t happen, a data warehouse is nothing but a cost that should never have been incurred. An information system is therefore both a facility (for instance a data warehouse, which can store information) as well as competencies that can retrieve and place this information in the right procedural context.

When working with BA, it is therefore not enough to just have an IT technical perspective—that just means seeing the organization as nothing but a system technical landscape, where you add another layer. It is essential to look at the organization as a large number of processes. For instance, the primary process in a manufacturing company will
typically consist of purchasing raw materials and semi-manufactured products from suppliers, manufacturing the products, storing these and selling them on. In relation to this primary process there are a large number of secondary processes, such as repairing machinery, cleaning, employing and training staff, and so on.

Therefore, when working with BA, it is essential to be able to identify which business processes to support via the information system, as well as to identify how added value is achieved. Finally, it’s important to see the company as an accumulation of competencies, and to support the information system by identifying and training staff, some of whom undertake the technical solution, and others who can bridge the technical and the business-driven sides of the organization, with a focus on business processes. In terms of added value, this can be achieved in two ways: by an improved deployment of the input resources of the existing process, which means that efficiency increases, or by giving the users of the process added value, which means that what comes out of the process will have increased user or customer satisfaction.

In other words, successful deployment of BA requires a certain level of abstraction. This is because it’s necessary to be able to see the organization as a system technical landscape, an accumulation of competencies as well as a number of processes and, finally, to be able to integrate these three perspectives into each other. To make it all harder, the information systems must be implemented into an organization that perceives itself as a number of departments with different tasks and decision competencies and that occasionally does not even perceive them as being members of the same value chain.

    Case Study: McCormick Opportunity Analysis

    McCormick, a $2 billion company, is commonly known as a manufacturer of food and beverage ingredients. Based solely on publicly available information, this section shows you step by step how McCormick might analyze its Business Intelligence (BI) opportunities and apply that analysis to improve its profits and operating effectiveness. McCormick sells materials to food and beverage processors, which, after processing, sell them to retailers of food and beverages. The food industry is a mature, fragmented, international industry that is undergoing substantial structural changes typical of industry evolution.

    Changes in the food and beverage retailing industry affect the food and beverage processing industry (McCormick’s customers). The resulting changes to the food and beverage processing industry affect McCormick’s business. From a BI strategy perspective, this case study is most interested in changes that affect McCormick’s customers and how they make money. The nature and extent of those changes may create opportunities for McCormick to use BI to its strategic and competitive advantage.

    Evolution of McCormick’s Relevant Industries
    Three key industries are relevant to McCormick’s BI planning: the food and beverage retail industry, the food and beverage processing industry, and the food and beverage ingredients industry. The food and beverage retail industry has historically been fragmented and regional. By 1999, however, it had become increasingly concentrated and global. In the United States, the top 10 supermarket players generated 33% of industry sales in 1995, but by 1999 that figure stood at 45%, not counting Wal-Mart’s 12% market share.

    By 2004, the industry structure had the top 10 players holding between 55% and 70% of the market. Overall, the industry is a mature, consolidating, slow-growth industry with intense competition based on price, which means food and beverage processors receive pressure from the retailers to reduce prices, improve supply chain effectiveness, and differentiate themselves on more than just brand image. It is typical that mature industries spawn more aggressive competition based on cost and service, and that is certainly the model that Wal-Mart has used effectively in the consumer packaged goods industry. It is also typical that profits in the mature industry often fall, sometimes permanently.

    The food and beverage processing industry is affected by trends at the retail level. The McCormick company’s primary interest is in identifying the major trends and their likely impact on the bases of competition in its ingredients businesses. This will suggest potential areas where McCormick can leverage BI.

    The balance of power between food retailers and food processors shifted from 1995 to 2005 in favor of the retailers, which continues to put pressure on pricing and profits. The food processing companies most threatened by retailer consolidation are those with lower-ranking brands. In addition, slow domestic economic growth has intensified competition, motivated global expansion, and driven business process reengineering projects seeking improved margins.

    Many industry leaders spent the mid-90s engaged in cost-cutting initiatives and backward integration into the ingredients industry, and such initiatives have returned as much benefit as they are likely to in the short term. Thus, the food and beverage processing industry is consolidating, which increases buyer power in relation to McCormick and its competitors. Looking forward, McCormick can continue to expect pricing pressures and demands for increased efficiency as its customers seek to maintain their own profitability in the face of slow growth and retailer consolidation.

    The food and beverage ingredients industry is similar in structure to that of the related downstream industries: mature, slow growth, fragmented, and increasingly global. Faced with increasing customer power owing to concentration and supplier consolidation programs, price pressures due to customer industry dynamics, and the threat of backward integration, ingredients industry firms are themselves merging in an attempt to maintain some balance of power.

    Although the overall growth rate of the ingredient industry is low, opportunities for growth in excess of the industry average are present. Industry players segment the market into what might be called macro-categories, for example, beverages, baked goods, dairy, candy/confection, and snack foods. These macro-categories have different growth rates, different leading brands, and different rates of new product development, all of which contribute to different opportunity profiles and growth potential.

    Consistent with this overall environment, McCormick has successfully executed a strategy that is at once focused, differentiated, and based on cost leadership. McCormick is focused because it is only in the ingredients business. It is differentiated because its customer-based product development paradigm was at one time a singular position in the industry, and because it offers a broader product line than its competitors. The McCormick strategy is also based on cost leadership because it consistently focuses on margin improvement, global sourcing, and supply chain management (SCM) as means to achieve low-cost producer status.

    Summary of Food Industry Drivers and Trends
    Given the multiple levels of consolidation in the industry, each customer relationship takes on increased importance. At the same time, it’s also imperative to improve costs, pricing, customer selection, and customer revenue management. This suggests that growth and profitability could be enhanced by effective use of BI that supports those objectives. It also suggests that customer-focused business strategies and operating policies will be at least as important as, and probably more important than, they have traditionally been. Accordingly, BI capabilities that promote top-caliber customer service and make it easy to do business with McCormick are also important. A summary of the food industry drivers and trends is shown below:

    [Image: summary of food industry drivers and trends]

    Application of the Business Intelligence Opportunity Analysis Framework at McCormick
    Working with the publicly available facts described above, the BI opportunity analysis framework can be applied to systematically identify specific opportunities to use BI to improve profits at McCormick. Both top-down and bottom-up BI opportunity analysis techniques can be used. Although top-down techniques begin with a strategic view and work down into an operational view, many business users are more comfortable discussing operational priorities. In this case, bottom-up techniques are used to discuss BI in relation to business processes and to determine how it can be used to support business strategies and the achievement of organizational goals and objectives. The analytical results, abbreviated for the sake of illustration, might look like this:

    Business Drivers:
    • Consolidation
    • Wal-Mart factor
    • Increased pricing pressures
    • Slow growth
    • Global expansion
    • IT as a competitive weapon
    McCormick Business Strategies, Goals, and Objectives:
    • Retain/increase revenue and market share through developing a broad line of differentiated products and services.
    • Reduce costs and improve service through strengthening supply chain collaboration and improving sales forecasting.
    • Improve profits by utilizing customer segmentation approaches to identify the most profitable customers and retain these customers by providing high-quality, differentiated service and support.
    • Preserve margins by refining pricing strategy to determine the potential short-term and long-term cost/benefit of adjusting prices for different customers and segments; make pricing decisions based on cost/benefit analysis.

    McCormick Business Design
    Value Disciplines
    • Customer knowledge
    • Consumer-focused product development
    • Leveraged IT
    • Continuous process improvement
    • Niche focus
    Core Business Processes
    • Product development
    • Customer service
    • Supply chain management (SCM)
    • Manufacturing
    • Financial planning and control

    Using Business Intelligence to Capture Business Value

    In economic terms, the business value of an investment (an asset) is the net present value of the after-tax cash flows associated with the investment. For example, the business value of an investment in a manufacturing plant is the sum of the incremental after-tax cash flows associated with the sale of the products produced at the plant. Similarly, an investment in BI creates an asset that must be used to generate incremental after-tax cash flow. Accordingly, BI investments should be subjected to a rigorous assessment of how the investment will result in increased revenues, reduced costs, or both.

    Although there are hundreds of ways to express business benefits, no business value is associated with an investment unless the benefits achieved result in increased after-tax cash flows. Likewise, there is no business value associated with an investment unless the benefits achieved connect to strategic goals. For business, the focus is primarily on increased after-tax cash flows; for government agencies, it is improved performance and service to citizens. These principles apply equally to investments in factories, equipment, and BI.

    For example, it is common for BI vendor value propositions to emphasize business benefits such as agility, responsiveness, customer intimacy, information sharing, flexibility, and collaboration. But investing in BI to achieve such business benefits may actually destroy business value unless those attributes can be defined in operational terms and realized through business processes that affect revenues or costs. For example, a $2 million investment in a BI application must result in incremental after-tax cash flow of at least $2 million or the organization will suffer a reduction in assets.
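
    The $2 million threshold above is a net present value (NPV) test. A minimal sketch of that arithmetic follows; the 10% discount rate and the annual cash-flow figures are hypothetical, chosen only to illustrate the example:

```python
def npv(rate, cash_flows):
    """Net present value: discounts each after-tax cash flow back to t=0.
    cash_flows[0] is the initial investment (negative) at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical case: a $2M BI investment yielding $700k of incremental
# after-tax cash flow per year for four years, discounted at 10%.
flows = [-2_000_000] + [700_000] * 4
print(round(npv(0.10, flows), 2))  # positive NPV -> the asset adds value
```

    If the discounted benefits came in below $2 million, the NPV would be negative and, in the authors' terms, the investment would reduce the organization's assets.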

    To illustrate this point, many companies use BI to improve customer segmentation, customer acquisition, and customer retention. These improvements can be linked to reduced customer acquisition costs, increased revenues, and increased customer lifetime value, which translate to increased after-tax cash flows. However, a BI investment that improves demand forecasting will not deliver business value unless the forecasts are actually incorporated into operational business processes that then deliver reduced inventory, reduced order expediting costs, or some other tangible economic benefit. In other words, the business benefit “improved forecasting” is useless unless it is somehow converted into incremental after-tax cash flow.

    Looked at more broadly, the quest for delivering business value via BI can be seen as a matter of determining how an organization can use BI to:
    • Improve management processes (such as planning, controlling, measuring, monitoring, and/or changing) so that management can increase revenues, reduce costs, or both
    • Improve operational processes (such as fraud detection, sales campaign execution, customer order processing, purchasing, and/or accounts payable processing) so that the business can increase revenues, reduce costs, or both


    The Origins of Business Intelligence

    Now that we have a better understanding of what BI is, let’s take a brief look at its origins. This examination will help show where BI fits with other parts of the IT portfolio, such as enterprise transactional applications like enterprise resource planning (ERP), and will help differentiate BI uses from other IT uses. It’s also important to understand that enabling BI technologies are mature, low-risk technologies that have been used successfully by major companies for more than a decade.

    Although the term BI has recently become one of the new IT buzzwords, the organizational quest for BI is not new. Approaches to BI have evolved over decades of technological innovation and management experience with IT. Two early examples of BI are:

    1. Decision support systems (DSSs): Since the 1970s and 1980s, businesses have used business information and structured business analysis to tackle complex business decisions. Examples include revenue optimization models in asset-intensive businesses such as the airline industry, the hotel industry, and the logistics industry, as well as logistics network optimization techniques used in industries that face complex distribution challenges. DSSs range from sophisticated, customized analytical tools running on mainframe computers to spreadsheet-based products running on personal computers. DSSs vary enormously in price and sophistication and are application-specific. Accordingly, they have not systematically addressed integration and delivery of business information and business analyses to support the range of BI opportunities available to companies today.
    2. Executive information systems (EISs): These were an early attempt to deliver the business information and business analyses to support management planning and control activities. Principally used on mainframes and designed only for use by upper management, these systems were expensive and inflexible. As BI applications and high-performance ITs have come to market, EIS applications have been replaced and extended by BI applications such as scorecards, dashboards, performance management, and other “analytical applications.” These applications combine business information and business analyses to provide custom-built and/or packaged BI solutions.



    What Is Business Intelligence?

    If that’s what BI is not, then what is it? BI combines products, technology, and methods to organize key information that management needs to improve profit and performance. In particular, BI is not:
    • A single product. Although many excellent products can help you implement BI, BI is not a product that can be bought and installed to solve all your problems “out of the box.”
    • A technology. Although DW tools and technologies such as relational databases, ETL tools, BI user interface tools, and servers are typically used to support BI applications, BI is not just a technology.
    • A methodology. Although a powerful methodology (such as our BI Pathway) is essential for success with BI, you need to combine that methodology with appropriate technological solutions and organizational changes.
    More broadly, we think of BI as business information and business analyses within the context of key business processes that lead to decisions and actions and that result in improved business performance. In particular, BI means leveraging information assets within key business processes to achieve improved business performance. It involves business information and analysis that:
    • Are used within the context of key business processes
    • Support decisions and actions
    • Lead to improved business performance


    For business, the primary focus is to increase revenues and/or reduce costs, thereby improving performance and increasing profits. For the public sector, the primary focus is service to citizens, coping with budget constraints, and using resources wisely in support of an agency’s mission.

    Toyota: From Excel to Business Intelligence

    This article illustrates a typical case in which information flow could not meet the needs of managers. Information was late, sometimes inaccurate, and not shared by all. The old system could not support fast decision making, evaluation of large amounts of information stored in different locations, or collaboration. The solution is a technology called business intelligence (BI), which is based on a data warehouse and provides a strategic advantage.

    Problem
    Toyota Motor Sales USA (TMS) sells its vehicles to Toyota dealers across the USA. Vehicles used to take 9 to 10 days in transit, and an average vehicle cost about $8 per day to keep while in transit, a carrying charge of $72 to $80 per car. At 2 million cars per year, the cost to the company was $144 to $160 million. The company’s supply chain, operations, and car-keeping problems mounted in the late 1990s. Disappointed by the inability to deliver cars to the dealers, unhappy customers purchased cars from competitors such as Honda. Competition intensified in 2003 and 2004 when Honda introduced hybrid cars. TMS managers used computers that generated huge numbers of directionless reports and data, yet were unable to use such data and reports strategically. Internal departments regularly failed to share information, or did so too slowly. Actionable reports were often produced too late, and overlapping reporting systems provided data that were not always accurate. Managers could not make timely decisions because they were not certain what portion of the data was accurate. The situation was especially dire in the Toyota Logistics Services (TLS) division, which manages the transport of vehicles. TLS managers require precision tracking and supply chain management to ensure that the right cars go to the right dealers in a timely manner. Manual scheduling and other related business processes conducted with incorrect information caused additional problems. If one individual made a data entry mistake when a ship docked, the mistake would endure throughout the entire supply chain; it might indicate to managers that a ship never made it to port weeks after it had safely docked. The information technology (IT) organization was unable to respond to the growing needs of the business.
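
    The carrying-cost arithmetic in this paragraph can be checked with a few lines of code; all figures come straight from the text above:

```python
cost_per_day = 8               # carrying cost per vehicle per day, in dollars
vehicles_per_year = 2_000_000  # approximate annual volume

for days in (9, 10):           # typical time in transit
    per_car = days * cost_per_day
    fleet_total = per_car * vehicles_per_year
    print(f"{days} days in transit: ${per_car}/car, ${fleet_total:,}/year")
# 9 days  -> $72/car, $144,000,000/year
# 10 days -> $80/car, $160,000,000/year
```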

    Solution
    A new chief information officer (CIO) was hired in 1997 to fix the problems. Barbara Cooper, the new CIO, started by trying to identify the problems. Cooper realized that a data warehouse was needed: a central repository of historical data, organized so that it is easy to access (using a web browser) and can be manipulated for decision support. Cooper also saw that software tools to process, mine, and manipulate the data were needed. A system was therefore set up to provide real-time data access. However, the historical data loaded into the system included years of human errors that had gone unnoticed, including inconsistent and duplicated data as well as missing data, and the new system lacked the capabilities to provide what managers needed. By 1999 it had become clear that the solution did not work: it was the right concept, but it used the wrong technology from the wrong vendors. In 2000 Toyota switched business intelligence platforms. The new system included Hyperion’s dashboard feature, which allows executives to see hot spots in their business units and investigate further to identify problems and their causes. With the new TLS system, which uses colors meaningfully (red for danger), a business manager can see in real time when, for example, delivery times are slowing, immediately find the sources of the problems, and even evaluate potential solutions using “what-if” analysis.

    Results
    Within a few days, the new TLS system started to provide eye-popping results. The system helped managers discover that Toyota was being billed twice for a specific rail shipment, an $800,000 error. Overall, Toyota USA managed to increase the volume of cars it handled by 40 percent between 2001 and 2005 while increasing head count by just 3 percent. In addition, in-transit time was reduced by more than 5 percent. Word of the success of the new TLS business intelligence system quickly spread throughout Toyota USA and then across the whole company, and many other areas of the company started to adopt BI. The more people who use the data analysis tools, the more money Toyota can earn. The TLS system was upgraded in 2003 and 2005, and tools are continuously added as needed. Toyota Motor Corporation reached the highest profit margins in the automotive industry in 2003, and Toyota’s market share has increased consistently.
    An independent study by IDC, Inc. indicates that Toyota achieved a 506% return on its BI investment.
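
    As a sanity check on that figure, ROI is conventionally computed as net gain divided by cost. The dollar amounts in this sketch are hypothetical, chosen only so the ratio matches the reported 506%:

```python
def roi(total_benefit, cost):
    """Simple ROI: net gain expressed as a fraction of the investment."""
    return (total_benefit - cost) / cost

# Hypothetical: $1M invested in BI returning $6.06M in measured benefits.
print(f"{roi(6_060_000, 1_000_000):.0%}")  # prints 506%
```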
     
    Copyright © 2011. BI Articles and Study Case - All Rights Reserved