Mind+Machine: A Decision Model for Optimizing and Implementing Analytics
Ebook, 513 pages, 7 hours

About this ebook

Cut through information overload to make better decisions faster

Success relies on making the correct decisions at the appropriate time, which is only possible if the decision maker has the necessary insights in a suitable format. Mind+Machine is the guide to getting the right insights in the right format at the right time to the right person. Designed to show decision makers how to get the most out of every level of data analytics, this book explores the extraordinary potential of a model in which human ingenuity and skill are supported by cutting-edge tools, including automation.

The marriage of the perceptive power of the human brain with the benefits of automation is essential because mind or machine alone cannot handle the complexities of modern analytics. Only when the two come together with structure and purpose to solve a problem are goals achieved.

With various stakeholders in data analytics having their own take on what is important, it can be challenging for a business leader to create such a structure. This book provides a blueprint for decision makers, helping them ask the right questions, understand the answers, and ensure an approach to analytics that properly supports organizational growth.

Discover how to:

  • Harness the power of insightful minds and the speed of analytics technology
  • Understand the demands and claims of various analytics stakeholders
  • Focus on the right data and automate the right processes
  • Navigate decisions with confidence in a fast-paced world

The Mind+Machine model streamlines analytics workflows and refines the never-ending flood of incoming data into useful insights. Thus, Mind+Machine equips you to take on the big decisions and win.

Language: English
Publisher: Wiley
Release date: Oct 14, 2016
ISBN: 9781119302971

    Book preview

    Mind+Machine - Marc Vollenweider

    Preface

    Thank you for buying this book.

    In 2015, after 15 years of operations in the field of research and analytics, we decided to adopt the notion of mind+machine at Evalueserve. We believe this marriage of the perceptive power of the human brain with the benefits of automation is essential because neither mind nor machine alone will be able to handle the complexities of analytics in the future.

    The editorial team at John Wiley & Sons approached me in November 2015 to ask if I would like to write a book on how our mind+machine approach could help with the management of information-heavy processes—a topic that is of increasing interest to companies worldwide. We got very positive feedback from clients, friends, and colleagues on the idea, and decided to go ahead.

    Mind+Machine is for generalist mainstream middle and top managers in business functions such as sales, marketing, procurement, R&D, supply chain, and corporate support functions, particularly in business-to-business (B2B) and business-to-consumer (B2C) industries. We're writing for the hopeful beneficiaries and end users of analytics, and for people who might need to make decisions about analytics, now or in the future. The book is not a technical text primarily addressed to data scientists—although I firmly believe that even those specialists have something to learn about the primary problem in generating return on investment (ROI) from analytics.

    We won't be looking at super-advanced but rare analytics use cases—there are specialized textbooks for those. Instead, we're looking at the efficient frontier, offering practical help on dealing with the logistics of managing and improving decision-making support and getting positive ROI at the same time.

    After reading this book, you should know about key issues in the value chain of mind+machine in analytics, and be in a position to ask your data scientists, IT specialists, and vendors the right questions. You should understand the options and approaches available to you before you spend millions of dollars on a new proposal. You'll learn some useful things to demystify the world of analytics.

    We're also proposing a novel approach, the Use Case Methodology (UCM), to give you a set of tangible and tested tools to make your life easier.

    We've included 39 detailed case studies and plenty of real-life anecdotes to illustrate the applications of mind+machine. I'm sure you'll recognize some of your own experiences. And you'll see that you're far from alone in your quest to understand analytics.

    What makes me want to put these ideas about analytics problems and their solutions out into the world is conversations like the following two.

    The first words to me from a very senior line manager in a B2B corporation:

    Marc, is this meeting going to be about big data? If so, I'll stop it right here. Vendors are telling me that I need to install a data lake and hire lots of increasingly rare and expensive statisticians and data scientists. My board is telling me that I need to do ‘something' in big data. It all sounds unjustifiably expensive and complex. I just want to make sure that my frontline people are going to get what they need in time. I keep hearing from other companies that after an initial burst of analytics activity, real life caught up with them, the line guys are still complaining about delays, and the CFO is asking a lot of questions about the spend on big data.

    During a meeting with the COO of an asset manager to define the scope of a project:

    We do thousands of pitches to pension funds and other institutional investors every year. We have over 25 different data sources with quantitative data and qualitative information, with lots of regional flavors. However, we still put the pitches together manually and get the sign-offs from the legal department by e-mail. There must be a smarter way of doing this.

    Why is analytics becoming such a controversial and challenging field? Why are managers either daunted by overhyped new initiatives and processes that they don't understand or frustrated by the feeling that there should be a better way to do something, given all this talk about better, bigger, brighter analytics?

    Typical line managers want to get the right decision-making support to the right people at the right time in the right format. The proliferation of analytics use cases and available data sets is not matched by an expansion in individuals' and companies' capacity to mentally and logistically absorb the information. Additionally, existing and new compliance requirements are piling up at a remarkable speed, especially in industries with a high regulatory focus, such as financial services and health care.

    Analytics itself is not truly the issue. In most cases, the problem is the logistics of getting things done in organizations: defining the workflow and getting it executed efficiently, reaching internal alignment, navigating the complexities of IT projects, and clearing the other organizational hurdles that hamper progress. These complexities slow things down or make projects diverge from their original objectives, so that the actual beneficiaries of the analytics (e.g., the key account manager or the procurement manager in the field) don't get what they need in time.

    Many other issues plague the world of analytics: the proliferation of unintuitive jargon about data lakes and neural networks, the often-overlooked psychology of data analytics that drives companies to hold too dearly to the idea of the power of data and makes the implementation more complex than required, and the marketing hype engines making promises that no technology can fulfill.

    Hundreds of client interactions at Evalueserve, plus conversations with my former colleagues in the strategy consulting world, made it increasingly clear that there is a strong unmet need in the general managerial population for a simplified framework that enables efficient and effective navigation of information-heavy decision-support processes. Simplicity should always win over complex and nontransparent processes—the analytics space is no exception.

    I want to demystify analytics. I'll start with the fundamental observation that terms such as big data and artificial intelligence are getting so much attention in the media that the bricks-and-mortar topics of everyday analytics aren't getting the attention they deserve: topics such as problem definition, data gathering, cleansing, analysis, visualization, dissemination, and knowledge management. Applying big data to every analytics problem would be like taking one highly refined chef's tool—a finely balanced sushi knife, for example—and trying to use it for every task. While very useful big data use cases have emerged in several fields, they represent maybe 5 percent of all of the billions of analytics use cases.

    What are the other 95 percent of use cases about? Small data. It is amazing how many analytics use cases require very little data to achieve a lot of impact. My favorite use case that illustrates the point is one where just 800 bits of information saved an investment bank a recurring annual cost of USD 1,000,000. We will discuss the details of this use case in Part I.

    Granted, not every use case performs like that, but I want to illustrate the point that companies have lots of opportunities to analyze their existing data with very simple tools, and that there is very little correlation between ROI and the size of the data set.

    Mind+Machine addresses end-to-end, information-heavy processes that support decision making or produce information-based output, such as sales pitches or research and data products, either for internal recipients or for external clients or customers. This includes all types of data and information: qualitative and quantitative; financial, business, and operational; static and dynamic; big and small; structured and unstructured.

    The concept of mind+machine addresses how the human mind collaborates with machines to improve productivity, time to market, and quality, or to create new capabilities that did not exist before. This book is not about the creation of physical products or using physical machines and robots as in an Industry 4.0 model. Additionally, we will look at the full end-to-end value chain of analytics, which is far broader than just solving the analytics problem or getting some data. And finally, we will ask how to ensure that analytics helps us make money and satisfy our clients.

    In Part I, we'll analyze the current state of affairs in analytics, dispelling the top 12 fallacies that have taken over the perception of analytics. It is surprising how entrenched these fallacies have become in the media and even in very senior management circles. I hope Part I gives you some tools to deal with the marketing hype, senior management expectations, and the jargon of the field. Part I also contains the 800 bits use case. I'm sure you can't wait to read the details.

    In Part II, we'll examine the key trends affecting analytics and driving positive change. These trends are essentially good news for most users and decision makers in the field. They set the stage for a dramatic simplification of processes, with less IT spend, shorter development cycles, and increasingly user-friendly interfaces, and they form the basis for new and profitable use cases. We'll examine key questions, including:

    What's happening with the Internet of Things, the cloud, and mobile technologies?

    How does this drive new data, new use cases, and new delivery models?

    How fast is the race for data assets, alternative data, and smart data?

    What are the rapidly changing expectations of end users?

    How should minds and machines support each other?

    Do modern workflow management and automation speed things up?

    How does modern user experience design improve the impact?

    How are commercial models such as pay-as-you-go relevant for analytics?

    How does the regulatory environment affect many analytics initiatives?

    In Part III, we will look at best practices in mind+machine. We will look at the end-to-end value chain of analytics via the Use Case Methodology (UCM), focusing on how to get things done. You will find practical recommendations on how to design and manage individual use cases as well as how to govern whole portfolios of use cases.

    Some of the key questions we'll address are:

    What is an analytics use case?

    How should we think about the client benefits?

    What is the right approach to an analytics use case?

    How much automation do we need?

    How can we reach the end user at the right time and in the right format?

    How do we prepare for the inevitable visit from compliance?

    Where can we get external help, and what are realistic cost and timing expectations?

    How can we reuse use cases in order to shorten development cycles and improve ROI?

    However, just looking at the individual use cases is not enough, as whole portfolios of use cases need to be managed. Therefore, this part will also answer the following questions:

    How do we find and prioritize use cases?

    What level of governance is needed, and how do we set it up?

    How do we find and exploit synergies between the use cases in our portfolio?

    How do we make sure they actually deliver the expected value and ROI?

    How do we manage and govern the portfolio?

    At the end of Part III you should be in a position to address the main challenges of mind+machine, both for individual use cases and for portfolios of use cases.

    Throughout the book I use numerous analogies from the non-nerd world to make my points, trying to avoid too much specialist jargon. Some of them might be a bit daring, but I hope they make for fun reading, loosening up the left-brained topic of analytics. If I can make you smile a few times while reading this book, my goal will have been achieved.

    I'm glad to have you with me for this journey through the world of mind+machine. Thank you for choosing me as your guide. Let us begin!

    Acknowledgments

    My heartfelt thanks to Evalueserve's loyal clients, employees, and partner firms, without whose contributions this book would not have been possible; to our four external contributors and partners: Neil Gardiner of Every Interaction, Michael Müller of Acrea, Alan Day of State of Flux, and Stephen Taylor of Stream Financial; to our brand agency Earnest for their thought leadership in creating our brand; to all the Evalueserve teams and the teams of our partner firms MP Technology, Every Interaction, Infusion, Earnest, and Acrea for creating and positioning InsightBee and other mind+machine platforms; to the creators, owners, and authors of all the use cases in this book and their respective operations teams; to Jean-Paul Ludig, who helped me keep the project on track; to Derek and Seven Victor for their incredible help in editing the book; to Evalueserve's marketing team; to the Evalueserve board and management team for taking a lot of operational responsibilities off my shoulders, allowing me to write this book; to John Wiley & Sons for giving me this opportunity; to Ursula Hueby for keeping my logistics on track during all these years; to Ashish Gupta, our former COO, for being a friend and helping build the company from the very beginning; to Alok Aggarwal for co-founding the company; to his wife Sangeeta Aggarwal for introducing us; and above all to my wonderful wife Gabi for supporting me during all these years, actively participating in all of Evalueserve's key events, being a great partner for both everyday life and grand thought experiments, and for inspiring me to delve into the psychology of those involved at all levels of mind+machine analytics.

    —Marc Vollenweider

    List of Use Cases

    Innovation Analytics: Nascent Industry Growth Index

    Cross-Sell Analytics: Opportunity Dashboard

    Subscription Management: The 800 Bits Use Case

    Innovation Scouting: Finding Suitable Innovations

    Virtual Data Lake: A Use Case by Stream Financial

    InsightBee: The Last Mile

    Market Intelligence Solution Suite: Build for Flexibility

    Intellectual Property: Managing Value-Added IP Alerts

    Investment Banking Analytics: Logo Repository

    Managing Indirect Procurement Market Intelligence: Efficient Procurement

    InsightBee Procurement Intelligence: Efficient Management of Procurement Risks

    Brand Perception Analytics: Assessing Opinions in the Digital Realm

    Wealth Management: InsightBee for Independent Financial Advisors

    Analytics in Internet of Things: Benchmarking Machines Using Sensor Data

    InsightBee: Market Intelligence via Pay-as-You-Go

    Virtual Analyst: Intelligent Pricing & Dynamic Discounting

    InsightBee Sales Intelligence: Proactive Identification of New Sales Opportunities

    Customer Analytics: Aiding Go-to-Market Strategy

    Social Insights: Asian Language Social Media Insights

    Managing Research Flow: Workflow and Self-Service

    Automation in Asset Management: Fund Fact Sheets

    Investment Banking: Automating Routine Tasks

    Mind–Machine Interface: Game Controller

    Financial Benchmark Analytics: Environmental Index

    Industry Sector Update: Marketing Presentations

    Investment Banking: A Global Offshore Research Function

    Financial Benchmark Analytics: Index Reporting

    Energy Retailer: Competitive Pricing Analytics

    Intellectual Property: Identifying and Managing IP Risk

    Market and Customer Intelligence: Market Inventories

    Customer Churn Analytics: B2B Dealer Network

    Preventive Maintenance: Analyzing and Predicting Network Failures

    Supply Chain Framework: Bottlenecks Identification

    Spend Analytics: Category Planning Tool

    Predictive Analytics: Cross-Selling Support

    Operating Excellence Analytics: Efficiency Index

    Financial Services: Investment Banking Studio

    Sales Enablement: Account-Based Marketing Support

    InsightBee: A UX Design Study by Every Interaction

    Part I

    The Top 12 Fallacies about Mind+Machine

    The number of opportunities with great potential for mind+machine is large and growing. Many companies have already begun successfully leveraging this potential, building whole digital business models around smart minds and effective machines. Despite the potential for remarkable return on investment (ROI), there are pitfalls—particularly if you fall into the trap of believing the common wisdom in analytics, much of which is exposed as fallacy on closer examination.

    Some vendors might not agree with the view that current approaches have serious limitations, but the world of analytics is showing some clear and indisputable symptoms that all is not well. To ensure you can approach mind+machine successfully, I want to arm you with insights into the traps and falsehoods you will very likely encounter.

    First, let's make sure we all know what successful analytics means: the delivery of the right insight to the right decision makers at the right time and in the right format. Anything else means a lessened impact—which is an unsatisfactory experience for all involved.

    The simplest analogy is to food service. Success in a restaurant means the food is tasty, presented appropriately, and delivered to the table on time. It's not enough to have a great chef if the food doesn't reach the table promptly. And the most efficient service won't save the business if the food is poor quality or served with the wrong utensils.

    The impact on a business from analytics should be clear and strong. However, many organizations struggle, spending millions or even tens of millions on their analytics infrastructure but failing to receive high-quality insights in a usable form when they are needed—and thus failing to get the right return on their investments. Why is that?

    Analytics serves the fundamental desire to support decisions with facts and data. In the minds of many managers, it's a case of the more, the better. And there is certainly no issue with finding data! The rapid expansion in the availability of relatively inexpensive computing power and storage has been matched by the unprecedented proliferation of information sources. There is a temptation to see more data combined with more computing power as the sole solution to all analytics problems. But the human element should not be underestimated.

    I vividly remember my first year at McKinsey Zurich. It was 1990, and one of my first projects was a strategy study in the weaving machines market. I was really lucky, discovering around 40 useful data points and some good qualitative descriptions in the 160-page analyst report procured by our very competent library team. We also conducted 15 qualitative interviews and found another useful source.

    In today's terms, the report provided a combined study-relevant data volume of 2 to 3 kilobytes. We used this information to create a small but robust model in Lotus 1-2-3 on a standard laptop. Those insights proved accurate: in 2000, I came across the market estimates again and found that we had been only about 5 percent off.

    Granted, this may have been luck, but my point is that deriving valuable insight—finding the so what?—required thought, not just the mass of data and raw computing power that many see as the right way to do analytics. Fallacies like that one, and the others I outline in this part of the book, are holding analytics back from achieving its full potential.

    Fallacy #1

    Big Data Solves Everything

    From Google to start-up analytics firms, many companies have successfully implemented business models around the opportunities offered by big data. The growing list of analytics use cases includes media streaming, business-to-consumer (B2C) marketing, risk and compliance in financial services, surveillance and security in the private sector, social media monitoring, and preventive maintenance strategies (Figure I.1). However, throwing big data at every analytics use case isn't always the way to generate the best ROI.

    [Figure I.1 Areas of Big Data Impact: B2C (consumer insight and advertising, search and information, sales and e-commerce, supply chain and logistics, customer service and maintenance, risk and compliance, Internet of Things, infrastructure); B2B (manufacturing, Internet of Things, supply chain and logistics, R&D, customer services and maintenance, risk and compliance, infrastructure); public sector (security and surveillance, law enforcement, traffic, healthcare, science, tax, infrastructure)]

    Before we explore the big data fallacy in detail, we need to define analytics use case, a term you'll encounter a lot in this book. Here is a proposed definition:

    "An analytics use case is the end-to-end analytics support solution applied once or repeatedly to a single business issue faced by an end user or homogeneous group of end users who need to make decisions, take actions, or deliver a product or service on time based on the insights delivered."

    What are the implications of this definition? First and foremost, use cases are really about the end users and their needs, not about data scientists, informaticians, or analytics vendors. Second, the definition does not specify the data as small or big, qualitative or quantitative, static or dynamic—the type, origin, and size of the data input sets are open. Whether humans or machines or a combination thereof deliver the solution is also not defined. However, it is specific on the need for timely insights and on the end-to-end character of the solution, which means the complete workflow from data creation to delivery of the insights to the decision maker.

    Now, getting back to big data: the list of big data use cases has grown significantly over the past decade and will continue to grow. With the advent of social media and the Internet of Things, we are faced with a vast number of information sources, with more to come. Continuous data streams are becoming increasingly prevalent. As companies offering big data tools spring up like mushrooms, people are dreaming up an increasing number of analytics possibilities.

    One of the issues with talking about big data, or indeed small data, is the lack of a singular understanding of what the term means. It's good hype in action: an attractive name with a fuzzy definition. I found no fewer than 12 different definitions of big data while researching this book! I'm certainly not going to list all of them, but I can help you understand them by categorizing them into two buckets: the geek's concept and the anthropologist's view.

    Broadly speaking, tech geeks define big data in terms of volume; velocity (speed); variety (types include text, voice, and video); structure (which can mean structured, such as tables and charts, or unstructured, such as user comments from social media channels); variability over time; and veracity (i.e., the level of quality assurance). There are two fundamental problems with this definition. First, nobody has laid down any commonly accepted limits for what counts as big or small, obviously because this is a highly moving target. Second, there is no clear so what? from this definition: why do all of these factors matter to the end user when they are all so variable?

    That brings us to the anthropologist's view, which focuses on the objective. Wikipedia provides an elegant definition that expresses the ambiguity, associated activities, and ultimate objective:

    Big data is a term for data sets that are so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, querying, updating and information privacy. The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set. Accuracy in big data may lead to more confident decision making, and better decisions can result in greater operational efficiency, cost reduction and reduced risk.

    High-ROI use cases for big data existed before the current hype. Examples are B2C marketing analytics and advertising, risk analytics, and fraud detection. They've been proven in the market and have consistently delivered value. There are also use cases for scientific research and for national security and surveillance, where ROI is hard to measure but there is a perceived gain in knowledge and security level (although this latter gain is often debated).

    We've added a collection of use cases throughout this book to help give you insight into the real-world applications of what you're learning. They all follow the same format to help you quickly find the information of greatest interest to you.

    Analytics Use Case Format

    Context: A brief outline of where the use case comes from: industry, business function, and geography

    Business Challenge: What the solution needed to achieve for the client(s)

    Solution: An illustration of the solution or processes used to create that solution

    Approach: Details on the steps involved in creating the solution, along with the mind+machine intensity diagram illustrating the change in the balance between human effort and automation at key stages during implementation

    Analytics Challenges: The key issues to be solved along with an illustration of the relative complexity of the mind+machine aspects applied in solving the case

    Benefits: The positive impact on productivity, time to market, and quality, and the new capabilities stemming from the solution

    Implementation: The key achievements and the investment and/or effort required to make the solution a reality (development, implementation, and maintenance, as applicable), illustrated where possible

    I wanted to include some of the more exciting projects currently under development to show the possibilities of analytics. In these cases, some of the productivity gain and investment metrics are estimates and are labeled (E).

    [Figure: mind+machine intensity diagram for the Nascent Industry Growth Index use case, plotting mind intensity (%) against machine intensity (%) at years 0, 1, and 2, with component bars for machine (analysis, productivity, workflow, dissemination, knowledge management) and mind (project management, business acumen, analysis, insight, innovation), and a bar chart of mind intensity in FTEs over time (months)]

    The big data hype has its origin in three factors: the appearance of new data types or sources, such as social media; the increasing availability of connected devices, from mobile phones to machine sensors; and the evolution of ways to analyze large data sets in short periods of time. The sense
