Six Properties of Modern Business Intelligence

Regardless of the industry you operate in, you need information systems that evaluate your business data in order to provide you with a basis for decision-making. These systems are commonly referred to as business intelligence (BI). In practice, most BI systems suffer from deficiencies that can be remedied. Beyond that, modern BI can partially automate decisions and enable comprehensive analyses with a high degree of flexibility in use.


Read this article in English:
“Six properties of modern Business Intelligence”


Let us discuss the six properties that distinguish modern business intelligence. They involve attention to technical details, but always in the context of a larger vision for the company's own BI:

1. A unified, high-quality data basis (Single Source of Truth)

Surely every managing director knows the situation in which his or her managers cannot agree on how high costs and revenues actually are in detail and what exactly the margins per category look like. And if they do, this information is often only available months too late.

In every company, hundreds or even thousands of decisions have to be made at the operational level every day. With a good information basis, the bulk of these decisions can be made on a much more solid foundation, thereby increasing revenue and saving costs. Standing in the way, however, are the many source systems of the company's internal IT landscape as well as additional external data sources. Gathering and consolidating this information often ties up entire groups of employees and leaves plenty of room for human error.

Modern business intelligence therefore needs a system that provides at least the most relevant data for business management at the right time and in good quality in a Trusted Data Zone serving as a Single Source of Truth (SPOT). This SPOT is the core of modern business intelligence.

Beyond that, further data may also be made available via the BI, for example data that is useful for advanced analyses and data scientists. The particularly trustworthy zone, however, is the one on which all decision-makers across the company can synchronize.

2. Flexible use by different stakeholders

Even if all employees across the company should be able to access central, trustworthy data, a clever architecture does not rule out that each department gets its own views of this data, or even that every individual, suitably qualified employee gets his or her own view of the data and can even create it personally.

Many BI systems fail to gain company-wide acceptance because certain departments or functionally defined groups of employees are largely excluded from the BI.

Modern BI systems enable views, and the data integration required for them, for all stakeholders in the company who depend on information, and all of them benefit equally from the SPOT approach.

3. Efficient ways to expand (Time to Market)

Among the core users of a BI system, dissatisfaction arises above all when expanding or partially redesigning the information system requires a long breath. Historically grown, poorly designed and not particularly adaptable BI systems often keep a whole team of IT staff busy, along with a flood of tickets containing change requests.

Good BI sees itself as a service for its stakeholders with a short time to market. The right design, the right choice of software and the right implementation of data flows and data models lead to significantly shorter development and implementation times for improvements and new features.

Furthermore, it is not only the technology that matters but also the choice of organizational form, including the design of roles and responsibilities – from the technical system connection through data provisioning and preparation to analysis and support for the end users.

4. Integrated capabilities for Data Science and AI

Business intelligence and data science are often viewed and managed separately from each other. On the one hand, because data scientists are often reluctant to work with – from their point of view – boring data models and pre-prepared data. On the other hand, because BI is usually already established as a traditional system in the company, despite the many teething problems that BI still has today.

Data science, often also referred to as advanced analytics, deals with diving deep into data using exploratory statistics and methods of data mining (unsupervised machine learning) as well as with predictive analytics (supervised machine learning). Deep learning is a sub-area of machine learning and is likewise used for data mining or predictive analytics. Machine learning, in turn, is a sub-area of artificial intelligence (AI).

In the future, BI and data science or AI will continue to grow together, because at the latest after going live, the prediction results and their models flow back into the business intelligence. BI will probably evolve into ABI (Artificial Business Intelligence). Already today, however, many companies use data mining and predictive analytics, relying on uniform or different platforms with or without integration into their BI.

Modern BI systems also offer data scientists a platform for accessing high-quality as well as more granular raw data.

5. Sufficiently high performance

Most readers of these six points will probably have experienced slow BI at some point. In many classic BI systems, loading a report that is used daily takes several minutes. If loading a dashboard can be combined with a short coffee break, that may still be acceptable for certain reports now and then. With frequent use, however, long loading times and unreliable reports are no longer acceptable.

One reason for poor performance is the hardware, which with cloud systems can be scaled almost linearly to higher data volumes and greater analytical complexity. The use of the cloud also enables the modular separation of storage and computing power from data and applications and is therefore generally recommended; however, it is not necessarily the right choice for every company and must fit the corporate philosophy.

In fact, performance does not depend on hardware alone: the right choice of software and the right design of data models and data flows play an even more decisive role. While hardware can be replaced or upgraded relatively easily, changing the architecture involves far more effort and BI competence. Unsuitable data models or data flows will certainly bring even the latest hardware in its maximum configuration to its knees.

6. Cost-efficient use and conclusion

Professional cloud systems that can be used for BI, such as Microsoft Azure, Amazon Web Services and Google Cloud, offer total cost calculators. With these calculators – guided by an experienced BI expert – not only can the costs of using the hardware be estimated, but ideas for cost optimization can also be worked out. Nevertheless, the cloud is still not the right solution for every company; classic calculations for on-premise solutions remain necessary and are, moreover, easier to plan than cloud costs.

Incidentally, cost efficiency can also be increased by a good selection of suitable software. Proprietary solutions are tied to different license models and can only be compared on the basis of application scenarios. Apart from that, there are also good open source solutions that can be used largely free of charge and are suitable for many use cases without compromise.

The total cost of ownership (TCO) is part of BI management and should always be kept in focus. It would be wrong, however, to assess the costs of a BI only by its hardware and software costs. A substantial part of cost efficiency is complementary to the performance aspects of the BI system, because suboptimal architectures work wastefully and require more and more expensive hardware than well-tuned architectures. Providing the central data supply in adequate quality can spare many unnecessary data preparation processes, and flexible analysis capabilities can make redundant systems directly unnecessary and thus lead to savings.

In any case, for companies with many operational processes, having a BI is fundamentally always cheaper than having no BI. Today, nothing could be more expensive for a company than being steered by gut feeling alone, because the market does not operate that way and offers a great deal of transparency.

Nevertheless, existing BI architectures should be questioned from time to time. On closer inspection with BI expertise, greater cost efficiency and data transparency are often achievable.

Data Analytics and Mining for Dummies

Data Analytics and Mining is often perceived as an extremely tricky task cut out for data analysts and data scientists with thorough knowledge spanning several domains such as mathematics, statistics, computer algorithms and programming. However, there are several tools available today that make it possible for novice programmers or people with absolutely no algorithmic or programming expertise to carry out Data Analytics and Mining. One such tool, which is very powerful and provides a graphical user interface with an assembly of nodes for ETL (Extraction, Transformation, Loading) as well as for modeling, data analysis and visualization with little or no programming, is the KNIME Analytics Platform.

KNIME, or the Konstanz Information Miner, was developed at the University of Konstanz and is now backed by a large international community of developers. KNIME was initially made for commercial use, but it is now available as open-source software. It has been used extensively in pharmaceutical research since 2006 and is also a powerful data mining tool for the financial data sector. It is frequently used in the Business Intelligence (BI) sector as well.

KNIME as a Data Mining Tool

KNIME is one of the most well-organized tools for integrating various methods of machine learning and data mining. It is very effective for pre-processing data, i.e. extracting, transforming, and loading data.

KNIME has a number of good features like quick deployment and scaling efficiency. It employs an assembly of nodes to pre-process data for analytics and visualization. It is also used for discovering patterns among large volumes of data and transforming data into more polished/actionable information.

Some Features of KNIME:

  • Free and open source
  • Graphical and logically designed
  • Very rich in analytics capabilities
  • No limitations on data size, memory usage, or functionalities
  • Compatible with Windows, macOS and Linux
  • Written in Java and built on the Eclipse platform

A node is the smallest design unit in KNIME, and each node serves a dedicated task. KNIME provides graphical, drag-and-drop nodes that require no coding. Nodes are connected, with one node's output serving as another node's input, forming a workflow. End-to-end pipelines can therefore be built without any coding effort. This is what makes KNIME stand out: it is user-friendly and accessible to people who do not come from a computer science background.

KNIME workflow designed for graduate admission prediction

KNIME has nodes to carry out Univariate Statistics, Multivariate Statistics, Data Mining, Time Series Analysis, Image Processing, Web Analytics, Text Mining, Network Analysis and Social Media Analysis. The KNIME node repository has a node for every functionality you can possibly think of and need while building a data mining model. One can execute different algorithms such as clustering and classification on a dataset and visualize the results inside the framework itself. It is a framework capable of giving insights on data and the phenomenon that the data represent.

Some commonly used KNIME node groups include:

  • Input-Output or I/O: Nodes in this group retrieve data from, or write data to, external files or databases.
  • Data Manipulation: Used for data pre-processing tasks. Contains nodes to filter, group, pivot, bin, normalize, aggregate, join, sample, partition, etc.
  • Views: This set of nodes permits users to inspect data and analysis results using multiple views. This provides a means for truly interactive exploration of a data set.
  • Data Mining: In this group, there are nodes that implement certain algorithms (like k-means clustering, decision trees, etc.).

Comparison with other tools 

The first version of the KNIME Analytics Platform was released in 2006, whereas Weka and R were released in 1997 and 1993 respectively. KNIME is a dedicated data mining tool, whereas Weka and R are machine learning tools that can also do data mining. KNIME integrates with Weka to add machine learning algorithms to the system, and the R project adds statistical functionality as well. Furthermore, KNIME's range of functions is impressive, with more than 1,000 modules and ready-made application packages, and the modules can be further expanded with additional commercial features.

Process Mining Tools – Article Series

Process Mining is no longer just a buzzword but a relevant part of business intelligence. Process Mining covers the analysis of processes and can be applied to any industry and any functional area that has operational processes which, in turn, are recorded by operational IT systems. To understand the growing importance of this data discipline, a look at the development of worldwide data generation is enough. While it was still 2 zettabytes (ZB) in 2010, Statista expects more than 50 ZB of data for 2020, and as much as 175 ZB is forecast for 2025.


Figure 1 shows the development of the worldwide data volume (as of 2018). Source: https://www.statista.com/statistics/871513/worldwide-data-created/

Why Process Mining, and why now?

But why does Process Mining in particular benefit from this development? The reason lies in the disorder of this mass of data. The challenge many companies face is precisely the analysis of this unstructured data. Added to this, almost every process leaves data traces in information systems. Looking at processes at the data level therefore holds enormous potential, which is becoming increasingly important in view of this development.

What was Process Mining again?

Process Mining is an analysis methodology that makes it possible to reconstruct real processes from the data traces stored in information systems. These processes can then be displayed and evaluated as process flow diagrams. The classic use cases range from discovering unknown processes (discovery), through target/actual comparisons (conformance), to adapting and improving existing processes (enhancement). Meanwhile, many companies go further and integrate RPA and data science into Process Mining. And the depth of analysis will increase, reaching down to the analysis of individual clicks, currently referred to as "task mining".
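For readers who want to try the basic discovery step themselves, here is a minimal sketch using the open-source pm4py library (not one of the commercial tools compared in this series); the file name is a placeholder and a reasonably recent pm4py version is assumed.

    import pm4py

    # Load an event log in the standard XES format (file name is a placeholder).
    log = pm4py.read_xes("shipping_process.xes")

    # Discovery: reconstruct a process model (here a Petri net) from the log.
    net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

    # Visualize the reconstructed process flow.
    pm4py.view_petri_net(net, initial_marking, final_marking)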


Figure 2 shows the typical workflow of a Process Mining project. The ERP system often serves as the central data source. The extracted event logs are then visualized using a Process Mining tool.

In any case, the bulk of the work usually lies in providing and preparing the data and transforming it into so-called "event logs", which are the input for the Process Mining tools. This is why many vendors of Process Mining tools have long been working on solutions to ease the time-consuming and labor-intensive steps involved in data preparation. While almost all tool vendors offer prefabricated protocols for standard processes, some go even further and offer full platform solutions that promise efficient integration of the laborious ETL processes. The functional scope of Process Mining tools therefore now goes well beyond a pure visualization function and, where applicable, also covers new trends as well as lower entry barriers.
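To illustrate what such a transformation step can look like in its simplest form, here is a small, hedged sketch in Python; the source table and column names are made up for the example.

    import pandas as pd

    # Hypothetical raw export from an ERP system; column names are assumptions.
    raw = pd.read_csv("erp_order_status_history.csv", parse_dates=["changed_at"])

    # An event log needs at least a case id, an activity name and a timestamp.
    event_log = (
        raw.rename(columns={
            "order_id": "case_id",
            "status_text": "activity",
            "changed_at": "timestamp",
        })[["case_id", "activity", "timestamp"]]
        .sort_values(["case_id", "timestamp"])
    )

    # This CSV (or an XES export of it) is the input for the Process Mining tools.
    event_log.to_csv("event_log.csv", index=False)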

Motivation for this article series

The motivation for writing this article is not to explain the Process Mining method; there are now plenty of sources of information for that. One particularly recommendable source is the book "Process Mining" by Wil van der Aalst, one of the founding fathers of Process Mining. The motivation for this article lies rather in looking at the numerous Process Mining tools on the market. As a data consultant, I very often experience Process Mining projects being dominated up front by the question of the "best" tool. By its nature, this question can certainly only be answered individually. Since individual projects also call for an individual choice of tools, I usually work with a broad spectrum of Process Mining tools. In this article series, it is therefore my aim to provide a generally applicable overview of the usual Process Mining tools. I do not want to rely on personal impressions alone, but rather to subject the tools to a practical comparison based on test data that the reader can follow.

To limit the scope of the article series, the various tools will only be applied and compared in their core functions. Outstanding functions or characteristics of the respective tools will, however, be noted and, where appropriate, explored in more depth in other articles. The goal of this article series is to give the reader a first overview of the tools available on the market. This article therefore addresses beginners in particular, but also advanced Process Mining practitioners who appreciate an overview of the tools and who may also like to look beyond their own horizon.

The tools

The group of tools under consideration consists of the following well-known applications: Celonis, PAFnow, MEHRWERK, Lana Labs, Signavio, Process Gold, Fluxicon Disco and ARIS Process Mining by Software AG.

The selection of tools is based on the "Market Guide for Process Mining 2019" by Gartner. I have excluded those tools with which I have had little or no contact so far. In my opinion, this selection promises an interesting insight into the various Process Mining tools on the market.

Application in practice

In order to compare the tools realistically, all tools will use the same data basis. This data basis will consequently be used for the visualizations with the tools throughout the entire article series. I will briefly explain this data basis in the next article.

The goal of the practical investigation is to load the sample data into the various tools in order to visualize the process contained in it. In doing so, I will pay particular attention to how usable and how adaptable/flexible the tools appear to me. At this point I want to state clearly that this comparison and its assessment reflect my opinion and make no claim to completeness. Since the market is in motion, I also reserve the right to update this article series regularly.

The criteria

In addition to the usability and adaptability of the tools, I would like to consider the following additional aspects:

  • Usability: How easily do the analyses come together? How easy is it to get started?
  • Adaptability: How flexibly does the tool respond to my data and analysis requirements?
  • Future-readiness: How does the tool fare on machine learning, ETL modelers or task mining?
  • Integration capability: Which interfaces does the tool offer? Does it also run outside the cloud, or only in the cloud?
  • Scalability: Is the tool able to process large and heterogeneous data as well?
  • Pricing: Which model determines the price?

The data basis

The data basis is a demo data set kindly provided by Celonis for the entire article series. This data set depicts a shipping process from the time of purchase through to delivery to the customer. The following figure shows the target process.


Figure 4 shows the target shipping process of the data basis, from the purchase of the product to delivery.

The data basis consists of a 60 GB event log, which is held locally in a Microsoft SQL database. Since this table contains more than 600 million events, the data basis for the analysis with the individual tools is limited to an excerpt of 60 million events. To test the performance of the individual tools, however, the entire data basis is used. The excerpt of the event log table contains 919 different variants and thus exhibits sufficient complexity to be analyzed with the various tools.
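As a hedged illustration of how such a fixed excerpt could be pulled reproducibly from the database for each tool (connection string, table and column names are assumptions, not the actual setup used in this series):

    import pandas as pd
    from sqlalchemy import create_engine

    # Connection string, table and column names are placeholders for illustration.
    engine = create_engine(
        "mssql+pyodbc://user:password@SERVER/ProcessMining"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # Pull the same fixed slice of roughly 60 million events for every tool.
    query = """
        SELECT TOP 60000000 case_id, activity, event_time
        FROM dbo.EventLog
        ORDER BY case_id, event_time
    """
    event_log_subset = pd.read_sql(query, engine)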

The following publication plan applies to this article series and will be linked with each publication:

  1. Celonis (coming soon)
  2. PAFnow (coming soon)
  3. MEHRWERK (coming soon)
  4. Lana Labs (coming soon)
  5. Signavio (coming soon)
  6. Process Gold (coming soon)
  7. Fluxicon Disco (coming soon)
  8. ARIS Process Mining by Software AG (coming soon)

Six properties of modern Business Intelligence

Regardless of the industry in which you operate, you need information systems that evaluate your business data in order to provide you with a basis for decision-making. These systems are commonly referred to as business intelligence (BI). In fact, most BI systems suffer from deficiencies that can be eliminated. In addition, modern BI can partially automate decisions and enable comprehensive analyses with a high degree of flexibility in use.


Read this article in German:
“Sechs Eigenschaften einer modernen Business Intelligence“


Let us discuss the six characteristics that distinguish modern business intelligence. They involve attention to technical details, but always in the context of a larger vision for your own company BI:

1. Uniform database of high quality

Every managing director certainly knows the situation in which his or her managers do not agree on how high costs and revenues actually are in detail and what the margins per category look like. And if they do, this information is often only available months too late.

Every company has to make hundreds or even thousands of decisions at the operational level every day. With a good information basis, the bulk of these decisions can be made on a much more solid foundation, thereby increasing sales and saving costs. Standing in the way, however, are the many source systems of the company's internal IT landscape as well as other external data sources. Gathering and consolidating this information often ties up entire groups of employees and offers plenty of room for human error.

Modern BI therefore needs a system that provides at least the most relevant data for business management at the right time and in good quality in a trusted data zone as a single source of truth (SPOT). SPOT is the core of modern business intelligence.

In addition, other data may also be made available via the BI, which can be useful for qualified analyses and data scientists. The particularly trustworthy zone, however, is the one through which all decision-makers across the company can synchronize.

2. Flexible use by different stakeholders

Even if all employees across the company should be able to access central, trustworthy data, a clever architecture does not rule out that each department receives its own views of this data, or even that every individual, suitably qualified employee gets and can create his or her own view. Many BI systems fail to gain company-wide acceptance because certain departments or functionally defined employee groups are largely excluded from BI.

Modern BI systems enable views and the necessary data integration for all stakeholders in the company who rely on information and benefit equally from the SPOT approach.

3. Efficient ways to expand (time to market)

The core users of a BI system become particularly dissatisfied when the expansion or partial redesign of the information system requires a great deal of patience. Historically grown, incorrectly designed and not particularly adaptable BI systems often keep a whole team of IT staff busy, along with a flood of tickets containing change requests.

Good BI is a service for stakeholders with a short time to market. The correct design, selection of software and implementation of data flows and data models ensure significantly shorter development and implementation times for improvements and new features.

Furthermore, it is not only the technology that is decisive, but also the choice of organizational form, including the design of roles and responsibilities – from the technical system connection through data provisioning and preparation to analysis and support for the end users.

4. Integrated skills for Data Science and AI

Business intelligence and data science are often viewed and managed separately from each other. On the one hand, because data scientists are often reluctant to work with – from their point of view – boring data models and prepared data. On the other hand, because BI is usually already established as a traditional system in the company, despite the many problems that BI still has today.

Data science, often referred to as advanced analytics, deals with deep immersion in data using exploratory statistics and methods of data mining (unsupervised machine learning) as well as predictive analytics (supervised machine learning). Deep learning is a sub-area of machine learning and is used for data mining or predictive analytics. Machine learning is a sub-area of artificial intelligence (AI).

In the future, BI and data science or AI will continue to grow together, because at the latest after going live, the prediction results and their models flow back into business intelligence. BI will probably develop into ABI (Artificial Business Intelligence). However, many companies are already using data mining and predictive analytics, relying on uniform or different platforms with or without BI integration.

Modern BI systems also offer data scientists a platform to access high-quality and more granular raw data.

5. Sufficiently high performance

Most readers of these six points will probably have had experience with slow BI before. In many classic BI systems, loading a report that is used daily takes several minutes. If loading a dashboard can be combined with a little coffee break, that may still be acceptable for certain reports from time to time. With frequent use, however, long loading times and unreliable reports are no longer acceptable.

One reason for poor performance is the hardware, which can be scaled almost linearly to higher data volumes and more analysis complexity using cloud systems. The use of the cloud also enables the modular separation of storage and computing power from data and applications and is therefore generally recommended, but it is not necessarily the right choice for all companies and must fit the corporate philosophy.

In fact, performance is not only dependent on the hardware; the right choice of software and the right design of data models and data flows play an even more crucial role. While hardware can be changed or upgraded relatively easily, changing the architecture is associated with much more effort and BI competence. Unsuitable data models or data flows will certainly bring even the latest hardware in its maximum configuration to its knees.

6. Cost-effective use and conclusion

Professional cloud systems that can be used for BI systems, such as Microsoft Azure, Amazon Web Services and Google Cloud, offer total cost calculators. With these calculators – guided by an experienced BI expert – not only can the costs for the use of hardware be estimated, but ideas for cost optimization can also be calculated. Nevertheless, the cloud is still not the right solution for every company, and classic calculations for on-premise solutions remain necessary and are, moreover, easier to plan than cloud costs.

Incidentally, cost efficiency can also be increased with a good selection of the right software. Proprietary solutions are tied to different license models and can only be compared using application scenarios. Apart from that, there are also good open source solutions that can be used largely free of charge and are suitable for many applications without compromise.

The total cost of ownership (TCO) is part of BI management and should always be kept in focus. However, it would be wrong to assess the cost of a BI only according to its hardware and software costs. A significant part of cost efficiency is complementary to the performance aspects of the BI system, because suboptimal architectures work wastefully and require more and more expensive hardware than neatly coordinated architectures. Providing the central data supply in adequate quality can save many unnecessary data preparation processes, and flexible analysis options can also make redundant systems directly unnecessary and thus lead to savings.

In any case, for companies with many operational processes, a BI is always cheaper than no BI. Today, nothing could be more expensive for a company than being steered by gut feeling alone, because the market does not operate that way and offers a great deal of transparency. Nevertheless, existing BI architectures should be questioned from time to time: on closer inspection with BI expertise, greater cost efficiency and data transparency are often possible.

Interview – Predictive Maintenance and how it can unleash cost savings

Interview with Dr. Kai Goebel, Principal Scientist at PARC, a Xerox Company, about Predictive Maintenance and how it can unleash cost savings.

Dr. Kai Goebel is a principal scientist at PARC with more than two decades of experience in corporate and government research organizations. He is responsible for leading applied research on state awareness, prognostics and decision-making using data analytics, AI, hybrid methods and physics-based methods. He has also fielded numerous applications for Predictive Maintenance at General Electric, NASA, and PARC for uses as diverse as rocket launchpads, jet engines, and chemical plants.

Data Science Blog: Mr. Goebel, predictive maintenance is not just a hype since industrial companies are already trying to establish this use case of predictive analytics. What benefits do they really expect from it?

Predictive Maintenance is a good example of how value can be realized from analytics. The result of the analytics drives decisions about when to schedule maintenance in advance of an event that might cause an unexpected shutdown of the process line. This is in contrast to an uninformed process where the decision is mostly reactive, that is, maintenance is scheduled because equipment has already failed. It is also in contrast to a time-based maintenance schedule. The benefits of Predictive Maintenance are immediately clear: one can avoid unexpected downtime, which can lead to substantial production loss. One can manage inventory better since lead times for equipment replacement can be managed well. One can also manage safety better since equipment health is understood and safety-adverse situations can potentially be avoided. Finally, maintenance operations will be inherently more efficient as they shift significant time from inspection to mitigation.

Data Science Blog: What are the most critical success factors for implementing predictive maintenance?

Critical for success is to get the trust of the operator. To that end, it is imperative to understand the limitations of the analytics approach and to not make false performance promises. Often, success factors for implementation hinge on understanding the underlying process and the fault modes reasonably well. It is important to be able to recognize the difference between operational changes and abnormal conditions. It is equally important to recognize rare events reliably while keeping false positives in check.

Data Science Blog: What kind of algorithm does predictive maintenance work with? Do you differentiate between approaches based on classical machine learning and those based on deep learning?

Well, there is no one kind of algorithm that works for Predictive Maintenance everywhere. Instead, one should look at the plurality of all algorithms as tools in a toolbox. Then analyze the problem – how many examples of run-to-failure trajectories are there; what is the desired lead time to report on a problem; what is the acceptable false positive/false negative rate; what are the different fault modes; etc. – and use the right kind of tool to do the job. Just because a particular approach (like the one you mentioned in your question) is all the hype right now does not mean it is the right tool for the problem. Sometimes, approaches from what you call “classical machine learning” actually work better. In fact, one should consider approaches even outside the machine learning domain, either as a stand-alone approach or in a hybrid configuration. One may also have to invent new methods, for example to perform online learning of the dynamic changes that a system undergoes through its (long) life. In the end, a customer does not care about what approach one is using, only whether it solves the problem.

Data Science Blog: There are several providers for predictive analytics software. Is it all about software tools? What makes the difference for having success?

Frequently, industrial partners lament that they have to spend a lot of effort teaching a new software provider about the underlying industrial processes as well as the equipment and their fault modes. Others are tired of false promises that any kind of data (as long as you have massive amounts of it) can produce any kind of performance. If one does not physically sense a certain modality, no algorithmic magic can take place. In other words, it is not just all about the software. The difference for having success is understanding that there is no cookie-cutter approach. And that realization means that one may have to roll up the sleeves and install new instrumentation.

Data Science Blog: What are coming trends? What do you think will be the main topic 2020 and 2021?

Predictive Maintenance is slowly evolving towards Prescriptive Maintenance. Here, one does not only seek to inform about an impending problem, but also what to do about it. Such an approach needs to integrate with the logistics element of an organization to find an optimal decision that trades off several objectives with regards to equipment uptime, process quality, repair shop loading, procurement lead time, maintainer availability, safety constraints, contractual obligations, etc.

Using data as an early warning system

In classic business intelligence, companies spent years collecting data in data warehouses and analyzing it in order to learn lessons from the past for the future. In its day that was a revolution, but since this mostly involved data from transactional systems, the benefit was limited. Only with the spread of the IoT and of sensors that deliver data continuously did it become possible to also read out the causes of errors or machine failures. And if these causes follow certain patterns, it makes sense to intervene before a problem occurs – that is the basic idea behind the concept of predictive analytics.

Great potential, so far largely untapped

Systems that detect risks and deviations as an early warning system hold enormous economic potential. In manufacturing, for example, machines can run smoothly for longer, and the IT infrastructure benefits as well. Predictive analytics, however, also changes corporate management from the ground up: when decisions are made solely on the basis of data instead of "gut feeling", the balance of power also shifts in favor of IT.

When decision-makers are supposed to (or want to) rely only on data and have to switch off their gut feeling, this leads to a kind of "cultural overload", as the study "Predictive Analytics 2018" by IDG Research Services shows. But most of them are aware that, in the long run, there is no way around the topic. At the time of the survey, which is already some time ago, 47 percent of the companies rated the relevance of predictive analytics as very high (18 percent) or high (29 percent). More than a third, however, were already convinced that predictive analytics would play a very important role by 2021 at the latest.

Bringing intelligence into the workflow

For managed cloud companies such as Adacor, predictive analytics is gaining importance in two respects. On the one hand, it can be used to improve processes with which topics such as the management of server log data or CPU utilization have already been automated and controlled proactively in the past.

For Private Cloud Services, the tailor-made extension of internal data centers, this means gradually converting parts of the live monitoring into predictive monitoring so as to react in advance to possible failures or impairments of servers and thereby prevent outages for the customer. In a simple example, a deep learning model assesses whether the disk fill levels on a given system will remain stable in the future or whether unstable behavior is to be expected. If stability is expected, a simpler prediction model can exploit this stability and forecast the fill levels. If unstable behavior is to be expected, the administrators know that they should keep a close eye on the system in question. In this way, comparatively simple predictive monitoring methods already ensure significantly higher system availability.
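A minimal sketch of what the forecasting step for a disk judged to be stable could look like – a deliberately simple stand-in for illustration, not the actual model described above:

    import numpy as np

    def forecast_days_until_full(fill_history_pct, horizon_days=30):
        """Fit a linear trend to daily fill levels (in percent) and estimate
        in how many days 100 % would be reached; None means no alarm within
        the horizon. A simple illustration, not a production model."""
        days = np.arange(len(fill_history_pct))
        slope, intercept = np.polyfit(days, fill_history_pct, deg=1)
        if slope <= 0:                      # flat or shrinking usage
            return None
        days_until_full = (100.0 - fill_history_pct[-1]) / slope
        return days_until_full if days_until_full <= horizon_days else None

    # Example: a volume filling by roughly 1.5 percentage points per day
    print(forecast_days_until_full([80 + 1.5 * d for d in range(10)]))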

Alongside highly individualized cloud solutions, standardized offerings that become "more intelligent" through predictive analytics tools will be in ever greater demand in the future. In practical terms, this means applying machine learning to new processes automatically wherever possible, making servers and the cloud even more powerful and secure.

Size matters

The study showed that it is above all large companies that provide resources for analytics projects. More than a third of them had already implemented analytics projects, more than half of those in the area of predictive analytics. Small and medium-sized companies, on the other hand, still had few comprehensive analytics systems. The results from predictive analytics projects also substantially influence management decisions: 94 percent of the companies that use predictive analytics steer processes based on these evaluations, above all in IT, in strategic management, and in production and manufacturing. The large companies are thus, for the most part, already taking advantage of the benefits. Among medium-sized and smaller companies there is still clear catching up to do; often even the technical prerequisites do not meet the requirements.

Almost every industry can benefit

This is surprising, because predictive analytics can be used as a kind of early warning system in many areas. It not only helps to minimize machine failures at manufacturing companies through predictive maintenance. It can also, for example, optimize the sales of retail companies. In medicine, methods are already in use that identify risk factors faster and improve the treatment of diseases overall. Insurers and financial institutions have always successfully calculated their products and premiums on the basis of probability analyses and projections. In fraud prevention, too, such methods and tools are increasingly being used to put a stop to criminals.

As you can see, it pays off for companies that collect data to adapt their strategy to the new technologies. Today's possibilities for analyzing and aggregating data and information are enormous. What matters is recognizing patterns in "big data" and interpreting them correctly – instead of making the same mistakes over and over again.

A "tidy archive" does not get companies anywhere

Companies that collect data merely to file and archive it neatly should adapt their strategy to the new possibilities of predictive analytics. Today's possibilities for evaluating data and condensing it into information, and thus generating knowledge, are enormous. Only those who recognize patterns in the vast realm of data and can interpret them correctly will be able to use predictive analytics to build an early warning system in their favor.

The Importance of Equipment Calibration in Maintaining Data Integrity

Image by Unsplash.

New data-collection technologies, like internet of things (IoT) sensors, enable businesses across industries to collect accurate, minute-to-minute data that they can use to improve business processes and drive decision-making.

However, as data becomes more central to business processes and as more and more data is collected, collection errors become both more possible and more costly.

Here is why equipment calibration is key in maintaining data integrity — in every industry.

Bad Calibration, Bad Data

If a sensor or piece of equipment is improperly calibrated, the data it records could be incomplete, inaccurate or totally incorrect. This misinformation could be detrimental for businesses that integrate data-driven policies and strategies, as they rely on complete, up-to-date and accurate data.

In fact, poor calibration costs manufacturers an average of $1.7 million every year, according to a 2008 survey.

Poorly calibrated sensors and testing equipment can also present risks for consumers — which is why some industries control calibration. In medicine, for example, the FDA regulates equipment calibration. Medical manufacturers must regularly inspect and test monitoring equipment. Effective measuring and test equipment are vital for producing batches of drugs that are useful and safe for patient health.

Bad calibration can even lead to machine failure in businesses that rely on predictive maintenance, which is the use of IoT sensors to collect machine data that can help analysts predict machine failure before it happens. If a business’ data scientists are working with bad information, they are less likely to realize a particular machine or robot is failing. As a result, they won’t intervene with a repair until failure has occurred — a costly error that can effectively shut down some workflows.

Worse, if a business has come to depend on predictive maintenance, it may be caught off-guard by that machine’s failure — even more than if the same company relied on traditional maintenance strategies, rather than predictive analytics.

How to Ensure Equipment Calibration

Fortunately, businesses can ensure the continued quality of their data-collecting processes by committing to regular equipment calibration.

While not all industries are subject to equipment calibration regulations, standards from other industries — like those established by the FDA — could provide useful best practice frameworks.

Businesses that don’t have a dedicated equipment maintenance team can choose an external calibration solution or hire or train a team to handle equipment calibration. Some businesses — such as manufacturers who work with numerous advanced or highly sensitive machines — might need multiple calibration teams or companies with specialized experience.

In general, businesses and manufacturers should establish a regular calibration and inspection schedule. Each time someone calibrates a piece of equipment, they should document that process. Documentation should include the date of the last calibration, the results of any tests conducted and the due date for the next calibration. This process can help establish a pattern of sensor error that equipment maintenance teams can use to better predict and respond to glitches.
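As a small illustration of such a documentation record in code – the field names and the 180-day interval here are assumptions, not an industry standard:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class CalibrationRecord:
        """One documented calibration of a piece of equipment."""
        equipment_id: str
        calibrated_on: date
        passed: bool
        notes: str = ""
        interval_days: int = 180    # assumed re-calibration interval

        @property
        def next_due(self) -> date:
            return self.calibrated_on + timedelta(days=self.interval_days)

        def is_overdue(self, today: date) -> bool:
            return today > self.next_due

    record = CalibrationRecord("flow-sensor-12", date(2020, 3, 1), passed=True)
    print(record.next_due, record.is_overdue(date(2020, 10, 1)))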

Even if a business only uses a certain kind of data from one sensor on a piece of testing equipment, workers should test every sensor on that machine. Errors from other sensors can influence properly calibrated sensors, even if no one is actively using the data they collect. This will become even truer as smart analysis technologies and IoT platforms become more common and algorithms handle larger portions of the data analysis process.

Calibrating Equipment for Accurate Data

Data is one of the most valuable resources available to modern businesses. However, a cost comes with relying too heavily on data and not properly calibrating the equipment that collects that data.

Equipment calibration is key to maintaining data integrity. If testing equipment and sensors aren’t properly calibrated, they can record incorrect data, which may lead to delays or lower product quality. Regular equipment calibration can help businesses ensure the data they receive is accurate and of the highest caliber.

Visual Question Answering with Keras – Part 2: Making Computers Intelligent to answer from images

Making Computers Intelligent to answer from images

This is my second blog post on Visual Question Answering. In the last blog post, I introduced VQA, the available datasets and some of the real-life applications of VQA. If you have not gone through it yet, I would highly recommend you do so. Click here for more details about it.

In this blog post, I will walk through the implementation of VQA in Keras.

You can download the dataset from here: https://visualqa.org/index.html. All my experiments were performed with VQA v2 and I have used a very tiny subset of the entire dataset, i.e. all samples for training and testing come from the validation set.

Table of contents:

  1. Preprocessing Data
  2. Process overview for VQA
  3. Data Preprocessing – Images
  4. Data Preprocessing through the spaCy library- Questions
  5. Model Architecture
  6. Defining model parameters
  7. Evaluating the model
  8. Final Thoughts
  9. References

NOTE: The purpose of this blog post is not to get state-of-the-art performance on VQA. The idea is rather to get familiar with the concept. All my experiments were performed with the validation set only.

Full code on my Github here.


1. Preprocessing Data:

If you have downloaded the dataset, the questions and answers (called annotations) are in JSON format. I have provided the code to extract the questions, annotations and other useful information in my Github repository. All extracted information is stored in .txt file format. After executing the code, the preprocessing directory will have the following structure.
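The repository script is not reproduced here, but a minimal sketch of the idea could look like this (assuming the standard VQA v2 validation files; the paths and output names are illustrative):

    import json

    # Standard VQA v2 validation files (adjust the paths to your download).
    with open("v2_OpenEnded_mscoco_val2014_questions.json") as f:
        questions = json.load(f)["questions"]
    with open("v2_mscoco_val2014_annotations.json") as f:
        annotations = json.load(f)["annotations"]

    answer_by_qid = {a["question_id"]: a["multiple_choice_answer"] for a in annotations}

    # Write one question, answer and image id per line as plain .txt files.
    with open("preprocessed/questions_val2014.txt", "w") as fq, \
         open("preprocessed/answers_val2014.txt", "w") as fa, \
         open("preprocessed/image_ids_val2014.txt", "w") as fi:
        for q in questions:
            fq.write(q["question"].replace("\n", " ") + "\n")
            fa.write(answer_by_qid[q["question_id"]] + "\n")
            fi.write(str(q["image_id"]) + "\n")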

All text files will be used for training.

 

2. Process overview for VQA:

As we discussed in the previous post, visual question answering is broken down into two broad parts, i.e. vision and text. I will present the neural network approach to this problem using a Convolutional Neural Network (for image data) and a Recurrent Neural Network (for text data).

If you are not familiar with RNNs (more precisely LSTMs), I would highly recommend you to go through Colah's blog and Andrej Karpathy's blog. The concepts discussed in these blogs are used extensively in my post.

The main idea is to get features for images from CNN and features for the text from RNN and finally combine them to generate the answer by passing them through some fully connected layers. The below figure shows the same idea.

 

I have used VGG-16 to extract the features from the image and LSTM layers to extract the features from the questions, and then combined them to get the answer.

3. Data Preprocessing – Images:

Images are simply one of the inputs to our model. But as you may already know, before feeding images to the model we need to convert them into a fixed-size vector.

So we need to convert every image into a fixed-size vector so that it can be fed to the neural network. For this, we will use the VGG-16 pretrained model. The VGG-16 architecture is trained on millions of images from the ImageNet dataset to classify an image into one of 1,000 classes. Here our task is not to classify the image but to get the bottleneck features from the second-to-last layer.

Hence after removing the softmax layer, we get a 4096-dimensional vector representation (bottleneck features) for each image.
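If you wanted to compute these features yourself rather than use the precomputed ones described below, a sketch with Keras could look like this (the preprocessing details are the standard VGG-16 ones):

    import numpy as np
    from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
    from tensorflow.keras.preprocessing import image
    from tensorflow.keras.models import Model

    # "fc2" is the 4096-dimensional layer just before the softmax classifier.
    vgg = VGG16(weights="imagenet")
    feature_extractor = Model(inputs=vgg.input, outputs=vgg.get_layer("fc2").output)

    def image_features(path):
        img = image.load_img(path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        return feature_extractor.predict(x)[0]      # shape: (4096,)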

Image Source: https://www.cs.toronto.edu/~frossard/post/vgg16/

 

For the VQA dataset, the images come from the COCO dataset and each image has a unique id associated with it. All these images were passed through the VGG-16 architecture and their vector representations are stored in a ".mat" file along with the ids. So we do not actually have to run the VGG-16 architecture ourselves; instead, we just look up the id of the image at hand in the file and get its 4096-dimensional vector representation.
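A hedged sketch of that lookup – the key name "feats" and the id-mapping file are assumptions based on the commonly used precomputed-feature release, so adapt them to the files you actually downloaded:

    import scipy.io

    # File and key names are assumptions; adapt them to your feature download.
    feats = scipy.io.loadmat("vgg_feats.mat")["feats"]      # shape: (4096, num_images)

    # Mapping "image_id feature_column", one pair per line (assumed format).
    with open("coco_vgg_IDMap.txt") as f:
        id_to_col = {int(i): int(c) for i, c in (line.split() for line in f)}

    def lookup_image_features(image_id):
        return feats[:, id_to_col[image_id]]                # 4096-dimensional vector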

4. Data Preprocessing through the spaCy library- Questions:

spaCy is a free, open-source library for advanced Natural Language Processing (NLP) in Python. As we have converted images into a fixed 4096-dimensional vector, we also need to convert the questions into a fixed-size vector representation. For installing spaCy, click here.

You might know that for training word embeddings in Keras we have a layer called the Embedding layer, which takes a word and embeds it into a higher-dimensional vector representation. But by using the spaCy library we do not have to train anything to get the vector representation in higher dimensions.

 

This model is trained on billions of tokens from a large corpus. So we just need to access the vector attribute of spaCy's tokens and we will get the vector representation for each word.

Applying this to the tokens of each question gives a fixed 300-dimensional representation for each word.
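A minimal sketch of this step – the model name and the maximum question length are assumptions for illustration:

    import numpy as np
    import spacy

    # The medium English model ships with 300-dimensional word vectors.
    nlp = spacy.load("en_core_web_md")

    def question_tensor(question, max_len=25):
        """Return a (max_len, 300) matrix of word vectors, zero-padded or truncated."""
        vectors = [token.vector for token in nlp(question)][:max_len]
        padded = np.zeros((max_len, 300), dtype="float32")
        if vectors:
            padded[:len(vectors)] = np.stack(vectors)
        return padded

    print(question_tensor("What is on the kitchen table?").shape)   # (25, 300)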

5. Model Architecture:

Since in our problem the input consists of two parts, i.e. an image vector and a question, we cannot use the Sequential API of the Keras library. For this reason, we use the Functional API, which allows us to create multiple models and finally merge them.
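As a rough illustration of such a two-branch model – the layer sizes and the number of answer classes below are assumptions, not necessarily the settings of the original experiments:

    from tensorflow.keras.layers import Input, Dense, Dropout, LSTM, concatenate
    from tensorflow.keras.models import Model

    image_input = Input(shape=(4096,))              # VGG-16 bottleneck features
    x_img = Dense(1024, activation="tanh")(image_input)

    question_input = Input(shape=(25, 300))         # padded spaCy word vectors
    x_q = LSTM(512, return_sequences=True)(question_input)
    x_q = LSTM(512)(x_q)
    x_q = Dense(1024, activation="tanh")(x_q)

    merged = concatenate([x_img, x_q])              # fuse the two branches
    x = Dropout(0.5)(merged)
    x = Dense(1024, activation="tanh")(x)
    output = Dense(1000, activation="softmax")(x)   # 1,000 most frequent answers

    model = Model(inputs=[image_input, question_input], outputs=output)
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
                  metrics=["accuracy"])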

The below picture shows the high-level architecture idea of submodules of neural network.

After concatenating the 2 different models the summary will look like the following.

The below plot helps us to visualize neural network architecture and to understand the two types of input:

 

6. Defining model parameters:

The hyperparameters that we are going to use for our model are defined as follows:
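The original settings are in the linked repository; the values below are only illustrative assumptions to show what such a parameter block typically contains:

    # Illustrative values only – not necessarily the original settings.
    num_hidden_lstm    = 512     # units per LSTM layer
    num_hidden_dense   = 1024    # units in the fully connected layers
    dropout_rate       = 0.5
    max_question_len   = 25      # words per question after padding
    num_answer_classes = 1000    # most frequent answers kept as labels
    batch_size         = 256
    num_epochs         = 5       # the post mentions training for 5 epochs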

If you know what these parameters mean, you can play around with them to get better results.

Time Taken: I used the GPU on https://colab.research.google.com and hence it took me approximately 2 hours to train the model for 5 epochs. However, if you train it on a PC without GPU, it could take more time depending on the configuration of your machine.

7. Evaluating the model:

Since I have used a very small dataset for these experiments, I am not able to get very good accuracy. The below code will calculate the accuracy of the model.
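The exact evaluation code is in the repository; a hedged sketch of the idea, where the argument names stand for the preprocessed validation arrays, could look like this:

    import numpy as np

    def vqa_accuracy(model, image_feats, question_feats, y_true, batch_size=256):
        """Fraction of questions whose top predicted answer matches the label.
        Argument names are placeholders for the preprocessed validation arrays."""
        probs = model.predict([image_feats, question_feats], batch_size=batch_size)
        return float(np.mean(probs.argmax(axis=1) == y_true.argmax(axis=1)))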

 

Since I have trained the model multiple times with different parameters, you will not get exactly the same accuracy as me. If you want, you can directly download the mode.h5 file from my Google Drive.

 

8. Final Thoughts:

One of the interesting things about VQA is that it is a completely new field. So there is absolutely no end to what you can do to solve this problem. Below are some tips for replicating the code.

  1. Start with a very small subset of data: When you start implementing I suggest you start with a very small amount of data. Because once you are ready with the whole setup then you can scale it any time.
  2. Understand the code: Understanding code line by line is very much helpful to match your theoretical knowledge. So for that, I suggest you can take very few samples(maybe 20 or less) and run a small chunk (2 to 3 lines) of code to get the functionality of each part.
  3. Be patient: One of the mistakes that I made while starting with this project was to do everything in one go. If you get an error while replicating the code, spend four to five days working hard on it. If even after that you are not able to solve it, I would suggest you resume after a break of one or two days.

VQA lies at the intersection of NLP and CV, and hopefully this project will give you a better, more practical understanding of many deep learning concepts.

If you want to improve the performance of the model below are few tips you can try:

  1. Use larger datasets
  2. Try Building more complex models like Attention, etc
  3. Try using other pre-trained word embeddings like GloVe
  4. Try using a different architecture 
  5. Do more hyperparameter tuning

The list is endless and it goes on.

In this blog post I have not provided the complete code; you can get it from my Github repository.

9. References:

  1. https://blog.floydhub.com/asking-questions-to-images-with-deep-learning/
  2. https://tryolabs.com/blog/2018/03/01/introduction-to-visual-question-answering/
  3. https://github.com/sominwadhwa/vqamd_floyd

Visual Question Answering with Keras – Part 1

This is Part I of II of the Article Series Visual Question Answering with Keras

Making Computers Intelligent to answer from images

If we look closer at the history of Artificial Intelligence (AI), Deep Learning has gained popularity in recent years and has achieved human-level performance in tasks such as Speech Recognition, Image Classification, Object Detection, Machine Translation and so on. As humans, however, not only we but even a five-year-old child can normally perform these tasks without much difficulty. Yet the development of systems with these capabilities has always been considered an ambitious goal for researchers as well as for developers.

In this series of blog posts, I will cover an introduction to something called VQA (Visual Question Answering), its available datasets, the Neural Network approach for VQA and its implementation in Keras and the applications of this challenging problem in real life. 

Table of Contents:

1 Introduction

2 What is exactly Visual Question Answering?

3 Prerequisites

4 Datasets available for VQA

4.1 DAQUAR Dataset

4.2 CLEVR Dataset

4.3 FigureQA Dataset

4.4 VQA Dataset

5 Real-life applications of VQA

6 Conclusion

 

1. Introduction:

Let's say you are given the picture below along with a question. Can you answer it?

I am confident you will all say it is a kitchen without much difficulty, which is also the right answer. Even a five-year-old child who has just started to learn things might answer this question correctly.

Alright, but can you write a computer program for this type of task – one that takes an image and a question about the image as input and gives us the answer as output?

Before the development of deep neural networks, this problem was considered one of the most difficult, inconceivable and challenging problems for the AI research community. However, due to recent advances in Deep Learning, systems are capable of answering these questions with promising results if we have the required dataset.

Now I hope you have at least some intuition for the problem that we are going to discuss in this series of blog posts. Let's try to formalize the problem in the section below.

2. What is exactly Visual Question Answering?:

We can define it as follows: "Visual Question Answering (VQA) is a system that takes an image and a natural language question about the image as input and generates a natural language answer as output."

VQA is a research area that requires an understanding of vision (Computer Vision) as well as text (NLP). The main beauty of VQA is that the reasoning is performed in the context of the image. So if we have an image with a corresponding question, then the system must be able to understand the image well in order to generate an appropriate answer. For example, if the question asks for the number of persons, then the system must be able to detect the persons' faces; to answer the color of a horse, the system needs to detect the objects in the image. Many of these common problems, such as face detection, object detection and binary object classification (yes or no), have been solved in the field of Computer Vision with good results.

To summarize, a good VQA system must be able to address the typical problems of both CV and NLP.

To get a better feel for VQA you can try the online VQA demo by CloudCV. Just go to this link, upload a picture of your choice and ask a question related to it; the system will generate an answer.
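
Conceptually, a baseline VQA model combines an image encoder and a question encoder and classifies over a fixed set of answers. The following minimal Keras sketch illustrates that idea only; all layer sizes, vocabulary sizes and the use of pre-extracted VGG16-style features are illustrative assumptions, not the exact configuration used later in this series.

```python
# A minimal Keras sketch of the typical baseline architecture: pre-extracted
# CNN image features plus an LSTM question encoder, fused and classified over
# a fixed answer vocabulary. All sizes below are illustrative assumptions.
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Concatenate
from tensorflow.keras.models import Model

NUM_ANSWERS = 1000      # assumed size of the answer vocabulary
VOCAB_SIZE = 10000      # assumed size of the question vocabulary
MAX_QUESTION_LEN = 26   # assumed maximum question length (in tokens)

# Image branch: pre-extracted CNN features (e.g. 4096-d from a VGG16-style net)
image_input = Input(shape=(4096,), name="image_features")
image_dense = Dense(512, activation="tanh")(image_input)

# Question branch: token ids -> word embeddings -> LSTM summary vector
question_input = Input(shape=(MAX_QUESTION_LEN,), name="question_tokens")
question_embedding = Embedding(VOCAB_SIZE, 300)(question_input)
question_lstm = LSTM(512)(question_embedding)

# Fusion and classification over the most frequent answers
merged = Concatenate()([image_dense, question_lstm])
hidden = Dense(1024, activation="tanh")(merged)
output = Dense(NUM_ANSWERS, activation="softmax")(hidden)

model = Model(inputs=[image_input, question_input], outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training such a model then amounts to feeding it pairs of image features and tokenized questions, with one-hot encoded answers as targets.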

 

  3. Prerequisites:

In the next post, I will walk you through the code for this problem using Keras. So I assume that you are familiar with:

  1. Fundamental concepts of Machine Learning
  2. Multi-Layered Perceptron
  3. Convolutional Neural Network
  4. Recurrent Neural Network (especially LSTM)
  5. Gradient Descent and Backpropagation
  6. Transfer Learning
  7. Hyperparameter Optimization
  8. Python and Keras syntax

  4. Datasets available for VQA:

As you know, for problems related to CV or NLP, the availability of a dataset is key to solving the problem. For a complex problem like VQA, the dataset must cover all possibilities of questions and answers in real-world scenarios. In this section, I will cover some of the datasets available for VQA.

4.1 DAQUAR Dataset:

The DAQUAR dataset was the first dataset for VQA and contains only indoor scenes. The human baseline accuracy on it is 50.2%. It contains images from the NYU_Depth dataset.

Example of DAQUAR dataset

The main disadvantage of DAQUAR is that the dataset is too small to capture all possible indoor scenes.

4.2 CLEVR Dataset:

The CLEVR Dataset from Stanford contains questions about objects of different types, colors, shapes, sizes, and materials.

It has

  • A training set of 70,000 images and 699,989 questions
  • A validation set of 15,000 images and 149,991 questions
  • A test set of 15,000 images and 14,988 questions

Image Source: https://cs.stanford.edu/people/jcjohns/clevr/?source=post_page

 

4.3 FigureQA Dataset:

The FigureQA Dataset contains questions about bar graphs, line plots, and pie charts. It has 1,327,368 questions for 100,000 images in the training set.

4.4 VQA Dataset:

Compared to all the datasets that we have seen so far, the VQA dataset is relatively large. It contains open-ended as well as multiple-choice questions. The VQA v2 dataset contains:

  • 82,783 training images from COCO (common objects in context) dataset
  • 40,504 validation images and 81,434 test images
  • 443,757 question-answer pairs for training images
  • 214,354 question-answer pairs for validation images.

As you might expect, this dataset is huge; the training images alone take up 12.6 GB. In the next post I use this dataset, but only a very small subset of it.

This dataset also contains abstract cartoon images. Each image has 3 questions and each question has 10 ground-truth answers.

  5. Real-life applications of VQA:

There are many applications of VQA. One of the most famous is to help visually impaired and blind people. In 2016, Microsoft presented the “Seeing AI” app, which describes the surrounding environment to visually impaired people. You can watch this video for a prototype of the Seeing AI app.

Another application could be on social media or e-commerce sites. VQA can also be used for educational purposes.

  6. Conclusion:

I hope this explanation will give you a good idea of Visual Question Answering. In the next blog post, I will walk you through the code in Keras.

If you like my explanations, do provide some feedback, comments, etc. and stay tuned for the next post.

Attribution Models in Marketing

Attribution Models

A Business and Statistical Case

INTRODUCTION

A desire to understand the causal effect of campaigns on KPIs

Advertising and marketing costs represent a huge and ever-growing part of companies' budgets. Studies have found that this share can be as high as 10% and increases with the size of the company (CMO study by the American Marketing Association and Duke University, 2017). Measuring precisely the impact of a specific marketing campaign on a company's sales is a critical step towards an efficient allocation of this budget. Would the return be higher for a euro spent on a Facebook ad, or should we rather spend it on a TV spot? How much should I spend on Twitter ads given the volume of sales this channel is responsible for?

Attribution Models have lately received great attention in Marketing departments as a way to answer these questions. The transition from offline to online marketing methods has indeed permitted the collection of rich individual-level data throughout the whole customer journey, and allowed for the development of user-centric attribution models. In short, Attribution Models use the information provided by tracking technologies such as Google Analytics or Webtrekk to understand customer journeys, from the first click on a Facebook ad to the final purchase, and to adequately weight the different marketing campaigns encountered according to their responsibility in the final conversion.

Issues on Causal Effects

A key question then becomes: how can we declare that a channel is responsible for a purchase? In other words, how can we isolate the causal effect, or incremental value, of a campaign?

          1. A/B-Tests

One method to estimate the pure impact of a campaign is to design randomized experiments, wherein control and treated groups are compared. A/B tests belong to this broad category of randomized methods. Provided the groups are a priori similar in every aspect except for the treatment received, all subsequent differences may be attributed solely to the treatment. This method is typically used in medical studies to assess the effect of a drug on a disease.

The main practical issues regarding randomized methods are:

  • Ensuring that control and treated groups are really similar before treatment. Usually a random assignment is used and balance is checked, i.e. one verifies that the groups are similar on a relevant set of observable variables;
  • Potential spillover effects, i.e. the possibility that the treatment has an impact on the non-treated group as well (violating the Stable Unit Treatment Value Assumption, or SUTVA, in Rubin’s framework);
  • The costs of conducting such an experiment, and especially the costs linked to the deliberate assignment of individuals to a group with potentially lower results;
  • The number of such experiments to design if multiple treatments have to be measured;
  • Difficulties in taking into account the interaction effects between campaigns or the effect of spending levels. Indeed, A/B tests are usually conducted by temporarily cutting off one campaign entirely and measuring the subsequent impact on KPIs compared to the situation where this campaign is maintained;
  • The dynamic reproduction of experiments if we assume that treatment effects may change over time.

In the marketing context, multiple campaigns must be tested in a dynamic way, and the treatment effect is likely to be heterogeneous among customers, which creates practical issues in launching A/B tests to approximate the incremental value of all campaigns. However, sites with a lot of traffic and conversions can benefit greatly from A/B testing, as it provides a scientific and straightforward way to approximate a causal impact. Leading companies such as Uber, Netflix or Airbnb rely on internal tools for A/B testing automation, which allow them to test basically any decision they are about to make.
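
To make the mechanics concrete, here is a minimal sketch of how the outcome of such a test on conversion rates could be evaluated with a pooled two-proportion z-test; the counts below are purely illustrative assumptions.

```python
# A minimal sketch of evaluating an A/B test on conversion rates with a pooled
# two-proportion z-test; the counts below are illustrative assumptions.
from math import sqrt
from scipy.stats import norm

conv_treated, n_treated = 530, 10_000    # group exposed to the campaign
conv_control, n_control = 480, 10_000    # control group

p_treated = conv_treated / n_treated
p_control = conv_control / n_control
p_pooled = (conv_treated + conv_control) / (n_treated + n_control)

se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_treated + 1 / n_control))
z = (p_treated - p_control) / se
p_value = 2 * (1 - norm.cdf(abs(z)))     # two-sided test

print(f"incremental conversion rate: {p_treated - p_control:.4f}")
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```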

References:

Books:

Experiment!: Website conversion rate optimization with A/B and multivariate testing, Colin McFarland, New Riders, 2013.

A/B testing: the most powerful way to turn clicks into customers. Dan Siroker, Pete Koomen; Wiley, 2013.

Blogs:

https://eng.uber.com/xp

https://medium.com/airbnb-engineering/growing-our-host-community-with-online-marketing-9b2302299324

Study:

https://cmosurvey.org/wp-content/uploads/sites/15/2018/08/The_CMO_Survey-Results_by_Firm_and_Industry_Characteristics-Aug-2018.pdf

        2. Attribution models

Attribution Models do not require an experimental setting. They take existing data into account and derive insights from the variability of customer journeys. One key difficulty is then to differentiate correlation from causality in the links observed between campaign exposure and purchases. Indeed, selection effects may bias results, as exposure to campaigns usually depends on user characteristics and thus may not be independent from the customer’s baseline conversion probability. For example, customers purchasing via a discount price comparison website may be intrinsically different from customers buying via a Facebook ad, and this a priori difference may alone explain post-exposure differences in purchasing behaviours. This intrinsic weakness must be kept in mind when interpreting the results of Attribution Models.

                          2.1 General Issues

The main issues regarding the implementation of Attribution Models are linked to:

  • Causality and fallacious reasoning, as most models do not take into account the aforementioned selection biases.
  • Their difficult evaluation. Indeed, in almost all attribution models (except for those based on classification, where the accuracy of the model can be computed), the additional value brought by the use of a given attribution model cannot be evaluated using existing historical data. This additional value can only be approximated by analysing how the implementation of the model’s conclusions has impacted a given KPI.
  • Tracking issues, leading to an incorrect reconstruction of customer journeys:
    • Cross-device journeys: the cross-device issue arises from the use of different devices throughout the customer journey, making it difficult to link data points. For example, if a customer searches for a product on his computer but later orders it on his mobile, the Attribution Model would mistakenly consider it an order without prior campaign exposure. Though difficult to measure precisely, the proportion of cross-device orders can approach 20-30%.
    • Cookie destruction makes it difficult to track the customer throughout the whole journey. Both regulations and consumers’ rising concerns about data privacy mitigate the reliability and use of cookies. First, from 2002 on, the EU has enacted directives concerning privacy regulation and the extended use of cookies for commercial targeting purposes, most notably the ‘Privacy and Electronic Communications Directive’ (2002/58/EC), which have highly impacted marketing strategies. Research found that the adoption of this ‘Privacy Directive’ led to a 64% decrease in advertising effectiveness compared to the rest of the world (Goldfarb and Tucker, 2011), with a stronger effect for generalist sites (e.g. Yahoo) than for specialized sites. Second, users have grown more and more conscious of data privacy and have adopted protective measures, such as the automatic destruction of cookies after a session ends or simply giving away less personal information (Goldfarb and Tucker, 2012). Valuable user information may be lost, though evolutions in tracking technologies have made it possible to maintain tracking by other means. This issue may be particularly important in countries highly concerned with data privacy, such as Germany.
    • Offline/Online bridge: an Attribution Model should take into account all campaigns to draw valuable insights. However, exposure to offline campaigns (TV, newspapers) is difficult to track at the user level. One idea to tackle this issue would be to estimate, through A/B testing, the proportion of conversions driven by offline campaigns and deduct this proportion from the credit assigned to the online campaigns accounted for in the Attribution Model.
    • Touchpoint information available: clicks are easy to follow but cannot capture the influence of purely visual campaigns such as display ads or video.

                          2.2 Today’s main practices

Two main families of Attribution Models exist:

  • Rule-Based Attribution Models, which have been the standard for the last decade but from which companies are gradually switching away.

Attribution depends on the individual journeys that have led to a purchase and is solely based on the rank of the campaign within the journey. Some models focus on a single touch point (First Click, Last Click) while others account for multi-touch journeys (Bathtub, Linear). It can be calculated at the customer level and thus doesn’t require large amounts of data points. We can distinguish two sub-groups of rule-based Attribution Models:

  • One-Touch Attribution Models attribute all credit to a single touch point. The First-Click model attributes all credit for a conversion to the first touch point of the customer journey; the Last-Click model attributes all credit to the last campaign.
  • Multi-Touch Rule-Based Attribution Models incorporate information on the whole customer journey and are thus an improvement compared to one-touch models. To this family belong the Linear model, where credit is split equally between all channels; the Bathtub model, where 40% of the credit is given to each of the first and last clicks and the remaining 20% is distributed equally between the middle channels; and time-decay models, where the credit assigned to a click diminishes as the time between the click and the order increases. These rules are illustrated in the short sketch that follows the list of weaknesses below.

The main advantages of rule-based models are their simplicity and cost-effectiveness. The main problems are:

– They are known a priori and can thus be exploited by competitors’ optimization strategies;
– They do not take into account aggregate intelligence on customer journeys or actual incremental values;
– They tend to bias attribution (depending on the model chosen) towards channels that are over-represented at the beginning or the end of the funnel, based on theoretical assumptions that have no observational backing.
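
As an illustration of the rules described above, here is a minimal Python sketch of one-touch and multi-touch rule-based attribution for a single converting journey; the journey format and the exact weights are illustrative assumptions rather than a reference implementation.

```python
# A minimal sketch of rule-based attribution for a single converting journey:
# first click, last click, linear, and a 40/20/40 "bathtub" split.
from collections import defaultdict

def attribute(journey, rule="last_click"):
    """Return {channel: credit} for one converting journey (credits sum to 1)."""
    credit = defaultdict(float)
    n = len(journey)
    if rule == "first_click":
        credit[journey[0]] += 1.0
    elif rule == "last_click":
        credit[journey[-1]] += 1.0
    elif rule == "linear":
        for channel in journey:
            credit[channel] += 1.0 / n
    elif rule == "bathtub":  # 40% first, 40% last, 20% shared by the middle
        if n == 1:
            credit[journey[0]] += 1.0
        elif n == 2:
            credit[journey[0]] += 0.5
            credit[journey[-1]] += 0.5
        else:
            credit[journey[0]] += 0.4
            credit[journey[-1]] += 0.4
            for channel in journey[1:-1]:
                credit[channel] += 0.2 / (n - 2)
    return dict(credit)

# Example journey: display ad, then paid search, then newsletter click
print(attribute(["display", "search", "newsletter"], rule="bathtub"))
# {'display': 0.4, 'newsletter': 0.4, 'search': 0.2}
```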

  • Data-Driven Attribution Models

These models address the weaknesses of rule-based models and make a relevant use of the available data. Being data-driven, the following attribution models cannot be computed from single-user-level data. On the contrary, values are calculated through data aggregation and thus require a certain volume of customer journey information.

References:

https://dspace.mit.edu/handle/1721.1/64920

 

        3. Data-Driven Attribution Models in practice

                          3.1 Issues

Several issues arise in the computation of campaigns' individual impact on a given KPI within a data-driven model.

  • Selection biases: exposure to certain types of advertisement is usually highly correlated with non-observable variables which are in turn correlated with consumption practices. Differences in the behaviour of users exposed to different campaigns may thus be driven by core differences in conversion probabilities between groups rather than by the campaign effect.
  • Complementarity: it may be that campaigns A and B only have an effect when combined, so that measuring their individual impact would lead to misleading conclusions. The model could then try to assess the effect of combinations of campaigns on top of the effect of individual campaigns. As the number of possible non-ordered combinations of k campaigns is $2^k$, it becomes clear that including all possible combinations would quickly become time-consuming.
  • Order-sensitivity: the effect of a campaign A may depend on where it appears in the customer journey, meaning the rank of a campaign, and not merely its presence, could be accounted for in the model.
  • Relative order-sensitivity: it may be that campaigns A and B only have an effect when one is exposed to campaign A before campaign B. If so, it could be useful to assess the effect of given ordered combinations of campaigns as well, and this for all campaigns, leading to tremendous numbers of possible combinations.
  • All of the previous phenomena may be present at once, increasing even further the potential complexity of a comprehensive Attribution Model. The number of all possible ordered combinations of k campaigns is indeed:

$$\sum_{s=1}^{k} \binom{k}{s}\, s! \;=\; \sum_{s=1}^{k} \frac{k!}{(k-s)!}$$

                          3.2 Main models

                                  A) Logistic Regression and Classification models

If non-converting journeys are available, the Attribution Model can be framed as a simple classification problem. Campaign types, campaign combinations and the volume of each campaign type can be included in the model along with customer or time variables. As we are interested in inference (on campaign effects) rather than in prediction, a parametric model such as Logistic Regression should be used. Non-parametric models such as Random Forests or Neural Networks can also be used, though the interpretation of campaign values is then more difficult to derive from the model results.

Common pitfalls are the usual issue of spurious correlations on the one hand, and the correct interpretation of coefficients in business terms on the other.

An advantage is the possibility to evaluate the relevance of the model with common model validation methods that assess its predictive power (validation set, AUC, pseudo R-squared).
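
As an illustration of this classification framing, here is a minimal scikit-learn sketch; the journey table, its column names and the data are illustrative assumptions.

```python
# A minimal sketch of attribution framed as a classification problem:
# one row per journey, one count column per campaign type, and a
# converted / not-converted label. Data and column names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

journeys = pd.DataFrame({
    "display":    [1, 0, 2, 0, 1, 0],
    "search":     [1, 1, 0, 0, 2, 1],
    "newsletter": [0, 1, 0, 1, 0, 0],
    "converted":  [1, 1, 0, 0, 1, 0],
})

X = journeys[["display", "search", "newsletter"]]
y = journeys["converted"]

model = LogisticRegression().fit(X, y)

# Coefficients read as each campaign's contribution to the log-odds of
# conversion; a positive value indicates a positive association, not proof
# of a causal effect (see the selection biases discussed above).
print(dict(zip(X.columns, model.coef_[0])))
```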

                                  B) Shapley Value

Theory

The Shapley Value is based on a Game Theory framework and is named after its creator, the Nobel Prize laureate Lloyd Shapley. Initially meant to calculate the marginal contribution of players in cooperative games, the model has received much attention in research and industry and has lately been applied to marketing problems. This model is typically used by Google AdWords and other ad-bidding vendors. Campaigns or marketing channels are seen in this model as complementary players looking to increase a given KPI.
Unlike Logistic Regression, it is a non-parametric model. Unlike Markov Chains, all results are built from existing journeys, not simulated ones.

Channels are considered to enter the game sequentially under a certain joining order. The Shapley value of channel i is the weighted sum of the marginal values that channel i adds to all possible coalitions that do not contain channel i.
In other words, the main logic is to analyse the difference in gains when a channel i is added after a coalition Ck of k channels, k<=n. We then sum all these marginal contributions over all possible combinations Ck of campaigns excluding i, with k<=n-1.

Subsets framework

A first and most usual way to compute the Shapley Value is to consider that when a channel enters a coalition, its additional value is the same regardless of the order in which the previous channels have appeared. In other words, journeys (A>B>C) and (B>A>C) trigger the same gains.
The Shapley value is then computed as the gain associated with adding a channel i to a subset of channels, weighted by the number of (ordered) sequences that the (unordered) subset represents, summed up over all possible subsets of the total set of campaigns in which channel i is not present.
The Shapley value of channel $j$ is then:

$$\phi_j(v) \;=\; \sum_{S \subseteq N \setminus \{j\}} \frac{|S|!\,\bigl(n-|S|-1\bigr)!}{n!}\;\bigl(v(S \cup \{j\}) - v(S)\bigr)$$

where $N$ is the set of all $n$ campaigns, $|S|$ is the number of campaigns in a coalition $S$ and the sum extends over all subsets $S$ that do not contain channel $j$. $v(S)$ is the value of the coalition $S$ and $v(S \cup \{j\})$ the value of the coalition formed by adding channel $j$ to the coalition $S$. $v(S \cup \{j\}) - v(S)$ is thus the marginal contribution of channel $j$ to the coalition $S$.

The formula can be rewritten and understood as the average of the marginal contributions of channel $j$ over all $n!$ possible orderings of the campaigns:

$$\phi_j(v) \;=\; \frac{1}{n!}\sum_{\sigma} \bigl(v(P_j^{\sigma} \cup \{j\}) - v(P_j^{\sigma})\bigr)$$

where $P_j^{\sigma}$ denotes the set of channels that precede $j$ in the ordering $\sigma$.
This method is convenient when data on the gains of all unordered k-subsets of the n campaigns are available. It is also preferable if the order of campaigns prior to the introduction of a campaign is thought to have no impact.
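
To make the computation concrete, here is a minimal Python sketch of the unordered Shapley value computed from known coalition gains; the gains dictionary is an illustrative assumption, not real campaign data.

```python
# A minimal sketch of the unordered Shapley value computed from known
# coalition gains; the gains below are illustrative, not real campaign data.
from itertools import combinations
from math import factorial

def shapley_values(channels, v):
    """v maps a frozenset of channels to its gain; v of the empty set is 0."""
    n = len(channels)
    phi = {}
    for j in channels:
        others = [c for c in channels if c != j]
        value = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                value += weight * (v.get(S | {j}, 0) - v.get(S, 0))
        phi[j] = value
    return phi

# Illustrative gains (e.g. conversions) for every coalition of A, B and C
gains = {
    frozenset(): 0,
    frozenset({"A"}): 4, frozenset({"B"}): 6, frozenset({"C"}): 10,
    frozenset({"A", "B"}): 14, frozenset({"A", "C"}): 18,
    frozenset({"B", "C"}): 20, frozenset({"A", "B", "C"}): 30,
}
print(shapley_values(["A", "B", "C"], gains))
# ≈ {'A': 7.33, 'B': 9.33, 'C': 13.33} -- efficiency: the values sum to 30
```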

Ordered sequences

Let us define $v(A>B)$ as the value of the sequence A then B. What if we let $v(A>B)$ be different from $v(B>A)$?
This time we need to sum over all possible permutations of the $s$ campaigns present before channel $j$ and of the $n-(s+1)$ campaigns after it. Doing so, we sum over all possible orderings (i.e. all permutations of the n campaigns of the grand coalition containing all campaigns) and we can drop the permutation coefficient $s!\,(n-s-1)!$.

This method is convenient when the order of channels prior to and after the introduction of another channel is assumed to have an impact. It then requires data for all possible permutations of all k-subsets of the n campaigns, and not only for the (unordered) k-subsets, k<=n. In other words, one must know the gains of A, B, C, A>B, B>A, etc. to compute the Shapley Value.
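
A minimal Python sketch of this ordered variant, which averages order-dependent marginal contributions over all orderings, could look as follows; the sequence gains are illustrative assumptions.

```python
# A minimal sketch of the ordered variant: order-dependent marginal
# contributions averaged over all possible orderings of the channels.
# The sequence gains are illustrative assumptions; the empty sequence is 0.
from itertools import permutations

def ordered_shapley(channels, v):
    """v maps an ordered tuple of channels to its gain."""
    phi = {c: 0.0 for c in channels}
    n_orderings = 0
    for order in permutations(channels):
        n_orderings += 1
        for position, channel in enumerate(order):
            before = order[:position]
            phi[channel] += v.get(order[:position + 1], 0) - v.get(before, 0)
    return {c: total / n_orderings for c, total in phi.items()}

# Gains where the order matters: A followed by B is worth more than B then A
gains = {
    ("A",): 4, ("B",): 4,
    ("A", "B"): 14, ("B", "A"): 10,
}
print(ordered_shapley(["A", "B"], gains))
# {'A': 5.0, 'B': 7.0} -- together they sum to the average grand-coalition gain
```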

Differences between the two approaches

We simulate an ordered case where the value of each ordered sequence of length k, for k<=3, is known. We compare it to the usual Shapley value calculated from the known gains of the unordered subsets of campaigns. So as to compare relevant values, we have built the gains matrix so that the gain of a subset {A, B}, i.e. $v(\{B,A\})$, is the average of the gains of the ordered sequences made up of A and B (assuming the number of journeys where A>B equals the number of journeys where B>A, we have $v(\{B,A\}) = 0.5\,(v(A>B) + v(B>A))$). We let the value of the grand coalition differ depending on the order of campaigns, keeping the constraint that it averages to the value used for the unordered case.

Note: mvA refers to the marginal value of A in a given sequence.
With traditional unordered coalitions:

With ordered sequences used to compute the marginal values:

 

We can see that the two approaches yield very different results. In the unordered case, the Shapley Value of campaign C is the highest, reaching 20, while A and B have the same Shapley Value, mvA = mvB = 15. In the ordered case, campaign A has the highest Shapley Value and all campaigns have different Shapley Values.

This example illustrates the inherent differences between the set and the sequence approach to Shapley values. Real-life data is more likely to resemble the ordered case, as conversion probabilities may, for any given set of campaigns, be influenced by the order in which the campaigns appear.

Advantages

The Shapley value has become popular in allocation problems in cooperative games because it is the unique allocation which satisfies the following axioms:

  • Efficiency: the Shapley Values of all channels add up to the total gains (here, orders) observed.
  • Symmetry: if channels A and B bring the same contribution to any coalition of campaigns, then their Shapley Values are the same.
  • Null player: if a channel brings no additional gain to any coalition, then its Shapley Value is zero.
  • Strong monotonicity: the Shapley Value of a player increases weakly if all of its marginal contributions increase weakly.

These properties make the Shapley Value close to what we intuitively define as a fair attribution.

Issues

  • The Shapley Value is based on combinatorics, and the number of possible coalitions and ordered sequences becomes huge as the number of campaigns increases.
  • If unordered, the Shapley Value assumes the contribution of campaign A is the same whether it is followed by campaign B or by campaign C.
  • If ordered, the number of combinations for which data must be available and sufficient is huge.
  • Channels that are rarely present, or that only appear in long journeys, will be played down.
  • Generally, gains are supposed to grow with the number of players in the game. However, it is plausible that in the marketing context a journey with a high number of channels will not necessarily bring more orders than a journey with fewer channels involved.

References:

R package: GameTheoryAllocation

Article:
Zhao et al., 2018, “Shapley Value Methods for Attribution Modeling in Online Advertising”
https://link.springer.com/content/pdf/10.1007/s13278-017-0480-z.pdf
Courses: https://www.lamsade.dauphine.fr/~airiau/Teaching/CoopGames/2011/coopgames-7%5b8up%5d.pdf
Blogs: https://towardsdatascience.com/one-feature-attribution-method-to-supposedly-rule-them-all-shapley-values-f3e04534983d

                                  C) Markov Chains

Markov Chains are used to model random processes, i.e. events that occur in a sequential manner and in such a way that the probability of moving to a certain state only depends on the past steps. The number of previous steps taken into account to model the transition probability is called the memory parameter of the sequence; in practice it is usually kept between 0 and 4. A Markov Chain process is thus defined entirely by its transition matrix and its initial vector (i.e. the starting point of the process).

Markov Chains are applied in many scientific fields. Typically, they are used in weather forecasting, with the sequence of sunny and rainy days following a Markov process with memory parameter 0, so that for each given day the probability that the next day will be rainy or sunny only depends on the weather of the current day. Other applications can be found in sociology to understand the dynamics of intergenerational reproduction of social classes. For more mathematical as well as applied illustrations, I recommend reading this course.

In the marketing context, Markov Chains are an interesting way to model the conversion funnel. To go from the Markov model to the attribution logic, we calculate the Removal Effect of each channel, i.e. the difference in conversions that occurs if the channel is removed. Please read on for an introduction to the methodology.

The first step in a Markov Chain Attribution Model is to build the transition matrix that captures the transition probabilities between campaigns across existing customer journeys. This matrix is to be read as a “from state A to state B” table, from left to right. A first difficulty is finding the right memory parameter to use. A large memory parameter would allow interaction effects within the conversion funnel to be taken into account better, but would lead to increased computational time, a non-readable transition matrix, and greater sensitivity to noisy data. Please note that this transition matrix provides useful information on the conversion funnel and on the relationships between campaigns, and can be used as an analytical tool in its own right. I suggest the clear and easy-to-follow R code which can be found here or here.

Here is an illustration of a Markov Chain with a memory parameter of 0: the probability of going to a certain campaign B in the next step only depends on the campaign we are currently at:

The associated transition matrix is then (with null probabilities left blank):

The second step is to compute the actual responsibility of a channel in total conversions. As mentioned above, the main idea is to calculate the Removal Effect of each channel, i.e. the change in the number of conversions when the channel is entirely removed: all customer journeys which went through this channel are then counted as unsuccessful. This calculation is done by applying the transition matrix, with and without the removed channel, to an initial vector that contains the number of desired simulations.

Building on our current example, we can then set up an initial vector with the desired number of simulations, e.g. 10,000:

 

It is possible at this stage to add a constraint on the maximum number of times the matrix is applied to the data, i.e. on the maximum number of campaigns a simulated journey is allowed to have.
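
To illustrate the mechanics end to end, here is a minimal Python sketch of a first-order Markov attribution model with removal effects; the journeys and channel names are illustrative assumptions, and a production implementation would rather rely on a dedicated package such as the R package ChannelAttribution referenced below.

```python
# A minimal sketch of a first-order Markov attribution model with removal
# effects. Journeys and channel names are illustrative; a production setup
# would rather use a dedicated package such as ChannelAttribution (R).
import numpy as np

journeys = [                         # (ordered channels, converted?)
    (["display", "search"], True),
    (["search"], True),
    (["display"], False),
    (["newsletter", "search"], True),
    (["newsletter"], False),
]

def conversion_probability(journeys, removed=None):
    """Probability of reaching 'conversion' from 'start', with an optional
    channel removed (all its visits are rerouted to the absorbing 'null')."""
    counts = {}
    for path, converted in journeys:
        states_path = ["start"] + ["null" if c == removed else c for c in path]
        states_path.append("conversion" if converted else "null")
        for a, b in zip(states_path, states_path[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    states = sorted({s for pair in counts for s in pair} | {"conversion", "null"})
    idx = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (a, b), c in counts.items():
        P[idx[a], idx[b]] = c
    P[idx["null"], :] = 0                        # nothing leaves the 'null' state
    row_sums = P.sum(axis=1, keepdims=True)
    P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)
    P[idx["conversion"], idx["conversion"]] = 1.0    # absorbing states
    P[idx["null"], idx["null"]] = 1.0
    v = np.zeros(len(states))
    v[idx["start"]] = 1.0
    v = v @ np.linalg.matrix_power(P, 100)       # long-run absorption probabilities
    return v[idx["conversion"]]

base = conversion_probability(journeys)
for channel in ["display", "search", "newsletter"]:
    removal_effect = (base - conversion_probability(journeys, removed=channel)) / base
    print(channel, round(removal_effect, 3))
```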

Advantages

  • The dynamic journey is taken into account, as well as the transition between two states. The funnel is not assumed to be linear.
  • It is possible to build a conversion graph that maps the customer journey and provides valuable insights.
  • It is possible to partly evaluate the accuracy of an Attribution Model based on Markov Chains, for example by checking how well the transition matrix predicts the next step, i.e. by analysing the number of correct predictions at any given step over all sequences.

Disadvantages

  • It can be somewhat difficult to set the memory parameter. Complementarity effects between channels are not well captured if the memory is low, but a parameter that is too high leads to over-sensitivity to noise in the data and is difficult to implement if customer journeys tend to contain fewer campaigns than this memory parameter.
  • Long journeys involving many different channels will be overweighted, as they count many times in the Removal Effects: if there are n-1 channels in a customer journey, this journey will be counted as a failure for each of the n-1 channel removal effects. If volume effects (i.e. the impact of the overall number of channels in a journey, regardless of their type) are important, then results may be biased.

References:

R package: ChannelAttribution

Git:

https://github.com/MatCyt/Markov-Chain/blob/master/README.md

Course:

https://www.ssc.wisc.edu/~jmontgom/markovchains.pdf

Article:

“Mapping the Customer Journey: A Graph-Based Framework for Online Attribution Modeling”; Anderl, Eva and Becker, Ingo and Wangenheim, Florian V. and Schumann, Jan Hendrik, 2014. Available at SSRN: https://ssrn.com/abstract=2343077 or http://dx.doi.org/10.2139/ssrn.2343077

“Media Exposure through the Funnel: A Model of Multi-Stage Attribution”, Abhishek & al, 2012

“Multichannel Marketing Attribution Using Markov Chains”, Kakalejčík, L., Bucko, J., Resende, P.A.A. and Ferencova, M. Journal of Applied Management and Investments, Vol. 7 No. 1, pp. 49-60.  2018

Blogs:

https://analyzecore.com/2016/08/03/attribution-model-r-part-1

https://analyzecore.com/2016/08/03/attribution-model-r-part-2

                          3.3 To go further: Tackling selection biases with Quasi-Experiments

Exposure to certain types of advertisement is usually highly correlated with non-observable variables. Differences in the behaviour of users exposed to different campaigns may thus be driven by core differences in conversion probabilities between groups rather than by the campaign effect. These potential selection effects may bias the results obtained from historical data.

Quasi-experiments can help correct this selection effect while still using the available observational data. These methods approximate the conditions of a randomized setting: the goal is to come as close as possible to the ideal of comparing two populations that are identical in all respects except for the advertising exposure. However, the populations might still differ with respect to some unobserved characteristics.

Common quasi-experimental methods, used for instance in Public Policy Evaluation, include the following (a minimal matching sketch is given after this list):

  • Regression Discontinuity Designs
  • Matching methods, such as exact matching, propensity-score matching or k-nearest neighbours.
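
As an illustration of the matching approach, here is a minimal propensity-score matching sketch with scikit-learn; the column names (exposed, converted, age) and the synthetic data are illustrative assumptions.

```python
# A minimal sketch of propensity-score matching: match each exposed user to
# the most similar non-exposed user and compare conversion rates. The column
# names (exposed, converted, age) and the synthetic data are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_uplift(df, covariates, treatment="exposed", outcome="converted"):
    # 1. Estimate the propensity to be exposed from the observed covariates
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    df = df.assign(propensity=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treatment] == 1]
    control = df[df[treatment] == 0]
    # 2. Match each treated user to the control user with the closest propensity
    nn = NearestNeighbors(n_neighbors=1).fit(control[["propensity"]])
    _, idx = nn.kneighbors(treated[["propensity"]])
    matched_control = control.iloc[idx.ravel()]
    # 3. Compare outcomes between treated users and their matched controls
    return treated[outcome].mean() - matched_control[outcome].mean()

# Synthetic example: exposure depends on age, which also drives conversion,
# and the true incremental effect of exposure is +2 percentage points
rng = np.random.default_rng(0)
n = 5000
age = rng.normal(40, 10, n)
exposed = rng.binomial(1, 1 / (1 + np.exp(-(age - 40) / 10)))
converted = rng.binomial(1, np.clip(0.05 + 0.003 * (age - 40) + 0.02 * exposed, 0, 1))
df = pd.DataFrame({"age": age, "exposed": exposed, "converted": converted})

print(df.groupby("exposed")["converted"].mean())   # naive comparison, biased upwards
print(matched_uplift(df, ["age"]))                 # should land near +0.02
```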

References:

Article:

“Towards a digital Attribution Model: Measuring the impact of display advertising on online consumer behaviour”, Anindya Ghose et al., MIS Quarterly Vol. 40 No. 4, pp. 1-XX, 2016

https://pdfs.semanticscholar.org/4fa6/1c53f281fa63a9f0617fbd794d54911a2f84.pdf

        4. First Steps towards a Practical Implementation

Identify key points of interest

  • Identify the nature of the touchpoints available: is the data based on clicks? If so, is there a way to complement the data with A/B tests to measure the influence of ads without clicks (display, video)? For example, what happens to sales when a display campaign is removed? Analysing this multiplier effect would give the overall responsibility of display on sales, to be deducted from the current attribution values given to click-based channels. More interestingly, what is the impact of removing the display campaign on the occurrences of click-based campaigns? This would give us an idea of the impact of display ads on the exposure to each of the other campaigns, which would help correct the attribution values more precisely at the campaign level.
  • Define the KPI to track. From a pure Marketing perspective, looking at purchases may be sufficient, but from a financial perspective looking at profits, though a bit more difficult to compute, may drive more interesting results.
  • Define a customer journey. It may seem obvious, but the notion needs to be clarified at first. Would it be defined by a time limit? If so, which one? Does it end when a conversion is observed? For example, if a customer makes 2 purchases, would the campaigns he’s been exposed to before the first order still be accounted for in the second order? If so, with a time decay?
  • Define the research framework: are we interested only in customer journeys which have led to conversions, or in all journeys? Keep in mind that successful customer journeys are a non-representative sample of all customer journeys, and models built on the analysis of biased samples may themselves be biased. Take an extreme example: 80% of customers who see campaign A buy the product, versus 1% for campaign B. However, campaign B's exposure is huge and 100 million people see it, versus only 1 million for campaign A. An Attribution Model based on successful journeys will give higher credit to campaign B, which is an arguable conclusion. Taking into account costs per campaign (in the case where costs are calculated per click) may of course partly tackle this issue, as campaign A could then exhibit higher returns, but a seriously fallacious reasoning is at stake here.

Analyse the typical customer journey    

  • Performing a duration analysis on the data may help you improve the definition of the customer journey to be used by your organization. After how many days are conversion probabilities essentially null? Should we consider that the effect of campaigns disappears after x days without orders? For example, if 99% of orders are placed within 30 days of a first click, it might make sense to define the customer journey as a 30-day time frame following the first click.
  • Look at the distribution of the number of campaigns in a typical journey. If you choose to model the effect of campaign interactions in your Attribution Model, it may help you determine the maximum number of campaigns to include in a combination. Indeed, you may not need to assess the impact of channel combinations with more than 4 different channels if 95% of orders are placed after fewer than 4 campaigns.
  • Transition matrices: what if campaign A systematically leads to campaign B? What happens if we remove A or B? These insights give clues for precise questions to ask in a later A/B test, for example to find out whether there is complementarity between channels A and B (implying neither should be removed) or mere substitution (implying one can be given up).
  • If conversion rates are available, it can be interesting to perform a survival analysis, i.e. to analyse the likelihood of conversion as a function of the time since the first click. This could help us exclude potential outliers or individuals who have very low conversion probabilities. A minimal sketch of these descriptive checks follows this list.
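
A minimal pandas sketch of these descriptive checks could look as follows; the file name touchpoints.csv and its columns (user_id, channel, timestamp, converted_at) are illustrative assumptions about the tracking export.

```python
# A minimal sketch of these descriptive checks, assuming a tracking export
# touchpoints.csv with the illustrative columns user_id, channel, timestamp
# and converted_at (empty when the user never ordered).
import pandas as pd

touchpoints = pd.read_csv("touchpoints.csv", parse_dates=["timestamp", "converted_at"])

journeys = touchpoints.groupby("user_id").agg(
    first_click=("timestamp", "min"),
    n_campaigns=("channel", "count"),
    converted_at=("converted_at", "max"),
)

# Duration analysis: how many days elapse between the first click and the order?
days_to_order = (journeys["converted_at"] - journeys["first_click"]).dt.days
print(days_to_order.quantile([0.5, 0.9, 0.99]))

# Journey length: how many campaigns does a typical converting journey contain?
converting = journeys[journeys["converted_at"].notna()]
print(converting["n_campaigns"].value_counts(normalize=True).sort_index().cumsum())
```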

Summary

Attribution is a complex topic which will probably never be definitively solved. Indeed, a main issue is the difficulty, or even impossibility, of precisely evaluating the accuracy of the attribution model we have built. Attribution Models should be seen as a good, yet always improvable, approximation of the incremental values of campaigns, and should be presented with their intrinsic limits and biases.