Tag Archive for: AI

AI-Powered Data Analytics as a Compass for Companies: Opportunities and Challenges

IT leaders, data administrators, analysts and executives all face the task of using a flood of data efficiently in order to strengthen their company's competitiveness. The ability to analyze these vast volumes of data effectively is the key to navigating the digital future with confidence. At the same time, data volumes are growing exponentially while IT budgets keep shrinking, putting decision-makers under enormous pressure to deliver relevant insights quickly with fewer resources. Yet outdated legacy systems prolong query times and hamper real-time analysis of the large, complex data sets required, for example, for machine learning (ML). This is where the integration of artificial intelligence (AI) comes in. It helps companies make data analytics faster, more cost-effective and more flexible, and is proving indispensable across a wide range of industries.

What exactly makes AI-powered data analytics so valuable?

AI-powered data analytics is changing the way companies use their data. Precise predictive models anticipate trends and customer behavior, minimize risks and enable proactive planning. Examples include demand forecasting, fraud detection and predictive maintenance. Real-time analysis of large data volumes leads to better-informed, data-driven decisions.

A recent report on the use of AI-powered data analytics shows that companies that implement AI successfully achieve substantial benefits: faster decision-making (by 25%), reduced operating costs (by up to 20%) and improved customer satisfaction (by 15%). The combination of AI, data analytics and business intelligence (BI) enables companies to exploit the full potential of their data. Tools such as AutoML integrate into analytics databases and allow BI teams to develop and test ML models on their own, which leads to productivity gains.

Challenges and Opportunities of AI Implementation

Implementing AI in a company brings numerous challenges that IT professionals and data administrators must overcome in order to exploit the full potential of these technologies.

  1. Technological infrastructure and data quality: Outdated systems and insufficient data quality can significantly impair the efficiency of AI analysis. Existing systems are often overwhelmed by the analysis of the large volumes of current and historical data required for reliable predictive analytics. Companies must also ensure that their data is complete, up to date and accurate in order to obtain reliable results.
  2. Clear goals and implementation strategies: Without clear goals and a well-thought-out strategy that also supports the business strategy, AI projects can run inefficiently and deliver no results. A structured approach is crucial for success.
  3. Expertise and training: Implementing AI requires specialized knowledge that many companies lack. The cost of experts or appropriate training can be a considerable financial hurdle, but it is the basis for using the technology efficiently.
  4. Security and compliance: Governance concerns regarding security and compliance can also be an obstacle. A strategic approach that takes technological, ethical and organizational aspects into account is therefore essential. Companies must ensure that their AI solutions meet legal requirements in order to avoid data protection violations. Flexible deployment options in the public cloud, private cloud, on-premises or hybrid environments are crucial for overcoming platform and infrastructure restrictions.

Espresso AI from Exasol: A Solution Approach

With Espresso AI, Exasol has developed a solution that supports companies in implementing AI-powered data analytics and combines AI with business intelligence (BI). Espresso AI is both powerful and user-friendly, so team members without deep data science expertise can experiment with new technologies and build capable models. Large and complex data volumes can be processed in real time, which makes the solution particularly suitable for data-intensive industries such as retail and e-commerce. It also offers the necessary flexibility in areas where sensitive data should or must remain in-house, such as finance or healthcare: users have access to real-time data analytics regardless of whether their data resides on-premises, in the cloud or in a hybrid environment. Extensive integration options with existing IT systems and data sources ensure a fast and smooth implementation.

Opportunities Created by AI-Powered Data Analytics

Using AI-powered data integration tools automates many of the manual processes traditionally associated with preparing and cleansing data. This not only relieves teams of time-consuming data preparation and complex data integration workflows, but also reduces the risk of human error and ensures that the data is consistent and of high quality for analysis. Such tools can efficiently merge, transform and load data from various sources, allowing teams to focus more on analyzing and using the data.

Integrating AutoML tools into the analytics database opens up new possibilities for business intelligence teams. AutoML (Automated Machine Learning) automates many of the steps normally involved in building ML models, including model selection, hyperparameter tuning and model validation.
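
The snippet below is not the AutoML integration described here; it is only a minimal stand-in sketch of what such a step automates, using scikit-learn's cross-validated grid search to pick between two candidate models on a toy dataset.

```python
# Minimal stand-in for what an AutoML step automates: candidate models,
# hyperparameter search and cross-validated model selection.
# This is NOT a specific vendor integration, just an illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300], "max_depth": [None, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # model validation via cross-validation
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print("selected model:", best_model)
print("hold-out accuracy:", best_model.score(X_test, y_test))
```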

About Exasol CTO Mathias Golombek

Mathias Golombek has been a member of the executive board of Exasol AG since January 2014. As Chief Technology Officer, he is responsible for all technical areas of the company, from development and product management through operations and support to technical consulting.

About Mathias Golombek

Mathias Golombek of Exasol

After studying computer science with a focus on databases, distributed systems, software development processes and genetic algorithms, Mathias Golombek joined Nuremberg-based Exasol AG as a software developer in 2004. From there his career progressed quickly: one year later he took charge of the database optimizer team, and in 2007 he became Head of Research & Development. In 2014, Mathias Golombek was appointed Chief Technology Officer (CTO) and technology board member of Exasol. In this role he is responsible for all technical areas of the company, from development and product management through operations and support to technical consulting.

He firmly believes that every company is defined by its core values and that these should be lived every day. Since his appointment as CTO, Mathias Golombek has shared his expertise and promoted knowledge exchange through articles, guest contributions, panel discussions and interviews.

Benjamin Aunkofer on Careers with Data, Data Literacy and Data Strategy

Data Jobs – Podcast Episode with Benjamin Aunkofer

In today's business world, the use of data is essential, especially for companies with more than 100 employees that want to remain successful. In the podcast episode “Data Jobs – Was brauchst Du, um im Datenbereich richtig Karriere zu machen?”, Dr. Christian Krug and Benjamin Aunkofer, founder of DATANOMIQ, discuss how employees can improve their data skills and actively advance their careers. This not only increases their personal success, but also boosts the value and competitiveness of the company. Data literacy is therefore a key success factor at both the individual and the company level.

In the interview, Benjamin Aunkofer explains how to get started, even as a career changer. The saying “no pain, no gain” applies especially to developing professional skills, particularly in data processing and analysis. Instead of spending the evening watching series on Netflix, that time could be used to study specialist literature. There is a wide range of books on topics such as data science, artificial intelligence, process mining and data strategy that offer valuable insights and knowledge.

According to Benjamin Aunkofer, the benefit is well worth the effort. For those who are genuinely interested in starting a career in data, the doors are open. Getting started requires commitment and a willingness to learn, but it is entirely feasible for determined individuals. You do not necessarily have to aim for a career as a data scientist. Every professional, and managers in particular, can benefit considerably from understanding the basics of data engineering and data science. This knowledge makes it possible to take better-informed decisions and to use the potential of data analytics optimally for the company.

Podcast episode with Benjamin Aunkofer and Dr. Christian Krug on how people build careers with data and drive company success.

 

The podcast episode on Spotify: https://open.spotify.com/show/6Ow7ySMbgnir27etMYkpxT?si=dc0fd2b3c6454bfa

The podcast episode on iTunes: https://podcasts.apple.com/de/podcast/unf-ck-your-data/id1673832019

The podcast episode on Google: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vdW5mY2steW91ci1kYXRhLw?ep=14

The podcast episode on Deezer: https://deezer.page.link/FnT5kRSjf2k54iib6

Espresso AI: Q&A with Mathias Golombek, CTO at Exasol

Almost every company is looking into AI today, and the vast majority consider it the most important technology of the future – yet many still struggle to take the first steps toward using AI. In your view, why do initiatives fail?

The biggest obstacles include governance concerns, for example regarding security and compliance, unclear goals and the lack of an implementation strategy. With its flexible deployment options in the public or private cloud, on-premises or in hybrid environments, Exasol frees its customers from specific platform and infrastructure restrictions, ensures straightforward integration of AI functionality and provides access to data insights in real time – without having to replace the entire tech stack.

That is the technological part. The steps that companies themselves have to take beforehand are defining clear goals and KPIs and establishing a data culture. Management should build acceptance by clearly highlighting the benefits, taking reservations seriously and addressing them. For many companies, especially more traditional ones, the path to becoming a data-driven organization represents a real paradigm shift. Executives should provide orientation here and clearly explain what role the use of data and new technologies plays for the future viability of the company and for every individual. A culture of open communication encourages teams to find digital solutions that meet both their individual requirements and the company's goals. Naturally, this also includes training your own teams and equipping them with the necessary know-how.

How does Exasol support customers in implementing AI?

Data queries in natural language – as the rise of ChatGPT has made clear – can pave the way for generative AI in companies and enable them to become data-driven. With the Veezoo integration, Exasol Espresso customers can also pose data queries in natural language and use AI easily in their everyday work. With the integrated AutoML tool from TurinTech, users can additionally maximize the performance of their queries directly in their database by using ML models. This gives BI teams true data democratization and lets them experiment with ML models without depending on support from their data science teams.

All of this contributes to data democratization, a decisive step on the way to a data-driven company, because in the past the implementation of a company-wide data strategy often failed due to bottlenecks created by data analytics or data science teams. Espresso AI gives companies faster and easier access to real-time analytics.

What was the reason for enriching Exasol Espresso with AI functions?

More and more companies are looking for ways to build both traditional and generative AI models and applications – the corresponding feedback from our customers was one of the main drivers behind the development of Espresso AI.

Companies want to break down their data silos – data science teams have often worked in silos for many years. With the rise of generative AI through ChatGPT, a clear shift has taken place: AI has become more tangible, the technology more accessible and more powerful, and companies are looking for ways to use it profitably.

To become truly data-driven and exploit the full potential of their own data and technologies, companies need to combine AI with data analytics and business intelligence. Espresso AI was built to do exactly that.

And what does further development look like? What plans does Exasol have?

One of the key elements of Espresso AI is the AI Lab, which allows data scientists to integrate Exasol's in-memory analytics database seamlessly and quickly into their preferred data science ecosystem. It supports any data science language and offers an extensive list of technology integrations, including PyTorch, Hugging Face, scikit-learn, TensorFlow, Ibis, Amazon SageMaker, Azure ML and Jupyter.

Further integrations are an important part of our roadmap. While the first ones focused on the platforms of established vendors, we will continue to expand our AI Lab and add integrations with open-source tools. Users will thus be able to create an environment in which data scientists feel at home. By running ML models directly in the Exasol database, they can use the maximum amount of data and exploit the full potential of their data assets.

About Exasol CTO Mathias Golombek

Mathias Golombek has been a member of the executive board of Exasol AG since January 2014. As Chief Technology Officer, he is responsible for all technical areas of the company, from development and product management through operations and support to technical consulting.

About Exasol and Espresso AI

Suffering from slow business intelligence, poor database scaling and other limitations in data analytics? Exasol offers three products to help you get the most out of analytics and achieve faster, deeper and more cost-effective insights.

No more waiting for the “spinning wheel”. Designed from the ground up for speed, Espresso is based on a unique database architecture combining in-memory caching, columnar storage, massively parallel processing (MPP) and auto-tuning. This accelerates even the most complex analyses and delivers better insights at breathtaking speed.

Benjamin Aunkofer in an Interview with Atreus Interim Management on Data & AI in Companies

Video Interview – Interim Management for Data & AI

Establishing data & AI in a company is a process that requires leadership with real subject-matter expertise. Interim management can be the solution here.

Business leaders face major challenges in this process and often ask themselves questions like these:

  • What top-level strategy do I need?
  • Where and how do I find the first showcases in the company?
  • Do I currently have the right data backbone?

Benjamin Aunkofer (founder of DATANOMIQ and AUDAVIS) answers these questions in an interview with Atreus Interim Management. He explains how companies can bring together the disciplines of data science, business intelligence, process mining and AI, and why interim management can be a good way to do this.

Video interview “Meet the Manager” on YouTube with Franz Kubbillum of Atreus Interim Management and Benjamin Aunkofer of DATANOMIQ.

About Benjamin Aunkofer

Benjamin Aunkofer – Interim Manager for Data & AI, founder of DATANOMIQ and AUDAVIS.

Benjamin Aunkofer is the founder of DATANOMIQ, a consulting and implementation partner for data and AI solutions, and co-founder of AUDAVIS, an AI-as-a-Service offering for financial auditing.

After training as a software developer (FI-AE IHK) and starting out as a consultant at Deloitte, he founded DATANOMIQ GmbH in Berlin in 2015 and has since supported companies from industries such as retail, e-commerce, financial services and manufacturing (pharmaceuticals, automotive suppliers, mechanical engineering) with several small teams. He partners with other consulting firms and also supports audit firms as an external service provider.

He typically joins client organizations either on a purely project basis (project engagement) or via interim management, for example as Head of Data & AI, Chief Data Scientist or Head of Process Mining.

In 2023, Benjamin Aunkofer founded AUDAVIS GmbH with two co-founders; it offers a software-as-a-service cloud platform for audit firms, internal audit departments of corporations and government auditing of financial transactions.

 

Podcast – AI in Financial Auditing

The use of artificial intelligence (AI) in financial auditing as described in the podcast does indeed sound revolutionary. Integrating AI in this field could bring enormous benefits, particularly in terms of efficiency gains and accuracy.

The various learning methods mentioned, such as (un)supervised learning, reinforcement learning and federated learning, offer different approaches for training AI systems for the specific requirements of financial auditing. These methods make it possible to recognize patterns in large volumes of data, make predictions and optimize decisions.

The Artificial Auditor from AUDAVIS, which is based on a combination of different AI methods, could, for example, analyze 100% of the accounting records, something that would be practically impossible with conventional methods. This would not only improve the accuracy of the audit, but also uncover fraud and errors more effectively.

The point raised about the podcast Unf*ck Your Data by Dr. Christian Krug and the statements by Benjamin Aunkofer is also interesting. The discussion about how data automation and AI can make financial auditing more efficient is clearly already under way, helping to raise awareness of these technologies and promote their acceptance in the industry.

The podcast emphasizes that AI does not replace the role of the human auditor, but complements it. AI can help automate routine tasks and perform complex data analyses, while human experts remain indispensable for their expertise, judgment and ability to understand context.

Overall, Benjamin Aunkofer argues that integrating AI into financial auditing, and specifically into the audit of annual financial statements, is an exciting step towards a more efficient and effective future that will benefit both companies and the economy as a whole.

Benjamin Aunkofer – Podcast – AI in Financial Auditing

Object-centric Data Modelling for Process Mining and BI

Object-centric Process Mining on Data Mesh Architectures

Like Business Intelligence (BI), Process Mining is no longer a new phenomenon: almost all larger companies now conduct this kind of data-driven process analysis in their organization.

The database for Process Mining is also establishing itself as an important hub for Data Science and AI applications, as process traces are very granular and informative about what is really going on in the business processes.

The trend towards powerful in-house cloud platforms for data and analysis ensures that large volumes of data can increasingly be stored and used flexibly. This aspect can be applied well to Process Mining, hand in hand with BI and AI.

New big data architectures and, above all, data sharing concepts such as Data Mesh are ideal for creating a common database for many data products and applications.

The Event Log Data Model for Process Mining

Process Mining as an analytical system can very well be imagined as an iceberg. The tip of the iceberg, which is visible above the surface of the water, is the actual visual process analysis. In essence, it is a graph analysis that displays the process flow as a flow chart. This is where the processes are filtered and analyzed.

The lower part of the iceberg is barely visible to the normal analyst on the tool interface, but is essential for implementation and success: this is the Event Log as the data basis for graph and data analysis in Process Mining. The creation of this data model requires the data connection to the source system (e.g. SAP ERP), the extraction of the data and, above all, the data modeling for the event log.

Simple Data Model for a Process Mining Event Log

As part of data engineering, the data traces that indicate process activities are brought into a log-like schema. A simple event log is therefore a simple table with the minimum requirement of a process number (case ID), a time stamp and an activity description.

Example Event Log for Process Mining

An Event Log can be seen as one big data table containing all the process information. Splitting this big table into several data tables serves the goal of storing the data more efficiently in a normalized database.

The following example SQL query inserts event activities from an SAP ERP system into an existing event log database table (one big table). It shows that events are based on timestamps (CPUDT, CPUTM) and that each event refers to one of a list of possible activities (depending on VGABE).

Attention: please treat this SQL purely as an example of event mining for a classic (single-table) event log! It is based on a German SAP ERP configuration with customized processes.
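
The SQL snippet itself is not reproduced in this archive view. Purely as a hedged sketch of the described mapping (the table layout, the case-ID column and the VGABE-to-activity mapping below are assumptions for illustration, not the original query), the same transformation could look like this in Python with pandas:

```python
# Hypothetical sketch: build classic event log rows (case id, activity, timestamp)
# from an exported SAP document-flow table. Column names other than CPUDT, CPUTM
# and VGABE are assumptions for illustration only.
import pandas as pd

# Assumed mapping of VGABE codes to human-readable activities (illustrative only)
VGABE_TO_ACTIVITY = {
    "1": "Goods Receipt",
    "2": "Invoice Receipt",
    "3": "Subsequent Debit",
}

sap_extract = pd.DataFrame({
    "EBELN": ["4500000001", "4500000001", "4500000002"],  # purchase order used as case id (assumption)
    "VGABE": ["1", "2", "1"],
    "CPUDT": ["2024-01-10", "2024-01-12", "2024-01-11"],   # entry date
    "CPUTM": ["08:15:00", "14:02:00", "09:30:00"],         # entry time
})

event_log = pd.DataFrame({
    "case_id":   sap_extract["EBELN"],
    "activity":  sap_extract["VGABE"].map(VGABE_TO_ACTIVITY),
    "timestamp": pd.to_datetime(sap_extract["CPUDT"] + " " + sap_extract["CPUTM"]),
}).sort_values(["case_id", "timestamp"])

print(event_log)
# The resulting rows would then be appended to the single event log table.
```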

An Event Log can also include many other columns (attributes) that describe the respective process activity in more detail or the higher-level process context.

Incidentally, Process Mining can also work with more than just one timestamp per activity. Even the small Process Mining tool Fluxicon Disco made it possible to handle two timestamps per activity from the outset. For example, when creating an order in the ERP system, the opening and closing of an input screen could be recorded as timestamps and the execution time of the micro-task analyzed. This concept is continued as so-called task mining.

Task Mining

Task Mining is a subtype of Process Mining and can utilize user interaction data, which includes keystrokes, mouse clicks or data input on a computer. It can also include user recordings and screenshots with different timestamp intervals.

As Task Mining provides a clearer insight into specific sub-processes, program managers and HR managers can also understand which parts of the process can be automated through tools such as RPA. So whenever you hear that Process Mining can prepare RPA definitions you can expect that Task Mining is the real deal.

Machine Learning for Process and Task Mining on Text and Video Data

Process Mining and Task Mining are already benefiting a lot from text recognition via Natural Language Processing (NLP), in particular Named-Entity Recognition (NER), which identifies process events, e.g. in the text of tickets or e-mails. Task Mining will benefit even more from Computer Vision, since videos of manufacturing processes or traffic situations can be read out automatically. Even MTM analysis can be done with Computer Vision, which detects movements and actions in video material.

Object-Centric Process Mining

Object-centric Process Data Modeling is an advanced approach to dynamic data modelling for analyzing complex business processes, especially those involving multiple interconnected entities. Unlike classical process mining, which focuses on linear sequences of activities of a specific process chain, object-centric process mining delves into the intricacies of how different entities, such as orders, items, and invoices, interact with each other. This method is particularly effective in capturing the complexities and many-to-many relationships inherent in modern business processes.

Note from the author: The concept and name of object-centric process mining were introduced by Wil M.P. van der Aalst in 2019, adopted as a product feature term by Celonis in 2022, and are used extensively in marketing. The concept is based on dynamic data modelling. I probably developed my first event log made of dynamic data models back in 2016 and used it for an industrial customer. At that time, I could not use the Celonis tool for this, because Celonis only allowed very dedicated event logs and could not remap the attributes of the event log. A tool like Fluxicon Disco, on the other hand, could easily handle all kinds of attributes in an event log and allowed switching the event perspective, e.g. from sales order number to material number or production order number.

An object-centric data model is a big deal because it enables a holistic approach and, as a database, provides a single source of truth not only for Process Mining but also for other types of analytical applications.

Enhancement of the Data Model for Object-Centricity

The Event Log is a data model that stores events and their related attributes. In addition to the case ID, the timestamp and an activity description, a classic Event Log also contains process-related attributes with information about, for example, material, department, user, amounts, units, prices, currencies, volume, volume classes and much more. This is something we can literally objectify!

The problem with this classic event log approach is that the information is transformed and joined to the Event Log specifically for the process it is designed for.

An object-centric event log is a central data store for all kinds of events, mapped to all objects relevant to these events. For an event log that puts objects into the center of gravity, we therefore need a relational bridge table (Event_Object_Relation). This table creates the n:m relation between events (with their timestamps and other event-specific values) and all objects.

To comply with relational database normalization, the object table contains only the object attributes, while the corresponding attribute values are related to these objects from another table.
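
As a simplified sketch of this structure (table and column names are illustrative, not a fixed schema), the following shows how the bridge table relates events to objects, and how a classic event log for one object perspective can be derived from it:

```python
# Simplified sketch of an object-centric event log with a bridge table.
# Table and column names are illustrative, not a fixed schema.
import pandas as pd

events = pd.DataFrame({
    "event_id":  [1, 2, 3],
    "activity":  ["Create Order", "Create Invoice", "Pay Invoice"],
    "timestamp": pd.to_datetime(["2024-01-05 09:00", "2024-01-06 10:00", "2024-01-20 16:00"]),
})

objects = pd.DataFrame({
    "object_id":   ["O1", "I1"],
    "object_type": ["sales_order", "invoice"],
})

# n:m bridge table between events and objects
event_object_relation = pd.DataFrame({
    "event_id":  [1, 2, 2, 3],
    "object_id": ["O1", "O1", "I1", "I1"],
})

def classic_event_log(perspective: str) -> pd.DataFrame:
    """Derive a classic (single case notion) event log for one object type."""
    cases = objects.loc[objects["object_type"] == perspective, ["object_id"]]
    rel = event_object_relation.merge(cases, on="object_id")
    log = rel.merge(events, on="event_id").rename(columns={"object_id": "case_id"})
    return log[["case_id", "activity", "timestamp"]].sort_values(["case_id", "timestamp"])

print(classic_event_log("sales_order"))  # order perspective
print(classic_event_log("invoice"))      # invoice perspective
```

The same merge is what later makes it easy to generate a classic event log for each analysis perspective from the central object-centric model.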

Advanced Event Log with dynamic Relations between Objects and Events

The data model shown above is already object-centric, but it can become even more dynamic in order to handle object attributes by object type (e.g. the type material will have different attributes than the type invoice or department). Furthermore, there is the issue that not only events and their activities have timestamps; objects can also have specific timestamps (e.g. deadlines or resignation dates).

Advanced Event Log with dynamic Relations between Objects and Events and dynamically bound attributes and their values for Events – and the same for Objects.

A last step makes the event log data model easier to analyze with BI tools: adding a classic time dimension that provides information about each timestamp (by date, not by time of day), e.g. weekdays or public holidays.
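
A minimal sketch of such a time dimension (the public-holiday list here is a placeholder; in practice it would come from a calendar table) could look like this:

```python
# Minimal sketch: derive a date-level time dimension from event timestamps.
import pandas as pd

timestamps = pd.to_datetime(["2024-01-05 09:00", "2024-01-06 10:00", "2024-05-01 08:30"])
public_holidays = {pd.Timestamp("2024-05-01").date()}  # placeholder calendar

time_dim = pd.DataFrame({"date": [ts.date() for ts in timestamps]}).drop_duplicates()
time_dim["weekday"] = pd.to_datetime(time_dim["date"]).dt.day_name()
time_dim["is_weekend"] = pd.to_datetime(time_dim["date"]).dt.dayofweek >= 5
time_dim["is_public_holiday"] = time_dim["date"].isin(public_holidays)

print(time_dim)
```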

Advanced Event Log with dynamic Relations between Objects and Events and dynamically bound attributes and their values for Events and Objects. The measured timestamps (and duration times in the case of Task Mining) are enhanced with a time dimension for BI applications.

For analysis in the style of Business Intelligence, this normalized data model can already be used as it is. Alternatively, it can be transformed into a fact-dimensional data model such as the star schema (Kimball approach). Data-science-related use cases will also find granular data here, e.g. for training a regression model that predicts duration times per process.

Note from the author: Process Mining is often regarded as a separate discipline of analysis and this is a justified classification, as process mining is essentially a graph analysis based on the event log. Nevertheless, process mining can be considered a sub-discipline of business intelligence. It is therefore hardly surprising that some process mining tools are actually just a plugin for Power BI, Tableau or Qlik.

Storing the Object-Centric Analytical Data Model on a Data Mesh Architecture

Central data models, particularly when used in a Data Mesh in the Enterprise Cloud, are highly beneficial for Process Mining, Business Intelligence, Data Science, and AI Training. They offer consistency and standardization across data structures, improving data accuracy and integrity. This centralized approach streamlines data governance and management, enhancing efficiency. The scalability and flexibility provided by data mesh architectures on the cloud are very beneficial for handling large datasets useful for all analytical applications.

Note from the author: Process Mining data models are very similar to normalized data models for BI reporting according to Bill Inmon (as a counterpart to Ralph Kimball), but are much more granular. While classic BI is satisfied with the header and item data of orders, process mining also requires all changes to these orders; its data requirements therefore exceed those of classic BI. Furthermore, process mining is complementary to data science, for example for predicting process runtimes or failures. It is all the more important that this treasure trove of data is centrally available to the whole company.

Central single source of truth models also foster collaboration, providing a common data language for cross-functional teams and reducing redundancy, leading to cost savings. They enable quicker data processing and decision-making, support advanced analytics and AI with standardized data formats, and are adaptable to changing business needs.

DATANOMIQ Data Mesh Cloud Architecture

 

Central data models in a cloud-based Data Mesh Architecture (e.g. on Microsoft Azure, AWS, Google Cloud Platform or SAP Dataverse) significantly improve data utilization and drive effective business outcomes. And that's why you should host any object-centric data model not in a dedicated analysis tool but centrally on a Data Lakehouse system.

About the Process Mining Tool for Object-Centric Process Mining

Celonis is the first tool that can handle object-centric dynamic process mining event logs natively in its event collection. However, it is not necessary to have Celonis for object-centric process mining if you have the dynamic data model on your own cloud, distributed with the concept of a data mesh. Other tools for process mining such as Signavio, UiPath and process.science, or even the simple desktop tool Fluxicon Disco, can be used as well. The important point is that the data mesh approach allows you to easily generate classic event logs for each analysis perspective from the dynamic object-centric data model, which can then be used with any process visualization tool…

… and you can also use this central data model to generate data extracts for all other data applications (BI, Data Science, and AI training) as well!

Data Literacy Day 2023

Data Literacy Day 2023 by StackFuel

Data Literacy Day 2023 takes place on November 7, 2023 in Berlin, or conveniently from home. A hybrid event on the topic of data literacy.

This is what the hybrid data conference is about.

Data literacy is a must-have these days, both professionally and privately. Since 2021, the German federal government has classified data literacy as indispensable basic knowledge. But working with data has to be learned. On November 7, 2023, you can discuss with the industry's leading minds at #DLD23 in the Basecamp Berlin, or online from home, how data literacy can be anchored in the German population and how citizens can become data citizens.

Learn from the best in the industry.

At Data Literacy Day 2023, leading experts from politics, business and research come together.
In discussions, talks and roundtables, we will speak about initiatives that help anchor data literacy across all professional and social spheres in Germany.

Data Literacy Day 2023 - Benjamin Aunkofer

Our Data Science Blog author Benjamin Aunkofer, founder of DATANOMIQ and AUDAVIS and Interim Head of Data, will also take part in this event.

6 more reasons why you should grab a free ticket now.

  1. Hybrid participation: on site in Berlin-Mitte or online.
  2. Thematic focus on Germany's data future.
  3. Experts from politics, business and science speak about data literacy.
  4. Discussion of top initiatives in Germany that are already being implemented.
  5. Interactive exchange with professionals in roundtables and networking sessions.
  6. Admission to the conference is completely free of charge.

The full program is available here: https://stackfuel.com/de/events/data-literacy-day-2023/

About the organizer, StackFuel:


StackFuel guarantees training success with a proven training concept built around its online learning environment. Whether in a data science online course or Python training, StackFuel teaches students and professionals how to work with data productively in the real world and how to unlock its full potential.

Monitoring of Jobskills with Data Engineering & AI

On our own account, we at DATANOMIQ have created a web application that monitors data about job postings related to Data & AI from multiple sources (Indeed.com, Google Jobs, Stepstone.de and more).

The data is obtained from the Internet via APIs and web scraping, and the job titles and the skills listed in them are identified and extracted using Natural Language Processing (NLP), more specifically Named-Entity Recognition (NER).

The skill clusters are formed via Topic Modelling, a method from unsupervised machine learning, which shows the differences in the distribution of requirements between them.
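
The production pipeline uses trained NER models and much larger corpora; the sketch below only illustrates the idea, with a simple keyword matcher standing in for NER and scikit-learn's LDA standing in for the topic modelling step (the job texts and the skill vocabulary are made up):

```python
# Illustrative sketch only: a keyword matcher stands in for the NER model,
# and LDA over skill mentions stands in for the production topic modelling.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

job_postings = [
    "Data Engineer with Python, SQL and dbt on Azure",
    "BI Analyst working with Power BI and SQL",
    "ML Engineer: Python, PyTorch, MLOps on AWS",
    "Analytics Engineer using dbt, SQL and Power BI",
]
skill_vocabulary = ["python", "sql", "dbt", "azure", "aws", "power bi", "pytorch", "mlops"]

# "NER" stand-in: count known skills per posting
vectorizer = CountVectorizer(vocabulary=skill_vocabulary, ngram_range=(1, 2), lowercase=True)
skill_counts = vectorizer.fit_transform(job_postings)

# Unsupervised skill clusters via topic modelling
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(skill_counts)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"skill cluster {topic_idx}: {top}")
```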

The whole web app is hosted and deployed on the Microsoft Azure Cloud via CI/CD and Infrastructure as Code (IaC).

The presentation is currently limited to the current situation on the labor market. However, we collect this data over time and will be able to identify reliable trends, for example how the demand for Python, SQL or specific tools such as dbt or Power BI changes.

Why did we do it? It is a nice showcase that many people are interested in. Over time, it will provide answers to questions about which tool to learn. For DATANOMIQ, it is also a showcase of the coming Data as a Service (DaaS) business.

What is a vector database? And why does it play such a big role for AI?

How can companies and other organizations ensure that no knowledge is lost? Intranets, ERP, CRM, DMS or, ultimately, simply databases may be the first answer. But not all databases are alike, especially since operational IT systems are usually built on relational databases. In those, unfortunately, knowledge eventually gets lost anyway, even if it is never deleted from them!

Most databases are designed to store data and make it retrievable again. Besides relational databases (SQL), there are NoSQL databases such as key-value stores, document databases and graph databases, each with fairly specific use cases. Vector databases are yet another type of database: using AI (deep learning, n-grams, ...), they translate knowledge into vectors, making it comparable and findable again. This capability plays to its strengths especially with high-dimensional data such as text and image data.

Database Types: Vector Database, Graph Database, Key-Value Database, Document Database, Relational Database with row- or column-oriented table structures

Database types in a coarse-grained overview. In reality, however, there are many nuances, transitions and bridges between the database types, e.g. between emulated and native graph databases. Some document and vector databases are also capable of relational data modeling. And databases that are actually relational, such as PostgreSQL, can also process vectors with additional modules.

Vector databases fundamentally do not store data relationally or in any other form of humanly constructed connections. Nevertheless, the database does, in a sense, preserve connections indirectly: connections that humans can no longer derive in a high-dimensional space and that relate to specific contexts emerging from the data itself. Machine learning copes best with the numerical representation of text and image data (and of course other data, e.g. sound), and that is exactly where vector databases are unbeatable.

What is a vector database?

A vector database stores vectors alongside the traditional data formats (annotation). A vector is a mathematical structure, an element of a vector space with a number of dimensions (which, strictly speaking, only becomes interesting beyond the zero vector). Each dimension in a vector represents a kind of information or feature. A good example is a vector representing an image: each dimension could represent the intensity of a particular pixel in that image.
Vector database illustration (simplified, symbolic)
In this way, an entire collection of images can be represented as a collection of vectors. Even more common, however, are vector spaces that embed texts, for example via the frequency of occurrence of text components (words, syllables, letters) – so-called embeddings. Embeddings are thus vectors created by projecting text onto a vector space.

Vector databases are particularly useful when you need to find similarities between vectors, e.g. similar images in a collection, or the words “Hund” (dog) and “Katze” (cat), which share no similarity in their letters but are similar in their context as pets. With vector algorithms, these similarities can be found quickly and efficiently, something that is much harder and, above all, far less efficient with traditional relational databases.
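
As a toy illustration (the embedding vectors here are made up; a real system would obtain them from an embedding model and store them in a vector database rather than a Python dictionary), such a similarity search boils down to this:

```python
# Toy in-memory "vector store": cosine similarity over a few hand-made vectors.
import numpy as np

stored = {
    "Hund":  np.array([0.90, 0.80, 0.10]),  # pet-like context (made-up numbers)
    "Katze": np.array([0.85, 0.75, 0.20]),
    "Auto":  np.array([0.10, 0.20, 0.95]),  # vehicle-like context
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.88, 0.70, 0.15])  # pretend this is the embedding of a query about pets
ranked = sorted(stored.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
for word, vec in ranked:
    print(word, round(cosine_similarity(query, vec), 3))
```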

Vector databases can also process high-dimensional data efficiently, which is important in many modern applications such as deep learning. Some examples of vector databases are Elasticsearch / Vector Search, Weaviate, Faiss from Facebook and Annoy from Spotify.

Many machine learning algorithms are based on vector-based similarity measurement, e.g. the k-nearest-neighbors prediction algorithm (regression/classification) or k-means clustering. The similarity assessment is done by measuring distance in the vector space. The best-known method for this, the Euclidean distance between two points, is based on the Pythagorean theorem (in two-dimensional space, the hypotenuse equals the square root of the sum of the squares of the two legs). However, for reasons of efficiency or better convergence of the machine learning process, it can make sense to consider distances other than the Euclidean distance.
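
The four distance measures named in the figure caption below can be written out in a few lines; here is a small worked example with two arbitrary points:

```python
# The four distance measures, applied to two arbitrary 3-dimensional points.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])

euclidean = np.linalg.norm(a - b)        # L2 norm (Pythagoras): sqrt(9 + 4 + 0)
manhattan = np.sum(np.abs(a - b))        # L1 norm: 3 + 2 + 0
chebyshev = np.max(np.abs(a - b))        # maximum coordinate difference: 3
cosine_distance = 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(euclidean, manhattan, chebyshev, cosine_distance)
```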

Vector-based distance measuring methods: Euclidean distance (L2 norm), Manhattan distance (L1 norm), Chebyshev distance and cosine distance

Vector databases for deep learning

Artificial neural networks in deep learning are not designed to ingest whole sentences as raw text, because they work best with purely numerical input. The texts must therefore be transformed into numbers, possibly also clustered on that basis and separated for different training scenarios.

Vector databases are used for data preparation (annotation) and as training databases for deep learning, enabling efficient storage, organization and manipulation of the texts. For natural language processing (NLP), deep learning models require the word embeddings mentioned above, i.e. high-dimensional vectors that represent information about words, sentences or documents. Only a vector database makes these efficiently retrievable.

Vector databases and Large Language Models (LLMs)

Without vector databases, the successes of OpenAI and other LLM providers would not have been possible. But far away from the developments in San Francisco, any company can build a true oracle over its own corporate data using vector databases together with the APIs of Google or OpenAI / Microsoft, or with genuinely open-source LLMs (self-hosted). To do this, embedding engines, e.g. from OpenAI, are used via APIs. We at DATANOMIQ use this architecture to enable companies and other organizations to ensure that no knowledge is lost anymore.
Vector database for an AI application (e.g. OpenAI ChatGPT)

With the DATANOMIQ Enterprise AI architecture, which can be rolled out on any cloud, companies have an intelligent AI representative of their organization that delivers relevant documents and answers to employees' questions. If any employee in the company has already handled a particular case, incident or, for example, a technical design or a legal contract similar to a current case, the AI will find it and provide useful context, cross-references, suggestions or gap-filling data.

The AI keeps learning, and corporate knowledge is no longer lost. That is knowledge management on a new level, thanks to vector databases and AI.
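
A very reduced sketch of the retrieval step behind such an assistant follows. The embed() function is only a placeholder for whatever embedding engine is used (no specific vendor API is shown), the documents are made up, and a real system would use a vector database instead of a Python list:

```python
# Very reduced retrieval sketch for a company knowledge assistant.
# embed() is a placeholder for an external embedding engine; the random projection
# used here has no semantic meaning and only keeps the example runnable.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: map text to a vector. Replace with a real embedding engine."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

documents = [
    "Contract template for supplier agreements",
    "Incident report: cooling system failure in plant 2",
    "Design notes for the gearbox housing revision",
]
index = [(doc, embed(doc)) for doc in documents]  # stand-in for the vector database

def retrieve(question: str, k: int = 2):
    q = embed(question)
    scored = [(doc, float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))) for doc, v in index]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

context = retrieve("Which contract applies to new suppliers?")
prompt = "Answer using only this context:\n" + "\n".join(doc for doc, _ in context)
# The assembled prompt would then be sent to an LLM of your choice.
print(prompt)
```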

How to tackle lack of data: an overview on transfer learning

1, Data is the new oil, but labeled data might be closer to it

Even though we are in the third AI boom and machine learning is showing concrete effectiveness at a commercial level, after the first two AI booms we are facing a problem: a lack of labeled data, or of data itself. The increasing number of papers on deep learning demonstrates that research on AI has developed rapidly in recent years. If architectures of neural networks and supervised learning are all you know about deep learning, you will be overwhelmed by the variety of topics studied these days, for example generative models, making neural network models more compact through knowledge distillation, and explainable AI (XAI). Such research is often conducted on easily available benchmark datasets which you can simply download, often with the corresponding ground truth data (label data) necessary for training. However, once you try to apply the techniques to more specific data, you usually cannot prepare as much labeled data as theoretical research assumes. Thus, among the many fascinating deep learning topics, in this article I am going to pick up how to tackle the lack of labels or of data itself, and transfer learning. Transfer learning is a machine learning technique for taking advantage of knowledge learned in one dataset to deal with a task in another dataset. Presumably due to this fact, Andrew Ng, in his presentation at NeurIPS 2016, gave a rough and abstract prediction of how transfer learning would achieve commercial success, shown as the white lines in the figure below. The explanation is straightforward, and given the trends in machine learning research topics these days, this prediction is actually right. But at the same time, in my opinion supervised learning, transfer learning, and unsupervised learning cannot be separated as clearly as the graph originally suggested by Andrew Ng implies. Those fields complement each other, and one can easily shift to another.

Source: https://ruder.io/transfer-learning/ The lines and texts in white are based on explanations by Andrew Ng. The orange cells are placed at random, so they do not represent the commercial success of each field.

Along with the rapid progress of deep learning mentioned above, a lot of hype and catchphrases regarding big data and machine learning arose, and an interesting one is “Data is the new oil.” That might have been said only because big data feeds various industries. But I would say the characteristic is even more striking for training data in machine learning. The distribution of training data for machine learning is complicated, like the various energy resources besides oil in the world. Labeled data might be more like uranium. Just as uranium-235, which accounts for less than one percent of the uranium in the world, can be used to generate energy, only a fraction of the massive data in the world is labeled such that it can be used for supervised machine learning. And just as uranium-235 is used effectively together with the less active uranium-238, labeled data shows greater potential in combination with unlabeled data. Training data for machine learning has another unpleasant analogy to energy resources: as with most mainstream energy resources, only a limited number of companies or institutions are able to mine and refine huge labeled datasets with gigantic computation resources, and most people more or less need to rely on them for their business. Even though alternative renewable energy resources are proposed, the principal energy resources remain indispensable for keeping industries stable. Likewise, even though many techniques have been proposed to deal with the lack of data, it often turns out that just fine-tuning pre-trained models is the most practical approach, and those models require huge datasets and rich computational resources. And I think the recent success of, for example, BERT or GPT has made this trend more visible.

*I am sorry if I am mistaken about energy resources. I just wanted to come up with some cool metaphors.

But I still think knowing about transfer learning more comprehensively is worthwhile. That is partly because I have been working on relatively unique data which is hard even to label. When I was studying computer vision (CV) in the plant science field, I frequently saw unusual data obtained with special apparatuses. Such data for the most part looks far from the very general datasets that huge pre-trained models are trained on. At the same time, such plant data has very complicated structures and is hard to label. Also, in my current work I have to detect certain values in various formats in very specific documents, in German. This data is far from general datasets, and even labeling is hard in that case. We have to carefully tackle the lack of data each time, for each type of data.

In this article I would first like to explain what it means to lack data, and then introduce representative techniques for tackling the lack of labeled data. Many of them are classified as transfer learning, but other techniques such as unsupervised learning or self-supervised learning are used within them or share many of their ideas. My main purpose in writing this article is therefore to give you a richer view of transfer learning. You will also see that “transfer learning” these days is mainly about fine-tuning pre-trained models. Moreover, tackling the lack of data or labels is, in other words, about efficiently achieving good performance in machine learning. So even if tons of high-quality labeled data are at your disposal, learning these ideas will still be useful to you. I hope you can find some hints about machine learning in my articles.

2, What does lack of data or labels mean in the first place?

We first need to consider what a lack of labels or data means, and my answer to the title of this section is “It depends.” The more data you have, the better performance you get, and the bigger machine learning models are, the more data they usually need for training. I assume that people reading this article more or less understand neural networks and how they are trained with backpropagation, but let's review the process here. Most machine learning frameworks can be expressed more or less like the figure below, unless reinforcement learning is considered. The ultimate purpose of machine learning is to train a model $f(\boldsymbol{x}_n; \boldsymbol{\theta})$ by adjusting its parameters $\boldsymbol{\theta}$, and the parameters $\boldsymbol{\theta}$ are optimized so that a loss function $L$ is minimized. In supervised learning, the value of the loss function is denoted $L(f(\boldsymbol{x}_n; \boldsymbol{\theta}), \boldsymbol{y}_n) = L(\hat{\boldsymbol{y}}_n, \boldsymbol{y}_n)$, and it gets smaller as $f(\boldsymbol{x}_n; \boldsymbol{\theta})$ gets closer to $\boldsymbol{y}_n$. That is, $\boldsymbol{y}_n$ provides the supervision for adjusting $f(\cdot\,; \boldsymbol{\theta})$ via $L(\hat{\boldsymbol{y}}_n, \boldsymbol{y}_n)$. In the case of unsupervised learning, the loss function is $L(\hat{\boldsymbol{y}}_n)$, which is often heuristically handcrafted.

The very first problem caused by a lack of training data that you learn about is overfitting: a machine learning model can become too specialized for the training dataset and lose its ability to generalize to other data from the same distribution. It is like students with little imagination and flexibility gradually memorizing all the answers in a textbook and failing to answer new questions they have not encountered yet. Overfitting is judged by the relation of training and validation loss, as in the graph below. The training loss in blue indicates how well the students adjust to the textbook: the smaller the training loss, the more they have memorized from the textbook and the less flexible they are. The orange line indicates their performance on newly appearing questions in tests: the smaller the validation loss, the better the students perform on tests. Thus the students should stop learning with the textbook when the validation loss is about to increase; this is called early stopping in machine learning. And if you increase the training data, the orange curve usually shifts to the right, usually yielding a smaller validation loss, i.e. better performance. An important point is that this ideal relation of training and validation losses will not appear if the size or expressivity of the model is not sufficient. Thus the more training data you use, the more parameters the model needs to enhance its expressivity.
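
To make the idea concrete, here is a minimal, generic sketch of early stopping; train_one_epoch() and validate() are placeholder callbacks for whatever training loop you use, not a specific framework API:

```python
# Generic early-stopping loop. train_one_epoch() and validate() are placeholders
# for your own training and validation routines; only the stopping logic matters here.
def train_with_early_stopping(train_one_epoch, validate, max_epochs=100, patience=5):
    best_val_loss = float("inf")
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_loss = train_one_epoch()  # the students study the textbook a bit more
        val_loss = validate()           # ... and take a test with unseen questions

        if val_loss < best_val_loss:
            best_val_loss = val_loss    # in practice you would also snapshot the model here
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"early stopping after epoch {epoch}: validation loss no longer improves")
                break

    return best_val_loss
```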

 

*Depending on the size of the training data, the curve of the training loss also changes, so please bear in mind that this graph is not exact and is very simplified.

What I have said so far might sound too elementary. My point is: the more data and the more computational resources you have, the better performance you get. In other words, machine learning scales with data and parameters. This characteristic is clearly observed in models for natural language processing (NLP) and computer vision (CV), as in the graphs below. When I read papers, I am often fascinated by their performance. But sometimes it turns out that the methods are mainly creative in terms of how they increase the training data, which I personally find boring. And even if the performance of GPT looks astonishing, I cannot really admire it because of this simple fact.

However, another important point is that, conversely, you do not need to increase the training data or the parameters of a model once it achieves an ideal score on your metrics. When you make a toy model with small training data, as long as your clients or co-researchers are already happy, that is enough. Therefore, the lack of data or labels has to be discussed relative to the size of the machine learning model and the performance you expect. Given the points mentioned so far, my answer to the question “What does lack of data or labels mean?” can be rephrased as: “If your model is properly designed to reach the performance you expect and it starts overfitting, you are facing a lack of data.” And such decisions basically have to be made based on experiments.

3, Types of lack of data

Even though I explained that the lack of labels or data is a contextual matter, the problems do exist in any case. That is, you often fail to achieve the ideal accuracy partly due to a lack of training data. I would like to classify the types of data or label shortage as below.

We should first think about the case where the lack of labels does not matter in the first place. If you can analyze data with statistical knowledge or unsupervised machine learning, just extracting data without labeling may be enough. Sometimes ad hoc analysis with simple data visualization will support your decision making, and dashboards built from that unlabeled data will already give you some insights.

The next case is that popular machine learning fields with sufficient investment usually have huge datasets that large academic institutes or companies have been preparing. For example, the KITTI dataset, which includes labels such as trajectories and depth data, is provided by the Karlsruhe Institute of Technology and the Toyota Technological Institute. Such datasets are useful for self-driving-related research, and many types of ground truth data are provided, such as odometry, depth, optical flow and detection. This kind of data might be considered “enough” only because it suffices for training machine learning models and quantitatively evaluating them in papers, regardless of practical usefulness at a commercial level. But at any rate, popular fields with large benchmark datasets are likely to attract investment for commercial uses.

Next, let's look at cases of data shortage. You should keep in mind that there are several types of data shortage. In fact, there are cases where certain labels are inherently scarce, such as classification of imbalanced data, for example anomaly detection, spam filtering or medical screening. In those problems only a few percent of the data are classified as “error,” “spam,” or “disease,” and the rest are classified as “normal.” Simply classifying all data as “normal” might already give more than 95% accuracy, but accurately finding the remaining few percent is much more important. In this case, model performance needs to be evaluated with ROC curves, i.e. the relation of true positives and false positives.

The next type is more related to the cases assumed in transfer learning. Some data are very expensive to obtain in the first place. For example, CT images have to be recorded by special medical apparatuses, as you know. And even if a lot of CT images are already available, annotating them often needs professional skills, so the annotation cost is high. Another case of high annotation cost is, for example, detection or segmentation of objects in images. Even if you can collect numerous images on the Internet, annotating bounding boxes or pixel-wise segments requires a lot of time. Annotating around 1,000 images for classification might be fine, but annotating them at a pixel level is really time consuming. If you have a tablet, try painting each segment of objects in a picture with a different color, and then multiply the time spent by 80,000, which is roughly the number of training images needed for Mask R-CNN, a popular model for instance segmentation. As you can imagine, it is a hugely tedious task. Even preparing some 50 labeled images for fine-tuning is painful, and annotation for computer vision tasks is itself a field of deep learning.

*I would say medical image processing is a relatively popular field in CV with deep learning, and there are several famous datasets in this field.

4, An overview of ways to deal with lack of labeled data

I am going to first roughly introduce what kinds of approaches can be taken to deal with lack of labeled data, or of data itself, but you should keep in mind that they are not clearly separated. As I am going to explain, one type of technique can easily shift to another type, and you should flexibly switch among them depending on your situation. Please also keep in mind that these are well-studied areas, and ingenious papers are published one after another, each usually bringing slight changes in performance. Problems I point out about each technique might not be problems anymore in recently published research or in research currently under peer review; it is hard to prove that something does not exist. Given those points, I think it is convenient to classify the techniques for dealing with label or data shortage as below.

Throughout this article, the idea of a domain is important. A domain simply means a combination of a dataset and a task on it. Transfer learning is a family of machine learning techniques that make use of knowledge learned in one domain in another domain; the former is called the source domain and the latter the target domain. The discrepancy between a source domain and a target domain is called a domain shift. The figure below abstractly visualizes examples of domains and domain shifts. Intuitively it is easy to imagine that a CV task and an NLP task have a bigger domain shift than two domains of leaf images taken from different angles, but quantitatively evaluating domain shifts is hard in practice, and I am not going to introduce that topic because it would need a lot of mathematics.

Instead of formulating transfer learning, I would like to take learning languages as an intuitive example of it. Most people master at least one native language before learning another one. Baby brains are a kind of fantastic machine learning model, and after overcoming many obstacles they master their native languages. People then take advantage of their mother tongues to learn other languages, usually by comparing the structures of translated sentences. Naturally, if a foreign language and your own language share analogies like grammatical cases or genders, language learning would be easier. In other words, proficiency in one language is helpful in learning another. But it is also possible that your native language negatively affects learning the second language, due to differences in grammatical structures or pronunciation. The case of a source domain deteriorating performance in a target domain is called negative transfer in the context of transfer learning.

*I know similarities between languages are not the sole and definite barometer of how effectively people learn foreign languages. The size of a country's economy or markets would also affect English acquisition there. But at least it is unfair to compare, for example, German or Dutch people learning English with Japanese or Chinese people learning it. Unlike Eastern Asian people, who have to learn thousands of characters just to read decent texts or who use very different grammars, European people can obviously use “transfer learning” to learn English.

5, Increasing training data

When you lack data or labels, the most straightforward and often quickest solution is simply to increase the data. The two topics I will cover in this section are mainly conducted within a single domain.

Data augmentation

Data augmentation is one of the first techniques you learn for mitigating overfitting in machine learning, which is, in short, caused by lack of data. The idea is very simple and it is implemented well in deep learning libraries, so I will only briefly talk about it here. Data augmentation simply transforms input data, for example by flipping, rotating, zooming, or changing colors. By doing so, an input image \boldsymbol{x}_n of a butterfly below with a label \boldsymbol{y}_n = \text{Butterfly} can be converted into more than six images. This corresponds to getting a transformed \boldsymbol{x}'_n = g(\boldsymbol{x}_n) in the machine learning outline in the last section, and the process is the same as increasing the size of the dataset \mathcal{D}. One point you have to be careful about is that you must not change \boldsymbol{x}_n so much that the corresponding \boldsymbol{y}_n changes. For example, if \boldsymbol{x}_n is distorted too much, it cannot be recognized as \boldsymbol{y}_n anymore even by humans; or if you rotate an image of the digit 6 by 180 degrees, it becomes a 9. Recent research focuses on automatically finding effective kinds of data augmentation, for example by using reinforcement learning.
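As a small illustration, a typical augmentation pipeline with torchvision transforms might look like the following; the specific transforms and their parameters are arbitrary examples, not recommendations.

# A sketch of image data augmentation with torchvision transforms.
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                 # flipping
    T.RandomRotation(degrees=15),                  # mild rotation (not 180 degrees, to keep labels valid)
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),    # zooming / cropping
    T.ColorJitter(brightness=0.2, contrast=0.2),   # changing colors
    T.ToTensor(),
])
# Each epoch the same image x_n is seen as a slightly different x'_n = g(x_n),
# which effectively enlarges the dataset D without collecting new labels.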

Here let me take an example of a data augmentation technique that might be contrary to your intuition. A technique named mixup literally mixes up data of different classes and their labels. In classification problems, labels are expressed as one-hot vectors, that is, only the element corresponding to the correct class is 1 and the others are 0. In a binary dog-or-cat classification, each label is \boldsymbol{y}_n = (1, 0)^T or \boldsymbol{y}_n = (0, 1)^T, respectively. In data augmentation, distorting data too much is a taboo because the label data gets contaminated, but in mixup you literally mix up labels. Randomly choosing two inputs \boldsymbol{x}_n, \boldsymbol{x}_{n'} and a number \lambda \in [0,1], you prepare the input and label pair (\lambda \boldsymbol{x}_n + (1 - \lambda) \boldsymbol{x}_{n'}, \lambda \boldsymbol{y}_n + (1 - \lambda) \boldsymbol{y}_{n'}). The figure below is an example of mixing up a cat input and a dog input, and the corresponding labels. Augmenting training data like this is known to improve classification performance, and this is said to be partly because the models learn decision boundaries more effectively. In classification, ambiguous inputs are the bottleneck, so learning to give ambiguous outputs to ambiguous inputs can enhance classification ability.
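A minimal sketch of how mixup could be implemented on a PyTorch batch is shown below, assuming inputs x of shape (batch, ...) and one-hot labels y of shape (batch, num_classes); sampling \lambda from a Beta(\alpha, \alpha) distribution follows the common formulation, and the shapes are assumptions for illustration.

# Mixing up a batch of inputs and their one-hot labels.
import torch

def mixup_batch(x, y, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # lambda in [0, 1]
    perm = torch.randperm(x.size(0))              # pair each sample with a random partner
    x_mixed = lam * x + (1.0 - lam) * x[perm]     # mix inputs
    y_mixed = lam * y + (1.0 - lam) * y[perm]     # mix (soft) labels the same way
    return x_mixed, y_mixed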

*One-hot-encoded labels are called hard labels, and other labels soft labels. Recent topics in deep learning, such as the lottery ticket hypothesis and knowledge distillation, imply that whether supervision labels are hard or soft matters in deep learning. I hope to explain why, little by little, in future articles.

6, Active learning

Active learning is about how to annotate data and obtain labeled data efficiently. Labels do not contribute equally to improving a machine learning model; labels differ in quality. Even if you give a model two apparently similar images with the same label during training, the model cannot learn much from that pair. You need to probe the data efficiently to learn its distribution by choosing which samples to label. I think a good metaphor is a geological survey by boring. In order to learn the substances or features of the ground, some earth has to be sampled by drilling, but you cannot drill everywhere, mainly due to cost, so the samples have to be taken one by one under uncertainty about the ground.

 

Similar approaches are often taken in machine learning and statistics: estimating the distribution of data from a small number of samples is an important idea. A basic approach is to sample or annotate the data that decreases the uncertainty of your model the most. The figure illustrates the idea. We want to regress the data distribution shown by the red curve, and the cross marks are samples drawn from it. The area filled with light blue shows the model's uncertainty in predicting a value of y for an x. When you want to regress the data with as few samples as possible, data points should be sampled from the parts with the greatest uncertainty. By doing so, the data can be regressed efficiently with few samples.

We have seen that modeling uncertainty is the key to active learning, and this can be applied to data annotation in deep learning. An example of the process is displayed below: a deep neural network (DNN) model is trained with some labeled data, and signals for data annotation are derived from the uncertainty of the DNN's outputs. Human annotators then prioritize labeling the data accordingly. Such uncertainty can be estimated, for example, with the entropy of the outputs or by modeling the data distribution.
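For illustration, a minimal sketch of entropy-based selection is given below; the trained PyTorch classifier model and the loader of unlabeled inputs are placeholder names.

# Rank unlabeled samples by predictive entropy so annotators can label the most uncertain ones first.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rank_by_entropy(model, unlabeled_loader, device="cpu"):
    model.eval()
    entropies = []
    for x in unlabeled_loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        h = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)   # entropy per sample
        entropies.append(h.cpu())
    return torch.argsort(torch.cat(entropies), descending=True)  # most uncertain first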

 

But once you have a certain amount of labels, the situation becomes the same as semi-supervised learning, which I will explain next. That is, you might already be able to make the most of the labels you have with the help of unlabeled data. You should decide when to pause labeling and when to resume it depending on the situation. Importantly, if extracting data itself is easy and cheap, naively starting to annotate might be a quicker solution than pondering how to make the most of limited labels. “Shut up and annotate!” is often the best practice in practice. Annotation is also an effective form of exploratory data analysis (EDA), so I recommend immediately annotating about 10 random samples at any rate.

7, Dealing with lack of labels in a single domain

In many cases, the data themselves are easily available and only the annotation cost matters. The following two topics consider such cases, and again only one domain is considered. But by the end of this article you will see that the other techniques covered here have a lot of analogies with the topics introduced in this section.

Semi-supervised learning

Semi-supervised learning is a type of supervised learning where only limited labels are available in one domain. This is important because many of the other techniques in this article can be seen as semi-supervised learning from certain points of view. The figure below gives an intuition for semi-supervised learning in a classification task. In this case the original data distribution has two clusters, circles and triangles, and a clear border can be drawn between them. With only the limited labeled data, however, the decision boundary would be ambiguous. With the help of the unlabeled data shown with dotted lines, the machine learning model may be able to recognize the two clusters; in other words, unlabeled data help the model learn the distribution of the data. This might be natural, as clusters of data can be estimated with unsupervised learning.

*As I have already mentioned, active learning can quickly shift to semi-supervised learning, and it might be worth trying the latter before finishing labeling. But suspending labeling and resuming it later might not be efficient. At any rate you need to stay flexible depending on the situation.

Semi-supervised learning is applicable to several tasks, not only classification. I explained that normal supervised learning means adjusting the parameters \boldsymbol{\theta} of a model f(\boldsymbol{\theta}) so that they minimize a loss function L(\boldsymbol{\theta}, \mathcal{D}_{\text{L}}) for a labeled dataset \mathcal{D}_{\text{L}}. In semi-supervised learning, we assume that a usually bigger unlabeled dataset \mathcal{D}_{\text{UL}} is available in the same domain. Semi-supervised learning then optimizes \boldsymbol{\theta} by jointly minimizing L(\boldsymbol{\theta}, \mathcal{D}_{\text{L}}) + L'(\boldsymbol{\theta}, \mathcal{D}_{\text{UL}}), after designing a loss function L'(\boldsymbol{\theta}, \mathcal{D}_{\text{UL}}) for the unlabeled dataset. There are the following three major types of semi-supervised learning, depending on how you design L'(\boldsymbol{\theta}, \mathcal{D}_{\text{UL}}).

  • Consistency regularization: adding slight changes to data \boldsymbol{x}_{\text{UL}} in \mathcal{D}_{\text{UL}} to get \boldsymbol{x}'_{\text{UL}}, and training f(\boldsymbol{\theta}) so that f(\boldsymbol{\theta}, \boldsymbol{x}_{\text{UL}}) and f(\boldsymbol{\theta}, \boldsymbol{x}'_{\text{UL}}) give consistent outputs.
  • Pseudo labeling: after training f(\boldsymbol{\theta}) with \mathcal{D}_{\text{L}}, using its estimations f(\boldsymbol{\theta}, \boldsymbol{x}_{\text{UL}}) as labels for \mathcal{D}_{\text{UL}}.
  • Entropy minimization: encouraging the outputs f(\boldsymbol{\theta}, \boldsymbol{x}_{\text{UL}}) to have smaller entropy.

More or less similar ideas show up in different transfer learning techniques, so it is worth learning the three semi-supervised learning ideas above.
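As a rough illustration of the pseudo-label idea from the list above, a joint loss could be sketched as follows; model, the labeled and unlabeled batches, and the confidence threshold are all assumptions for illustration.

# A joint semi-supervised loss: supervised cross-entropy plus a pseudo-label term
# on unlabeled samples whose predictions are confident enough.
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_l, y_l, x_ul, threshold=0.95, weight=1.0):
    loss_l = F.cross_entropy(model(x_l), y_l)              # L(theta, D_L)
    with torch.no_grad():
        probs = F.softmax(model(x_ul), dim=1)
        conf, pseudo_y = probs.max(dim=1)                   # tentative labels from the model itself
        mask = conf.ge(threshold).float()                   # keep only confident predictions
    loss_ul = (F.cross_entropy(model(x_ul), pseudo_y, reduction="none") * mask).mean()
    return loss_l + weight * loss_ul                        # L + L' as in the formulation above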

Self-supervised learning

Self-supervised learning is often counted as unsupervised learning. Both unsupervised and self-supervised learning do without label data, but when the labels are generated from the data themselves by some processing, that is usually called self-supervised learning. A representative case is the autoencoder. Simpler labels can also be generated from the input data with elementary processing: in image processing, for example, by rotating an input image 0, 90, 180, or 270 degrees, you obtain a classification task of estimating the rotation angle. Another case is estimating the original input image after some simple image processing, for example colorization. Such simple tasks generated solely from the inputs are called pretext tasks, and in image processing they prompt deep learning models to learn image features.

Source: https://atcold.github.io/pytorch-Deep-Learning/en/week10/10-1/
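As a small illustration of such a pretext task, rotation labels can be generated from unlabeled images themselves, for example like this (the batch layout is an assumption):

# Generate a 4-class rotation-prediction task from an unlabeled image batch.
import torch

def make_rotation_batch(x):
    """x: (batch, C, H, W) tensor -> rotated images and pseudo labels 0..3 (k * 90 degrees)."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(x, k, dims=(2, 3)))
        labels.append(torch.full((x.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)
# Training a classifier on this task needs no human annotation,
# yet it forces the network to learn useful image features.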

Pretext tasks are applicable to other fields as well, for example NLP. A very simple task is hiding part of an input sentence and letting the neural network estimate the missing word. This is the basic idea behind training BERT, the famous family of pre-trained NLP models. BERT models are trained this way on a huge and very general corpus without any specific topic, and through that they already learn to detect clusters of meaning in texts, as I visualize in the next section. But if you fine-tune a BERT model on labeled texts about very specific topics, it often fails to achieve satisfying performance. In that case the BERT model has to “get used to” the new dataset first, which can be done by applying self-supervised learning on the new dataset itself. This Hugging Face tutorial demonstrates this with the example of adjusting a BERT model trained on Wikipedia to the IMDb dataset.

In the case above, the BERT model is first adapted with relatively plentiful unlabeled data and after that trained with fewer labels. As a whole this can be seen as semi-supervised learning, with few labels from the IMDb dataset and much more unlabeled data. Also, the idea of pretext tasks, which prompt models to give consistent outputs given preprocessed inputs, has some analogies with consistency regularization in semi-supervised learning.
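A minimal sketch of that workflow with the transformers and datasets libraries might look like the following; the model name, dataset split, and hyperparameters are placeholders loosely based on the tutorial's setting rather than an exact reproduction.

# Continue masked-language-model training on unlabeled in-domain texts (IMDb reviews),
# then fine-tune the adapted model on the few labeled examples afterwards.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

imdb = load_dataset("imdb", split="unsupervised")           # unlabeled in-domain corpus
tokenized = imdb.map(lambda batch: tokenizer(batch["text"], truncation=True),
                     batched=True, remove_columns=imdb.column_names)

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)  # random masking
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="mlm-imdb", per_device_train_batch_size=8),
                  train_dataset=tokenized,
                  data_collator=collator)
trainer.train()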

*The Hugging Face tutorial says they fine-tune a pre-trained BERT model in a self-supervised way to adjust it, and they call this “domain adaptation.” As you can see from that wording, the distinctions between the topics covered in this article can be quite ambiguous.

8, Dealing with lack of data or labels over several domains

Another approach to tackling label or data shortage is taking advantage of other domains, which are usually larger and have enough labels. Such techniques are called transfer learning, as I mentioned. In business, transfer learning often seems to refer to the “fine-tuning” explained below, whereas in academic contexts it is often treated as almost a synonym of “domain adaptation.” At any rate, my point is that it is more important to have a comprehensive view of the techniques than to distinguish them sharply.

Fine tuning

Fine tuning would be the easiest way of doing transfer learning, and at the same time it is very powerful. Even though I am going to introduce other transfer learning techniques, more often than not it turns out that fine tuning can stand in for them. Here I will only explain what it is like to use fine-tuning. I would say using fine-tuning is as easy as using instant coffee. Conventionally you needed to train your own model with your own data, and the result was strongly affected by how much data you had. That was like making coffee, or coffee cakes, from beans you roasted yourself. By using pre-trained models already trained somewhere on huge datasets, you can start from a model that can already more or less recognize data. The idea was already common in CV, and NLP caught up with the advent of BERT, or even earlier with word embeddings. That is like people learning to use instant coffee instead of roasting and brewing coffee every time.

How such instant coffee is made depends on which type of deep learning is used on the huge dataset. A backbone CNN is usually trained on the ImageNet dataset with supervised learning of a classification task. BERT is trained on a huge corpus with the pretext task of estimating blanked-out words in input sentences, which is classified as self-supervised learning. Let me explain more practically what the “coffee syrup” means. Machine learning is, at any rate, just a mapping of tensors or vectors. In CV, an input image as a tensor is converted into a vector or a tensor, and tasks like image classification are conducted on the converted representation. In an NLP task, a sequence of vectors is usually converted into a vector or another sequence of vectors. These resulting tensors or vectors coming out of the models are the very “coffee syrup” I am talking about. An important point is that fine-tuning also covers transfer learning between different tasks. Backbone CNNs are usually trained with classification and BERT with self-supervised learning, but there is a variety of final tasks, which are called downstream tasks. In other words, you don't necessarily drink instant coffee as coffee.

 

The two figures below visualize what the “instant coffee syrup” means. I processed N random images from a dataset with a pre-trained backbone CNN and obtained the corresponding D dimensional vectors, that is, an N\times D tensor. I then applied t-SNE to reduce the dimension from D to 2 and got an N\times 2 tensor. The figure below shows the arrangement of the input images in the two-dimensional space. As you can see, semantically similar images end up close together.

Just as well, if you process random texts with BERT and apply a dimension reduction, you get a visualization like the one below. As in the figure above, texts on similar topics end up close together.

To make it catchy I called them “coffee syrup,” but this is, in a sense, how so-called AI sees data. Images and texts are just vectors or tensors on a computer, and AI processes them as another set of tensors or vectors in spaces that make sense to the models.
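For the curious, a minimal sketch of how such a visualization could be produced is shown below; image_batch is an assumed tensor of N already preprocessed images, and the choice of backbone is arbitrary.

# Extract D-dimensional "coffee syrup" vectors with a pre-trained backbone, then reduce them to 2D.
import torch
import torchvision.models as models
from sklearn.manifold import TSNE

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()           # drop the classification head, keep the features
backbone.eval()

with torch.no_grad():
    features = backbone(image_batch)        # N x D tensor (D = 2048 for ResNet-50)

embedded = TSNE(n_components=2).fit_transform(features.numpy())   # N x 2, ready for a scatter plot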

Fine-tuning itself is quite easy: you only have to train the pre-trained model you downloaded with your own dataset, just like normal supervised learning. When you train CV models with a backbone CNN, the backbone is usually downloaded automatically. You have to be careful about some points, for example setting the learning rate smaller than usual, but let me avoid the finer details in this article. Hopefully I will write about more practical fine-tuning tips in the future.
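A minimal fine-tuning sketch in PyTorch might look like this; train_loader and num_classes are assumed to exist, and the hyperparameters are only illustrative.

# Fine-tune a pre-trained backbone on your own dataset, just like normal supervised training.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)   # downloaded automatically
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)            # replace the head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)   # smaller lr than training from scratch
criterion = torch.nn.CrossEntropyLoss()

model.train()
for x, y in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()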

Domain adaptation

Domain adaptation is another family of techniques to make use of knowledge gained in one domain in another domain. These days it is often used as almost a synonym of transfer learning, but papers on domain adaptation usually assume the same task in both the source and the target domain, so I would say domain adaptation is a subfield of transfer learning. Domain adaptation is more about how to tackle the deterioration of machine learning performance when trained models are applied in different domains. Based on how many labels are available in each domain, domain adaptation is classified into several types, and unsupervised domain adaptation (UDA), where labels are available only in the source domain, is considered the most challenging and is studied the most.

*Another explanation I often hear about domain adaptation is that when a model trained on one dataset is retrained on other data, domain adaptation can be used to mitigate the decrease in performance. I think in this context the performance of the model on the source domain is not discussed. When you retrain a model with a new dataset, its performance on the source domain often drops drastically. This is called catastrophic forgetting, and techniques like continual learning are studied to tackle this problem. I have not really seen continual learning discussed in the context of domain adaptation, but I think these are related.

There are several approaches in domain adaptation, and one frequently used approach relies on an adversarial loss. As we saw with the “coffee syrup” example, data is first mapped into a certain space, which is often called feature extraction, and the outputs of the feature extractor are processed further by other networks to give task-specific results. In domain adaptation we often put a domain discriminator network right after the feature extractor. The domain discriminator classifies whether the extracted features come from the source or the target domain, while the feature extractor tries to extract features the domains have in common; the two networks compete. In this way the feature extractor and the domain discriminator form something like a generative adversarial network (GAN), and the feature extractor learns to extract features whose domain is hard to distinguish. The feature extractor is thus trained to extract domain-invariant features, for example edges and silhouettes.
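A rough sketch of this adversarial setup with a gradient reversal layer, in the spirit of DANN, is shown below; the extractor, classifier, and discriminator networks, as well as the batches, are assumed to be defined elsewhere.

# Adversarial domain adaptation: the discriminator learns to tell domains apart,
# while the reversed gradient pushes the extractor toward domain-invariant features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None     # flip the gradient flowing into the extractor

def adversarial_step(extractor, classifier, discriminator, x_src, y_src, x_tgt, lam=1.0):
    f_src, f_tgt = extractor(x_src), extractor(x_tgt)
    task_loss = nn.functional.cross_entropy(classifier(f_src), y_src)   # labeled source only
    feats = torch.cat([GradReverse.apply(f_src, lam), GradReverse.apply(f_tgt, lam)])
    dom_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    dom_loss = nn.functional.cross_entropy(discriminator(feats), dom_labels)
    return task_loss + dom_loss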

As with other transfer learning techniques, one ultimate goal of UDA is to train a deep learning model only with synthetic labeled data, for example CGI, and apply the model to a totally unlabeled dataset. Converting a source domain to look like a target domain with CycleGAN is a frequently used approach in domain adaptation; the source domain is supposed to be the easier one to annotate. The figure below is an example of converting black-and-white cell images to colored images.

*You can easily try converting data with CycleGAN by preparing two datasets; I made the converted data above myself. But you need at least one GPU to try that.

However, some people question how useful UDA really is. In the first place, if you do not have any labels in the target domain, you cannot quantitatively evaluate anything on the dataset of interest. And if you can prepare at least some evaluation data or labels, applying other techniques like fine-tuning might already be enough.

Meta learning and few-shot learning

One simple way to explain meta learning is that it is a machine learning approach that teaches models how to learn efficiently. We can also say that it is a transfer learning setting where the target domains are unknown in advance. A famous meta learning method is Model-Agnostic Meta-Learning (MAML). MAML looks for an initial parameter \boldsymbol{\theta} which can be quickly and effectively adapted to new tasks. As in the figure below, \boldsymbol{\theta} is driven toward the generally convenient parameter shown as the black dot, from which it can quickly reach the task-specific parameters \boldsymbol{\theta}_{i}^{\ast} that are effective for each task.
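A very rough sketch of the MAML inner/outer loop is given below; it assumes tasks that each provide a support batch and a query batch, relies on torch.func.functional_call from PyTorch 2.x, and omits many practical details (multiple inner steps, first-order variants, and so on).

# One meta-training step of MAML: adapt on each task's support set,
# then update the shared parameters so that the adapted models do well on the query sets.
import torch

def maml_step(model, tasks, loss_fn, meta_optimizer, inner_lr=0.01):
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        params = dict(model.named_parameters())
        grads = torch.autograd.grad(loss_fn(model(support_x), support_y),
                                    params.values(), create_graph=True)
        adapted = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}
        # Evaluate the task-adapted parameters theta_i* on the query set.
        query_pred = torch.func.functional_call(model, adapted, (query_x,))
        meta_loss = meta_loss + loss_fn(query_pred, query_y)
    meta_optimizer.zero_grad()
    meta_loss.backward()                     # update theta so that one-step adaptation works well
    meta_optimizer.step()
    return meta_loss.item()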

Another interesting application of meta learning is few-shot learning. Few-shot learning trains a classification model to acquire classification ability from only a very few samples. By letting the model solve classification tasks over many episodes, it learns to learn efficiently from limited samples at test time. The figure below shows a case of few-shot learning, where a model goes through episodes of 3-class classification with only 4 samples per class. Few-shot learning attempts to enable human-level flexibility of perception. MAML is known to be effective for few-shot learning as well.

However, recent studies also show that fine-tuning pre-trained models with a few samples gives results competitive with few-shot learning. Similar things can be said about large language models like GPT: ChatGPT or GPT-3/GPT-4, for example, can be fine-tuned with small amounts of extra training data, and the logic behind that is different from meta learning. Fine-tuning pre-trained models might actually be closer to human learning: humans can learn new topics effectively based on what they have experienced so far. Thus, again, fine-tuning can be the easier and more realistic solution.

I have given an overview of machine learning techniques for handling lack of data, and as you might have noticed, fine-tuning models could be enough in many cases. I am not sure how widely the other transfer learning techniques will prove as useful as fine-tuning at a business level. At least, I hope this article can serve as a rough guideline for machine learning tasks with small amounts of data or labels. If you get a chance to work on very unique data with very few labels, you will not be able to rely only on naive fine-tuning of pre-trained models; your task will have its own problems, and you will have to be careful about EDA, data cleaning, and labeling, and should consider some of the techniques introduced here. Hopefully someday I will write more detailed tutorials on each transfer learning technique. And I hope you will be able to apply a variety of transfer learning locally, not only relying on the huge resources of gigantic entities. That would lead to a more secure future, I guess.