All about Big Data Storage and Analytics

5 Apache Spark Best Practices

You are probably already familiar with the term big data. Although everyone talks about Big Data, it can take a long time before you actually confront it in your career. Apache Spark is a Big Data tool that aims to handle large datasets in a parallel and distributed manner. Apache Spark began in 2009 as a research project at UC Berkeley’s AMPLab, a collaboration of students, researchers, and faculty focused on data-intensive application domains.

Introduction

Spark’s aim was to create a new framework optimized for fast iterative processing, such as machine learning and interactive data analysis, while retaining Hadoop MapReduce’s scalability and fault tolerance. Spark outperforms Hadoop in many ways, reaching performance levels that are nearly 100 times higher in some cases. Spark has a number of components for various types of processing, all of which are built on Spark Core. Today we will briefly discuss Apache Spark and 5 of its best practices to look forward to.

What is Apache Spark?

Apache Spark is an open-source distributed system for big data workloads. For fast analytic queries against data of any size, it uses in-memory caching and optimised query execution. It is a parallel processing framework that runs large-scale data analytics applications across clusters of computers. It can handle batch as well as real-time data processing and predictive analytics workloads.

It supports code reuse across multiple workloads—batch processing, interactive queries, real-time analytics, machine learning, and graph processing—and offers development APIs in Java, Scala, Python, and R. With 365,000 meetup members in 2017, Apache Spark has become one of the most renowned big data distributed processing frameworks. Explore an Apache Spark tutorial for more information.

5 best practices of Apache Spark

1. Begin with a small sample of the data.

Because we want to make big data work, we need to start with a small sample of data to see if we’re on the right track. In my project, I sampled 10% of the data and verified that the pipelines were working properly. This allowed me to use the SQL section of the Spark UI to watch the numbers grow throughout the flow while not having to wait too long for it to complete.

In my experience, if you attain your preferred runtime with a small sample, scaling up is usually simple.
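
As a rough illustration of this practice, the PySpark snippet below reads a dataset (the path and session setup are placeholders, not from the original project) and works on a 10% sample while developing the pipeline:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("pipeline-dev").getOrCreate()

    # Placeholder input path; point this at your own dataset.
    df = spark.read.parquet("s3://my-bucket/events/")

    # Develop against roughly 10% of the data first.
    sample_df = df.sample(fraction=0.1, seed=42)
    sample_df.count()  # forces evaluation so the SQL tab of the Spark UI shows the flow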

2. Spark troubleshooting

For transformations, Spark has a lazy evaluation behaviour. That is, it will not start computing the transformation; instead, it keeps a record of the transformations requested. This makes it difficult to determine where in our code there are bugs or areas that need to be optimised. One practice that we found useful was splitting the code into sections with df.cache() and then using df.count() to force Spark to compute the df at each section.

Spark actions are eager in that they trigger the underlying computation. So if you have a Spark action that you only call when it is required, pay attention. A Spark action, for instance, is count() on a dataset. You can then inspect the computation of each section using the Spark UI and identify any issues. It’s important to note that if you don’t use the sampling we mentioned in (1), you’ll probably end up with a very long runtime that’s difficult to debug.
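
A minimal sketch of that sectioning pattern, assuming two hypothetical DataFrames raw_df and users_df: caching and counting after each logical block makes the Spark UI attribute runtime (and failures) to the right part of the code.

    # Section 1: filter
    stage1 = raw_df.filter("event_type = 'click'").cache()
    print("rows after filter:", stage1.count())   # count() is an eager action

    # Section 2: join
    stage2 = stage1.join(users_df, "user_id").cache()
    print("rows after join:", stage2.count())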


3. Finding and resolving Skewness is a difficult task.

Let’s begin with a definition of skewness. As previously stated, our data is divided into partitions, and as transformations progress the size of each partition is likely to change. This can result in a large difference in size between partitions, which means that our data is skewed: a few of the tasks are markedly slower than the rest.

Looking at the stage details in the Spark UI and checking for a major difference between the max and the median task duration can help you spot the skewness.

Why is this even a bad thing? Because it may cause other stages to wait in line for these few tasks, leaving cores idle. Once you understand where the skewness is coming from, you can fix it right away by changing the partitioning, as in the sketch below.
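
The following sketch (the column names are made up for illustration) shows one way to inspect partition sizes and two common remedies: repartitioning on a better key, or salting a skewed join key.

    from pyspark.sql import functions as F

    # Rough skew check: rows per partition
    df.groupBy(F.spark_partition_id().alias("partition")) \
      .count() \
      .orderBy(F.desc("count")) \
      .show(10)

    # Remedy 1: repartition on a higher-cardinality key
    balanced = df.repartition(200, "customer_id")

    # Remedy 2: add a random salt to spread a hot key across partitions
    salted = df.withColumn("salt", (F.rand(seed=7) * 16).cast("int"))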

4. Appropriately cache

Spark allows you to cache datasets in memory. There are a variety of options to choose from:

  • If the same computation is used several times in the pipeline flow, cache it.
  • Use the persist API to enable caching with the required settings (persist to disk or not; serialized or not), as in the sketch after this list.
  • Be cognizant of lazy evaluation and, if necessary, prime the cache up front. Some APIs are eager, while others aren’t.
  • To see information about the datasets you’ve cached, go to the Storage tab in the Spark UI.
  • It’s a good idea to unpersist your cached datasets after you’ve finished using them to free up resources, especially if other people are using the cluster.
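
A short sketch of those points, with a placeholder DataFrame: persist with an explicit storage level, prime the cache eagerly, and release it when done.

    from pyspark import StorageLevel

    # Reused by several downstream jobs, so keep it in memory and spill to disk if needed.
    features = df.select("user_id", "score").persist(StorageLevel.MEMORY_AND_DISK)
    features.count()      # persist() is lazy; an action primes the cache up front

    # ... downstream jobs reuse `features` here ...

    features.unpersist()  # free executor memory once the dataset is no longer needed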

5. Spark has issues with iterative code.

This was particularly difficult. Spark uses lazy evaluation, so when the code runs it only builds a computational graph, a DAG. With an iterative process, however, this becomes very problematic, because the DAG reopens all prior iterations and becomes extremely large — and we mean extremely large. It may be too large for the driver to keep in memory. Because the application is stuck, it appears in the Spark UI as if no jobs are running (which is correct) for an extended period of time — until the driver crashes.

This is currently a known issue with Spark, and the workaround that worked for me was to call df.checkpoint() or df.localCheckpoint() every 5–6 iterations (find your number by experimenting a bit). This works because, unlike cache(), checkpoint() breaks the lineage and the DAG, saves the results, and starts from a new checkpoint. The disadvantage is that you don’t have the entire DAG to recreate the df if something goes wrong.
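
A sketch of the workaround, assuming a hypothetical one_iteration() transformation; the checkpoint directory and the interval of 5 iterations are arbitrary choices to tune for your job.

    spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")  # required for checkpoint()

    df = initial_df
    for i in range(1, 51):
        df = one_iteration(df)        # placeholder for the iterative transformation
        if i % 5 == 0:
            # Truncate the lineage so the DAG (and the driver's memory use) stays small.
            df = df.checkpoint()      # or df.localCheckpoint() to skip writing to reliable storage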

Conclusion

Spark is now one of the most popular projects in the Hadoop ecosystem, with many companies using it in conjunction with Hadoop to process large amounts of data. In June 2013, Spark was accepted into the Apache Software Foundation’s (ASF) incubator, and in February 2014 it was designated an Apache Top-Level Project. Spark can run standalone, on Apache Mesos, or, most commonly, on Apache Hadoop. Spark is used by large enterprises working with big data applications because of its speed and its ability to connect to multiple types of databases and run various types of analytics applications.

Learning how to make Spark work its magic takes time, but these 5 practices will help you move your project forward and sprinkle some spark charm on your code.

Data Vault 2.0 – Flexible Data Modeling

What is Data Vault 2.0?

Data Vault 2.0 is a modeling approach for enterprise data warehouses that was published by Dan Linstedt in 2000 and has been continuously developed since then.

In contrast to the normalized data warehouse as defined by Inmon [1], a Data Vault model is function-oriented across all business areas rather than subject-oriented [2]. One and the same product, for example, is visible with the same business key for sales, marketing, accounting, and production.

Data Vault is a combination of the star schema and the third normal form [3], with the goal of mapping business processes as a data model. This requires close collaboration with the respective business departments and a good understanding of the business processes.

The layers of the data warehouse:

Data warehouse with Data Vault and data marts

The data is first loaded into the Raw Vault via a staging area.

Up to this point, it is only changed structurally, i.e. brought from its original form into the Data Vault structure. Changes to the content only take place in the Business Vault, where the business logic is applied to the data.

The information marts form the basis for the reporting layer. Tables do not necessarily have to be created here; views can be sufficient. Here, hubs become dimensions and links become fact tables, each enriched with information from the associated satellites.

The basic elements of the Data Vault model:

Data is loaded from the source systems into so-called hubs, links, and satellites in the Raw Vault:

Data Vault 2.0 schema

Hub:

Hub tables describe a business object, for example a customer, a product, or an invoice. They contain a business key (one or more columns that uniquely identify an entry), a hash key (a hash of the business keys), as well as the data source and a load timestamp.

Link:

A link describes an interaction or transaction between hubs, for example an invoice line as a combination of invoice, customer, and product. An entry in a link table is also uniquely identifiable via a hash key.

Satellite:

A satellite contains additional information about a hub or a link. A customer satellite, for example, contains the customer’s name and address as well as a hashdiff (a hash of the attributes used to uniquely identify an entry) and a load timestamp.
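
To make the hash key and hashdiff columns more concrete, here is a small illustrative Python sketch. The MD5 hash, the "||" delimiter, and the upper-casing are common conventions rather than a fixed part of the Data Vault 2.0 standard, and the column names are invented for the example.

    import hashlib
    from datetime import datetime, timezone

    def hash_key(*business_keys: str) -> str:
        # Hash over the (normalized) business key columns of a hub or link.
        normalized = "||".join(k.strip().upper() for k in business_keys)
        return hashlib.md5(normalized.encode("utf-8")).hexdigest()

    def hash_diff(*attributes: str) -> str:
        # Hash over all descriptive attributes of a satellite row, used for change detection.
        return hashlib.md5("||".join(a.strip() for a in attributes).encode("utf-8")).hexdigest()

    hub_customer_row = {
        "customer_hash_key": hash_key("CUST-4711"),
        "customer_business_key": "CUST-4711",
        "record_source": "crm",
        "load_ts": datetime.now(timezone.utc),
    }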

Challenges in modeling

Creating the complete Data Vault model requires not only close collaboration with the business departments but also good planning in advance. There are often several valid modeling options to choose from, and the option that fits the respective company best has to be selected.

It is also important to think about the manageability of the model in advance, as the number of tables can easily explode and many joins that might otherwise be avoidable become necessary.

Although Data Vault has existed as a concept for many years, not much information is freely available online, especially for more complex modeling and performance problems.

Additional elements:

Beyond the core elements, further tables are necessary to exploit the full functionality of the Data Vault concept:

PIT table

Point-in-time tables show a snapshot of the data at a specific point in time. They contain the hash keys and hashdiffs of the hubs or links and their associated satellites. This makes it easy to quickly find the most recent satellite entry for a hash key.

Reference tables

Additional, largely static tables, for example calendar tables.

Effectivity satellite

These satellites track the validity of satellite entries and mark deleted records with a timestamp. They can be processed in the PIT tables to filter out invalid records.

Bridge table

Bridge tables are part of the Business Vault and contain only hub and link hash keys. They are similar to fact tables and serve to prepare the key combinations required by end users.

Advantages and disadvantages of Data Vault 2.0

Advantages:

  • Since hubs, links, and satellites are independent of each other, they can be loaded quickly and in parallel.
  • Thanks to the modularity of the approach, first projects can be implemented quickly.
  • Complete historization of all data, since data is never deleted.
  • Traceability of the data
  • Handling of personal data in dedicated satellites
  • The data model can easily be extended
  • Merging data from different sources is possible in principle
  • An almost complete automation of the Raw Vault loading processes is possible, since the basic pattern is always the same.

Disadvantages:

  • Relatively little information, guidance, and practical examples can be found online, and Dan Linstedt’s handbook is not well organized.
    • Merging different source systems is barely documented in the available literature and is laborious in practice.
  • Considerable research effort up front and a certain ramp-up and experimentation phase, also regarding tool selection, are advisable.
  • PIT tables, bridge tables, and effectivity satellites create a lot of additional overhead that has to be managed.
  • Business logic can greatly increase the complexity of the data model.
  • Automation of the Business Vault is only possible to a limited extent.

Practical example: Raw Vault for an order:

Designing a Raw Vault model works in several steps:

  1. Identify business keys and define hubs
  2. Identify the connections (links) between the hubs
  3. Add additional information about the hubs in satellites

Suppose we want to model an order, including invoice and shipment, as a Data Vault.

Hubs are all entities that can be identified by a unique ID, a business key. For example, we create a hub for the customer, the product, the channel through which the order comes in (online / by phone), the order itself, the associated invoice, a cost center to be booked, payments, and delivery. This list could be extended at will.

Each entry in one of these hubs is uniquely identifiable by a key: the invoice by the invoice number, the product by an SKU, the customer by the customer number, and so on.

An order line can now be modeled as a link consisting of order (in the sense of the order header), customer, invoice, channel, product, delivery, cost center, and order line number.

Analogously, invoice and delivery can also be modeled as combinations of several hubs.

All hubs are then assigned one or more satellites that contain additional information about their respective hub.

Personal data, for example customers’ names and addresses, is stored in separate satellites. This makes compliance with the GDPR straightforward.

Data Vault 2.0 example: order data model

Conclusion

Data Vault is a modeling approach that makes sense above all for organizations with many source systems and frequently changing data. In such cases the effort required to design and set up a Data Vault pays off, and the benefits in the form of flexibility, historization, and traceability of the data really come into play.

Sources

[1] W. H. Inmon, What is a Data Warehouse? Volume 1, Number 1, 1995

[2] Dan Linstedt, Super Charge Your Data Warehouse: Invaluable Data Modeling Rules to Implement Your Data Vault. CreateSpace Independent Publishing Platform, 2011

[3] Cf. Linstedt 2011

Further links and resources

Blog article by Analytics Today

Frequently asked questions

Introduction to Data Vault by Kent Graziano: pdf

Website of Dan Linstedt with lots of information and articles

“Building a Scalable Data Warehouse with Data Vault 2.0” by Dan Linstedt (Amazon link)

Big Data with Hadoop and MapReduce!

Photo by delfi de la Rua on Unsplash.

Hadoop is a software framework that allows large amounts of data to be processed quickly on distributed systems. It has mechanisms that ensure stable and fault-tolerant operation, making the tool well suited for data processing in the Big Data environment. In these cases, a conventional relational database is often not sufficient to store the unstructured data volumes cost-effectively and efficiently.

Differences between Hadoop and a relational database

Hadoop differs from a comparable relational database in a few fundamental properties.

Property | Relational database | Hadoop
Data types | only structured data | all data types (structured, semi-structured, and unstructured)
Data volume | small to medium (in the range of a few GB) | large data volumes (in the range of terabytes or petabytes)
Query language | SQL | HQL (Hive Query Language)
Schema | static schema (schema on write) | dynamic schema (schema on read)
Costs | license costs depending on the database | free of charge
Data objects | relational tables | key-value pairs
Type of scaling | vertical scaling (the machine’s hardware has to be upgraded) | horizontal scaling (additional machines can be added to absorb the load)

Comparison of Hadoop and a relational database

Components of Hadoop

The software framework itself is a collection of a total of four components.

Hadoop Common is a collection of various modules and libraries that support the other components and enable them to work together. Among other things, the Java archive files (JAR files) required to start Hadoop are stored here. In addition, the collection provides basic services such as the file system.

The MapReduce algorithm originally goes back to Google and helps to divide complex computing tasks into more manageable subprocesses and then distribute them across several systems, i.e. to scale horizontally. This significantly reduces computing time. At the end, the results of the subtasks have to be combined again into an overall result.

The Yet Another Resource Negotiator (YARN) supports the MapReduce algorithm by keeping track of the resources within a computer cluster and distributing the subtasks to the individual machines. In addition, it allocates the capacities for the individual processes.

The Hadoop Distributed File System (HDFS) is a scalable file system for storing intermediate or final results. Within the cluster, it is distributed across several machines in order to process large amounts of data quickly and efficiently. The underlying idea was that Big Data projects and data analyses are based on large amounts of data, so there should be a system that stores the data in batches and processes it quickly. HDFS also ensures that duplicates of records are stored in order to cope with the failure of a machine.

MapReduce by example

Suppose we have stored all volumes of the Harry Potter novels as PDFs in Hadoop and now want to count the individual words that occur in the books. This is a classic task where splitting the work into a map function and a reduce function can help us.

Before it was possible to split such elaborate queries across a whole computer cluster and compute them in parallel, one was forced to run through the complete dataset sequentially. The query time therefore grew with the size of the dataset. The only way to speed up the execution of the function was to equip a computer with a more powerful processor (CPU), i.e. to improve its hardware. Trying to speed up the execution of an algorithm by improving the hardware of the machine is called vertical scaling.

With MapReduce it is possible to speed up such a query significantly by splitting the task into smaller subtasks. This in turn has the advantage that the subtasks can be distributed to, and executed by, many different computers. As a result, we do not have to improve the hardware of a single machine, but can use many comparatively less powerful computers and still reduce the query time. Such an approach is called horizontal scaling.

Coming back to our example: so far, figuratively speaking, we proceeded by reading all the Harry Potter volumes and simply extending the tally sheet of individual words by one mark after each word we read. The problem is that this approach cannot be parallelized. Suppose a second person wants to help us: they cannot, because they need the tally sheet we are currently working with in order to continue. As long as they do not have it, they cannot help.

They can support us, however, by starting with the second volume of the Harry Potter series and creating a separate tally sheet just for the second book. At the end, we can merge all the individual tally sheets and, for example, add up the frequency of the word “Harry” across all of them.

MapReduce illustrated by counting words in the Harry Potter books | Source: Data Basecamp

This also makes it relatively easy to scale the task horizontally by having one person work on each Harry Potter book. If we want to work even faster, we can involve more people and have each person work on a single chapter. At the end, we only have to combine the results of all the individual people to arrive at an overall result.
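
The idea can be sketched in plain Python: a map phase that builds one tally per book (here simulated with a process pool) and a reduce phase that merges the tallies. The book texts are placeholders; a real job would read them from HDFS.

    from collections import Counter
    from multiprocessing import Pool

    def map_phase(text):
        # One worker per book/chapter: build a local word tally.
        return Counter(text.lower().split())

    def reduce_phase(partial_counts):
        # Merge all local tallies into the overall result.
        total = Counter()
        for partial in partial_counts:
            total.update(partial)
        return total

    if __name__ == "__main__":
        books = ["text of book one ...", "text of book two ..."]  # placeholder texts
        with Pool() as pool:
            partials = pool.map(map_phase, books)
        print(reduce_phase(partials).most_common(5))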

The detailed example and its implementation in Python can be found here.

Structure of a Hadoop Distributed File System

The core idea of the Hadoop Distributed File System is to distribute the data across different files and computers so that queries can be processed quickly and the user does not face long waiting times. To ensure that the failure of a single machine in the cluster does not lead to data loss, there are targeted replications on different computers to guarantee resilience.

Hadoop generally works according to the so-called master-slave principle. Within the computer cluster, one node takes on the role of the master. In our example, it does not perform any direct computation itself, but merely distributes the tasks to the so-called slave nodes and coordinates the whole process. The slave nodes in turn read the books and store the word frequencies and the word distribution.

This principle is also used for data storage. The master distributes information from the dataset across various slave nodes and remembers on which computers it has stored which partitions. It also stores the data redundantly in order to be able to compensate for failures. When the user queries the data, the master node decides which slave nodes it has to ask in order to obtain the desired information.

CAP Theorem

Understanding databases for storing, updating, and analyzing data requires an understanding of the CAP theorem. This is the second article of the article series Data Warehousing Basics.

Understanding NoSQL Databases by the CAP Theorem

The CAP theorem – or Brewer’s theorem – was introduced by the computer scientist Eric Brewer at the Symposium on Principles of Distributed Computing in 2000. CAP stands for Consistency, Availability and Partition tolerance.

  • Consistency: Every read receives the most recent write or an error. Once a client writes a value to any server and gets a response, it expects to get a fresh and valid value back from any server or node of the database cluster it reads from.
    Be aware that consistency as defined for CAP means something different than consistency in ACID (relational consistency).
  • Availability: The database is not allowed to be unavailable because it is busy with requests. Every request received by a non-failing node in the system must result in a response. Whether you want to read or write, you will get some response back. If the server has not crashed, it is not allowed to ignore the client’s requests.
  • Partition tolerance: Databases which store big data will use a cluster of nodes that distribute the connections evenly over the whole cluster. If this system has partition tolerance, it will continue to operate despite a number of messages being delayed or even lost by the network between the cluster nodes.

The CAP theorem states that a distributed system can simultaneously provide only two of the above three guarantees. Eric Brewer, the father of the CAP theorem, later qualified this strict “two out of three” reading: “by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three.” (Brewer, E., 2012).

CAP Theorem Triangle

To recap: with the CAP theorem in relation to Big Data distributed solutions (such as NoSQL databases), it is important to reiterate that in such distributed systems it is not possible to guarantee all three characteristics (Availability, Consistency, and Partition Tolerance) at the same time.

Database systems designed to fulfill traditional ACID guarantees, like relational database (management) systems (RDBMS), choose consistency over availability, whereas NoSQL databases are mostly systems designed along the BASE philosophy, which prefers availability over consistency.

The CAP Theorem in the real world

Let’s look at some examples to understand the CAP theorem further and see why we cannot create database systems that are consistent, partition tolerant, and always available simultaneously.

AP – Availability + Partition Tolerance

If we have achieved Availability (our database will always respond to our requests) as well as Partition Tolerance (all nodes of the database will keep working even if they cannot communicate), it immediately means that we cannot provide Consistency, as the nodes will go out of sync as soon as we write new information to one of them. The nodes will continue to accept database transactions separately, but they cannot transfer the transactions between each other to keep them in sync. We therefore cannot fully guarantee system consistency. When the partition is resolved, AP databases typically resync the nodes to repair all inconsistencies in the system.

A well-known real-world example of an AP system is the Domain Name System (DNS). This central network component is responsible for resolving domain names into IP addresses and focuses on the two properties of availability and failure tolerance. Thanks to the large number of servers, the system is available almost without exception. If a single DNS server fails, another one takes over. According to the CAP theorem, DNS is not consistent: if a DNS entry is changed, e.g. when a new domain has been registered or deleted, it can take a few days before this change is propagated through the entire system hierarchy and can be seen by all clients.

CA – Consistency + Availability

Guaranteeing full Consistency and Availability is practically impossible in a system which distributes data over several nodes. We can have databases spanning more than one node online and available while keeping the data consistent between these nodes, but the nature of computer networks (LAN, WAN) is that the connection can get interrupted, meaning we cannot guarantee Partition Tolerance and therefore not the reliability of having the whole database service online at all times.

Database management systems based on the relational database model (RDBMS) are a good example of CA systems. These database systems are primarily characterized by a high level of consistency and strive for the highest possible availability. In case of doubt, however, availability can decrease in favor of consistency. Reliability through distributing data over partitions in order to make data reachable in any case – even in the event of a computer or network failure – plays a subordinate role here.

CP – Consistency + Partition Tolerance

If Consistency of the data is given – which means that the data on two or more nodes always contains the up-to-date information – and Partition Tolerance is given as well – which means that we are avoiding any desynchronization of our data between the nodes – then we will lose Availability as soon as a partition occurs between any two nodes. In most distributed systems, high availability is one of the most important properties, which is why CP systems tend to be a rarity in practice. These systems prove their worth particularly in the financial sector: banking applications that must reliably debit and transfer amounts of money on the account side depend on consistency and on reliability through consistent redundancies, so as to always be able to rule out incorrect postings – even in the event of disruptions in the data traffic. If consistency and reliability cannot be guaranteed, the system may be unavailable to its users.

CAP Theorem Venn Diagram

Conclusion

The CAP theorem is still an important topic for data engineers and data scientists to understand, but many modern databases let us choose between the possibilities within the CAP theorem. For example, Cosmos DB from Microsoft Azure offers many granular options to balance consistency, availability, and partition tolerance. A common misunderstanding of the CAP theorem is to treat it as absolute: “All three properties are more continuous than binary. Availability is continuous from 0 to 100 percent, there are many levels of consistency, and even partitions have nuances. Exploring these nuances requires pushing the traditional way of dealing with partitions, which is the fundamental challenge. Because partitions are rare, CAP should allow perfect C and A most of the time, but when partitions are present or perceived, a strategy is in order.” (Brewer, E., 2012).

ACID vs BASE Concepts

Understanding databases for storing, updating, and analyzing data requires an understanding of two concepts: ACID and BASE. This is the first article of the article series Data Warehousing Basics.

The ACID properties are applied to databases in order to fulfill enterprise requirements for reliability and consistency.

ACID is an acronym, and stands for:

  • Atomicity – Each transaction is either executed completely or does not happen at all. If the transaction is not finished, the process reverts the database back to the state before the transaction started. This ensures that all data in the database is valid even if we execute big transactions which bundle multiple statements (e.g. SQL) into one transaction updating many data rows. If one statement fails, the entire transaction is aborted, and hence no changes are made (a small sketch after this list illustrates this rollback behaviour).
  • Consistency – Databases are governed by specific rules defined by table formats (data types) and table relations as well as further functions like triggers. The consistency of data stays reliable if transactions never endanger the structural integrity of the database. Therefore, it is not allowed to save data of different types into the same single column, to reuse primary key values, or to delete data from a table which is strictly related to data in another table.
  • Isolation – Databases are multi-user systems where multiple transactions happen at the same time. With isolation, transactions cannot compromise the integrity of other transactions by interacting with them while they are still in progress. It guarantees that data tables end up in the same state whether several transactions happen concurrently or sequentially.
  • Durability – The data related to a completed transaction will persist even in cases of network or power outages. Databases that guarantee durability save inserted or updated data permanently, keep a log of all executed and planned transactions, and ensure availability of the data committed via a transaction even after a power failure or other system failures. If a transaction fails to complete successfully because of a technical failure, it will not alter the targeted data.
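
As a small illustration of atomicity, the following Python/sqlite3 sketch wraps a debit and a credit in one transaction; when the credit fails, the debit is rolled back automatically. The table and account numbers are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
    conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
    conn.commit()

    try:
        with conn:  # one transaction: both statements commit together or not at all
            conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
            credited = conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 99")
            if credited.rowcount == 0:          # target account does not exist
                raise RuntimeError("credit failed")
    except RuntimeError:
        pass  # the debit above was rolled back together with the failed credit

    print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone())  # -> (100,)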

ACID Databases

The ACID transaction model ensures that all performed transactions result in reliable and consistent databases. This suits best for businesses which use OLTP (Online Transaction Processing) for IT systems such as ERP or CRM systems. Furthermore, it can also be a good choice for OLAP (Online Analytical Processing), which is used in data warehouses. These applications need backend database systems which can handle many small or medium-sized transactions occurring simultaneously from many users. An interrupted transaction with write access must be removed from the database immediately, as it could cause negative side effects impacting consistency (e.g., vendors could be deleted although they still have open purchase orders, or financial payments could be debited from one account and, due to a technical failure, never credited to another).

Querying should be as fast as possible, but even more important for these applications is zero tolerance for invalid states, which is achieved by using ACID-compliant databases.

BASE Concept

ACID databases have their advantages but also one big tradeoff: if all transactions need to be committed and checked for consistency correctly, the databases are slow in reading and writing data. Furthermore, they demand more effort when it comes to storing new data in new formats.

In chemistry, a base is the opposite of an acid. The database concepts of BASE and ACID have a similar relationship. The BASE concept provides several benefits over ACID-compliant databases, as it focuses more intensely on the data availability of database systems without guaranteeing safety from network failures or inconsistency.

The acronym BASE is even more confusing than ACID as BASE relates to ACID indirectly. The words behind BASE suggest alternatives to ACID.

BASE stands for:

  • Basically Available – Rather than enforcing consistency in every case, BASE databases guarantee availability of data by spreading and replicating it across the nodes of the database cluster. Basic read and write functionality is provided without a guarantee of consistency. In rare cases it could happen that an insert or update statement does not result in persistently stored data. Read queries might not return the latest data.
  • Soft State – Databases following this concept do not check rules to stay write-consistent or mutually consistent. The user can toss all data into the database, and the responsibility for avoiding inconsistency or redundancy is delegated to developers or users.
  • Eventually Consistent – The absence of enforced immediate consistency does not mean that the database never achieves it. The database can become consistent over time. After a waiting period, updates ripple through all cluster nodes of the database. However, reading data will always remain possible; it is just not certain whether we always get the most recently refreshed data.

All three of the above-mentioned properties of BASE-conforming databases sound like disadvantages. So why would you choose BASE? There is a tradeoff compared to ACID. If a database does not have to follow the ACID properties, it can work much faster in terms of writing and reading. Furthermore, developers have more freedom to implement data storage solutions or to simplify data entry into the database without having to think about formats and structure beforehand.

BASE Databases

While ACID databases are mostly RDBMS, most other database types, known as NoSQL databases, tend to conform to BASE principles. Redis, CouchDB, MongoDB, Cosmos DB, Cassandra, ElasticSearch, Neo4j, OrientDB, and ArangoDB are just some popular examples. But unlike ACID, BASE is not a strict approach. Some NoSQL databases apply at least partly to ACID rules or provide optional features to achieve almost or even full ACID compatibility. These databases provide different levels of freedom, which can be useful for the staging layer in data warehouses or as a data lake, but they are not the recommended choice for applications which need data environments guaranteeing strict consistency.

Data Warehousing Basics

Data warehousing is applied big data management and a key success factor in almost every company. Without a data warehouse, no company today can control its processes and make the right decisions on a strategic level, as there would be a lack of data transparency for all decision makers. Bigger companies even have multiple data warehouses for different purposes.

In this series of articles I would like to explain what a data warehouse actually is and how it is set up. However, I would also like to explain basic topics regarding Data Engineering and concepts about databases and data flows.

To do this, we tick off the following points step by step:

 

Modern Business Intelligence in the Microsoft Azure Cloud

Google, Amazon, and Microsoft are the three big players in cloud computing. The cloud is an option for almost every conceivable scenario, for example hosting enterprise software, web applications, or applications for mobile devices. Beyond these classics, the cloud also plays an important role as an enabler for the Internet of Things, blockchain, and artificial intelligence. In this article we look at the cloud provider Microsoft Azure with regard to the possibilities of building a modern business intelligence or data platform for companies.

A question of architecture

When designing the architecture, many questions arise:

  • Which database will be used for the data warehouse?
  • How should ETL pipelines be built and orchestrated?
  • Which BI reporting tool should be used?
  • Does data have to be provided in near real time?
  • Should self-service BI be used?
  • … and many more questions.

1 Microsoft Azure’s reference models for business intelligence architectures

The many services of Microsoft Azure allow countless usage scenarios and are hard to survey in their entirety even for cloud experts. Microsoft therefore proposes several reference models for data platforms or business intelligence systems with different orientations. We will briefly present and discuss some of them in this article.

1a Automated enterprise BI instance

This reference architecture for automated and rather classic BI illustrates the approach of incremental loading in an ELT pipeline with the tool Data Factory. Data Factory is the cloud successor of the on-premise ETL tool SSIS (SQL Server Integration Services) and is used not only to build the pipelines but also for orchestration (triggers/schedules for automated execution and error handling). Via pipelines in Data Factory, the latest OLTP data is incrementally loaded from a local SQL Server database (on-premise) into Azure Synapse; the transaction data is then transformed into a tabular model for analysis, for which MS Azure Analysis Services (formerly SSAS on-premise) is used. As the tool for visualizing the data, Microsoft proposes MS Power BI here and in all other reference models. MS Azure Active Directory connects the tools on Azure via unified users in the Active Directory in the Azure cloud.

Source: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/data/enterprise-bi-adf

Some discussion points on the BI reference architecture of MS Azure

Following the reference architecture proposed by Microsoft can be a good idea, but it really is only a proposal – perhaps even more of a sales proposal. Enterprise BI is highly individual and requires some discussion before the architecture is fixed.

Azure Data Factory as an ETL tool

Azure Data Factory is proposed as the ETL tool in this reference architecture. It is indeed very powerful and can be operated purely via mouse clicks. In addition, it can be orchestrated and pipeline-modeled via Python or PowerShell, for example. The clue of this reference architecture is the reference to the on-premise data sources. If SSIS was used before, the SSIS packages can be migrated to Data Factory.
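
As a small, non-authoritative sketch of the Python angle just mentioned: with the azure-mgmt-datafactory SDK, an existing pipeline can be triggered programmatically. This is not part of the reference architecture itself, and the subscription, resource group, factory, pipeline name, and parameter are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Trigger an existing pipeline run, e.g. the incremental load of the reference architecture.
    run = adf_client.pipelines.create_run(
        resource_group_name="<resource-group>",
        factory_name="<data-factory-name>",
        pipeline_name="<pipeline-name>",
        parameters={"watermark": "2021-01-01"},  # illustrative pipeline parameter
    )
    print(run.run_id)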

The choice of databases

The advantage of this reference architecture is without doubt its good positioning with regard to versatile usage scenarios: external data (assuming it is unstructured or semi-structured) is first staged in Azure Blob Storage or in the Azure Data Lake, which builds on Blob Storage, before it can be transformed via Data Factory into a structure suitable for Azure Synapse. Blob Storage could possibly be omitted, however, as long as only data from known, structured databases of the upstream systems is processed. As a staging layer and for data historization, Azure Blob Storage or the Azure Data Lake are nevertheless good options, since they are particularly inexpensive per unit of data.

Azure Synapse is a powerful database, at least on a par with row- and column-oriented, distributed in-memory databases such as Amazon Redshift, Google BigQuery, or SAP HANA. Azure Synapse offers many established features of a modern data warehouse and adds new features every year, which are first released as previews, for example the use of machine learning directly on the database.

It is open to discussion, however, whether these features and the high speed of Azure Synapse (when used correctly) justify its comparatively high costs. Alternatively, MySQL/MariaDB or PostgreSQL databases can be used on MS Azure. These should be used with caution, or only after careful consideration, since they are not fully supported by Azure Data Factory in pipeline design. A good compromise can be the use of Azure SQL Database, the actual successor of the on-premise solution MS SQL Server. MS Azure Synapse nevertheless remains the reference here, since this database was developed specifically for use as a data warehouse.

Central cube generation with Azure Analysis Services

MS Azure Analysis Services as the cube engine could be discussed further. This cube engine, originally known on-premise as SQL Server Analysis Services (SSAS) and now available as Analysis Services in the Azure cloud, used to be based (as SSAS) on the language MDX (Multi-Dimensional Expressions), a language strongly inspired by SQL for creating fast calculation formulas for KPIs in cube data models, which requires a basic understanding of multidimensional queries with tuples and sets. Today, the language DAX (Data Analysis Expressions) is used instead of MDX. It is reminiscent of Excel formulas (but by no means identical to them); it is more extensive than MDX, yet easier for the ambitious user to understand and therefore suitable for self-service BI.

A point of discussion is that the cube via Analysis Services itself does not enable self-service BI, since editing the cube with DAX is only possible via special development environments (e.g. Visual Studio). MS Power BI itself is also an instance of Analysis Services, because at the core of Power BI sits the same engine based on DAX. In addition, Power BI offers a user-friendly UI with clickable elements to analyze data and to create or edit KPIs with DAX. If Power BI is foreseeably used as the sole analysis tool in the company, a separate upstream instance of Azure Analysis Services is not necessary. The advantage of Analysis Services to be weighed is the use of the cube in Microsoft Excel by users via Power Pivot. This in turn is its own form of very flexible self-service BI.

1b Enterprise data warehouse architecture

Another reference architecture from Microsoft on Azure is the one for use as a data warehouse, in which Microsoft Azure Synapse takes over the dominant part from data integration through data storage to preliminary analysis.

Source: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/enterprise-data-warehouse

Discussion points on the reference model of the enterprise data warehouse architecture

This reference architecture, too, only makes sense in this form for certain use cases.

Azure Synapse as an ETL tool

In contrast to the previous reference model, Azure Synapse is used here as the ETL tool instead of Azure Data Factory. Azure Synapse has partly inherited the data integration functionality from Azure Data Factory, although Data Factory is still considered the more powerful ETL tool today. Azure Synapse moves further away from the old SSIS logic and does not offer integration of SSIS packages; in addition, some connectors differ between Data Factory and Synapse.

The choice of databases

In this reference architecture, too, Azure Blob Storage is used as an intermediate store or staging layer, but wrapped in the Azure Data Lake, which extends the pure storage with a user layer and simplifies storage management. As a staging layer or for data historization, Blob Storage is a cost-efficient method, but its necessity should still be assessed on a case-by-case basis.

Azure Synapse appears to be the sensible solution in this reference architecture, since not only the Synapse pipelines but also the SQL engine and the Spark engine (via Python notebooks) can be used for applying machine learning (e.g. for recommender systems). Here Azure Synapse plays out its full potential as the core of a modern, intelligence-ready data warehouse architecture.

Azure Analysis Services

Here, too, Microsoft proposes Azure Analysis Services as the cube-generating machinery. What was said before applies: for pure use with Power BI, Analysis Services is unnecessary; however, if users are to be able to run complex, pre-calculated analyses in MS Excel, then Analysis Services pays off.

Azure Cosmos DB

Azure Cosmos DB is most closely comparable to MongoDB Atlas (the cloud version of MongoDB, which would otherwise be hosted on-premise). It is a NoSQL database that can query even very large amounts of data at very high speed via data documents in JSON format. It is considered the currently fastest database in terms of read access and plays out all its advantages when it comes to the mass provision of data to other applications. Companies that provide their customers with mobile applications requiring millions of parallel data accesses rely on Cosmos DB.

1c Reference architecture for real-time analytics

The Microsoft Azure reference architecture for real-time analytics extends the reference architecture for enterprise data warehousing by the ingestion of data streaming.

Discussion points on the reference model for real-time analytics

This reference architecture only makes sense for usage scenarios in which data streaming plays a central role. Put simply, data streaming is about many small, event-triggered incremental data loads (events) that can thereby be executed in near real time. This can be highly relevant for web shops and mobile applications, for example when offers are to be displayed to customers in a highly individualized way, or when market data is to be displayed and interacted with (e.g. trading of securities). Streaming tools bundle such events (or their small chunks of data) into data streaming channels (partitions), which can then be picked up by many services (consumer groups / receivers). Data streaming is also a necessary setup in particular when a company has a microservices architecture, in which many small services (mostly as Docker containers) form a decentralized overall structure. Each service can act as a sender and/or receiver via Apache Kafka. The Azure Event Hub serves to buffer and manage the data streams from the event senders, to load them into Azure Blob Storage or the Data Lake or into Azure Synapse, and to pass them on there or store them for deeper analysis.
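
A brief, illustrative sketch of the sender side with the azure-eventhub Python SDK; the connection string, hub name, and event payload are placeholders rather than part of the reference architecture.

    from azure.eventhub import EventHubProducerClient, EventData

    producer = EventHubProducerClient.from_connection_string(
        conn_str="<event-hub-namespace-connection-string>",
        eventhub_name="orders",
    )

    with producer:
        batch = producer.create_batch()
        batch.add(EventData('{"order_id": 123, "status": "created"}'))
        producer.send_batch(batch)  # the Event Hub buffers the events for its consumer groups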

Azure Event Hub architecture. Source: https://docs.microsoft.com/de-de/azure/event-hubs/event-hubs-about

For data processing in near real time, Azure Data Lake and Azure Synapse currently have few alternatives. Cheaper database instances of MariaDB/MySQL, PostgreSQL, or even Azure SQL Database would be a bottleneck here.

2 Conclusion on the reference architectures

The reference architectures are to be understood as exactly that: a reference. Under no circumstances should such an architecture be adopted for a company without reflection; it should first be brought in line with the data strategy, and at least these questions should be clarified:

  • Which data sources exist, and which will foreseeably exist in the future?
  • Which use cases do I have for business intelligence or the data platform?
  • Which financial and professional resources are available?

Beyond that, architects should be aware that, unlike in the more sluggish on-premise world, cloud services are fast-moving. The 2019/2020 reference architecture still looked somewhat different, with Databricks on Azure included as the system for advanced analytics; today this position in the reference model appears to have been completely replaced by Azure Synapse.

Azure reference architecture (2019) – with Databricks, old image source: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/modern-data-warehouse

A note on costs and administration

The costs of cloud computing compared to on-premise IT infrastructure are a double-edged sword. Getting started in the Azure cloud cheaply is possible, but cost-efficient operation requires a lot of know-how in dealing with the services and configuration options of the Azure cloud or of the respective alternative provider. For example, databases can be scaled up automatically via pipelines in Azure Data Factory and scaled down again after only minutes. Only those who use these dynamic scaling options work efficiently in the cloud.

Furthermore, costs are hard to estimate, since they depend more on usage (data volume, CPU, RAM) than on the duration of use (lifetime). Price calculators at least allow a cost estimate: https://azure.com/e/96162a623bda4911bb8f631e317affc6

How to Efficiently Manage Big Data

The benefits of big data today can’t be ignored, especially since these benefits encompass industries. Despite the misconceptions around big data, it has shown potential in helping organizations move forward and adapt to an ever-changing market, where those that can’t respond appropriately or quickly enough are left behind. Data analytics is the name of the game, and efficient data management is the main differentiator.

A digital integration hub (DIH) will provide the competitive edge an organization needs in the efficient handling of data. It’s an application architecture that aggregates operational data into a low-latency data fabric and helps in digital transformation by offloading from legacy architecture and providing a decoupled API layer that effectively supports modern online applications. Data management entails the governance, organization, and administration of large, and possibly complex, datasets. The rapid growth of data pools has left unprepared companies scrambling to find solutions that will help keep them above water. Data in these pools originates from a myriad of sources, including websites, social media sites, and system logs. The variety in data types and their sheer size make data management a fairly complex undertaking.

Big Data Management for Big Wins

In today’s data-driven world, the capabilities to efficiently analyze and store data are vital factors in enhancing current business processes and setting up new ones. Data has gone beyond the realm of data analysts and into the business mainstream. As such, businesses should add data analysis as a core competency to ensure that the entire organization is on the same page when it comes to data strategy. Below are a few ways you can make big data work for you and your business.

Define Specific Goals

The data you need to capture will depend on your business goals, so it’s imperative that you know what these are and ensure that they are shared across the organization. Without definite and specific goals, you’ll end up with large pools of data and nowhere to use them. As such, it’s advisable to involve the entire team in mapping out a data strategy based on the company’s objectives. This strategy should be part of the organization’s overall business strategy to avoid the collection of irrelevant data that has no impact on business performance. Setting the direction early on will help set you up for long-term success.

Secure Your Data

Because companies have to contend with large amounts of data each day, storage and management can become very challenging. Security is also a main concern; no organization wants to lose its precious data after spending time and money processing and storing it. While keeping data accessible for analysis, you should also ensure that it’s kept secure at all times. When handling data, you should have security measures in place, such as firewall security, malware scanning, and spam filtering. Data security is especially important when collecting customer data to avoid violating data privacy regulations. Ideally, it should be one of the main considerations in data management because it’s a critical factor that could mean the difference between a successful venture and a problematic one.

Interlink Your Data

Different channels can be used to access a database, but this doesn’t necessarily mean that you should use several or all of them. There’s no need to deploy different tools for each application your organization uses. One of the ways to prevent miscommunication between applications and ensure that data is synchronized at all times is to keep data interlinked. Synchronicity of data is vital if your organization or team plans to use a single database. An in-memory data grid, cloud storage system, and remote database administrator are just some of the tools a company can use to interlink data.

Ensure Compliance With Audit Regulations

One thing that could be easy to overlook is how compliant systems are with audit regulations. A database audit is conducted to check on the actions of database users and managers. It is typically done for security purposes—to ensure that data or information can be accessed only by those authorized to do so. Adhering to audit regulations is a must, even for offsite database administrators, so it’s critical that they maintain compliant database components.

Be Prepared for Change

There have been significant changes in the field of data processing and management in recent years, which indicates a promising yet constantly changing landscape. To get ahead of the data analytics game, it’s vital that you keep up with current data trends. New tools and technology are made available at an almost regular pace, and keeping abreast of them will ensure that a business keeps its database up to date. It’s also important to be flexible and be able to pivot or restrategize at a moment’s notice so the business can adapt to change accordingly.

Big Data for the Long Haul

Traditional data warehouses and relational database platforms are slowly becoming things of the past. Big data analytics has changed the game, with data management moving away from being a complex, IT-team focused function and becoming a core competency of every business. Ensuring efficient data management means giving your business a competitive edge, and implementing the tips above ensures that a business manages its data effectively. Changes in data strategies are certain, and they may come sooner than later. Equipping your people with the appropriate skills and knowledge will ensure that your business can embrace change with ease.

On the difficulty of language: prerequisites for NLP with deep learning

This is the first article of my article series “Instructions on Transformer for people outside NLP field, but with examples of NLP.”

1 Preface

This section is virtually just my essay on language. You can skip it if you want to get down to more technical topics.

As I do not work in the natural language processing (NLP) field, I will not be able to provide very deep insight into this fast-changing deep learning field throughout my article series. However, I do at least understand that language is a difficult and profound subject, not only in engineering but also in many other fields of study. Some people might feel that these technologies are eliminating languages, or people’s motivation to understand other cultures. First of all, I would like you to keep in mind that I am not a geek trying to turn this multilingual world into a homogeneous one and rebuild the Tower of Babel with deep learning. I would say I am more interested in the social and anthropological sides of language.

I think you come to reflect more on languages once you have mastered at least one foreign language. As my mother tongue is Japanese, which is totally different from many Western languages in terms of characters and ambiguity, I understand that translating is not all there is to learning a language. Each language has unique characteristics, and I believe they more or less influence one’s personality. For example, many Western languages make the verb, that is, the conclusion of a sentence, clear early in the sentence. That is also true of Chinese, I have heard. In Japanese, however, the conclusion comes at the end, which tends to give the impression that Japanese people are obscure or indecisive. Also, Japanese sentences usually omit their subjects. In German as well, the conclusion of a sentence tends to come at the end, but I am almost 100% sure that no Japanese person would feel that German speakers make things unclear. I think that comes from the structure of the German language, which tends to make the number, the verb, and the relations between words crystal clear.

Source: https://twitter.com/nakamurakihiro

Let’s take an example to see how ambiguous Japanese can be. The Japanese sentence 「頭が赤い魚を食べる猫」 can be interpreted in five ways, depending on where you put the emphasis.

Common sense tells you that the sentence is most likely to mean one of the first two cases, but it can in fact mean any of the five. There might be similarly ambiguous sentences in other languages, but I would bet few languages can be as ambiguous as Japanese. Also, as you can see from the last two interpretations, you can omit subjects in Japanese. This is nothing exceptional: Japanese people usually leave out subjects in normal conversation. And when you read classical Japanese, which Japanese high school students have to do just as Western students learn some classical Latin, the writings omit subjects even more frequently.

*Interestingly, however, we have a rich vocabulary of subjects. The subject “I” can be translated as 「私」、「僕」、「俺」、「自分」、「うち」 etc., depending on your personality, who you are talking to, and when the text was written.

I believe one can see the world only within the framework of one’s language, and it seems one’s personality changes depending on the language one uses. I am not sure whether the language determines how its speakers think, or whether how they think shapes the language. But at least I would like you to keep in mind that if you translate a conversation, for example a random conversation at a bar in Berlin, into Japanese, it would sound Japanese linguistically, but not anthropologically. A random conversation in Berlin is like playing catch, where the ball is “your opinion.” Normal conversations between Japanese people, on the other hand, are more like the “resonance” of several tuning forks. They do their best to show that they are listening to each other, by nodding excessively or simply repeating “Really?”, but usually hardly any constructive dialogue takes place.

*I sometimes feel you do not even need deep learning to simulate most of such Japanese conversations. A few lines of Python would be enough.

My point is that this article series is mainly going to cover only a few NLP techniques in the deep learning field: the sequence-to-sequence model (seq2seq) and, especially, the Transformer. They are, at least for now, just mathematical models that map a small part of this profound field of language (as far as I can cover in this series). But still, examples drawn from language will definitely help you understand the Transformer model in the long run.

2 Tokens and word embedding

*Throughout my article series, “words” just means the normal words you use in daily life, while “tokens” means a more general unit in NLP tasks. For example, the word “Transformer” might be denoted as a single token “Transformer,” or as a combination of the two tokens “Trans” and “former.”

One challenging part of handling language data is its encodings. If you started learning programming in a language other than English, you may have run into trouble with different keyboard layouts or character sets, and comments written in your native language are sometimes unreadable in some software. You can easily get away with this by using only English, but in NLP you have to deal with the difficulty seriously. How to encode characters in each language is a first obstacle of NLP. In this article series we are going to rely on a library named BPEmb, which provides pre-trained subword embeddings in various languages, so you do not have to worry much about encodings for languages all over the world.
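
To give a rough feel for it, the snippet below sketches how BPEmb can be loaded and used to split a sentence into subword tokens and map them to vectors, following the usage shown in the BPEmb documentation; the vocabulary size and dimension chosen here are arbitrary example values, and the first call downloads the pre-trained files.

# A rough sketch of subword tokenization and embedding with BPEmb.
# The vocabulary size (vs) and embedding dimension (dim) are arbitrary example values.
from bpemb import BPEmb

bpemb_en = BPEmb(lang="en", vs=10000, dim=100)    # downloads pre-trained English subword embeddings

tokens = bpemb_en.encode("This is Stratford")     # subword tokens, e.g. ['▁this', '▁is', '▁strat', 'ford']
vectors = bpemb_en.embed("This is Stratford")     # numpy array of shape (number_of_subwords, 100)

print(tokens)
print(vectors.shape)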

In the first section, you might have noticed that the Japanese sentence is not separated by spaces as in Western languages. This is also true of Chinese, which means we need the additional task of splitting such sentences into proper chunks of words. This is not only a matter of engineering but also touches on several linguistic questions. I also think many people are not very conscious of how sentences in their native language are grammatically segmented.

The next point is that, unlike other scientific data such as temperature, velocity, voltage, or air pressure, language itself is not measured as numerical data. Thus, in order to process language, including English, you first have to map it to numerical data, and after some processing you need to map the output numerical data back into language. This section is mainly about one-hot encoding and word embedding, the two main ways to convert words/tokens into numerical data.

You might have learnt about word embedding to some extent already, but I hope you can gain a richer insight into this topic through this article.

2.1 One-hot encoding

One-hot encoding is the most straightforward way to encode words/tokens. Assume that you have a dictionary of size |\mathcal{V}| containing words from “a”, “ablation”, “actually” to “zombie”, “?”, “!”.

Mathematically, in order to choose one word out of those |\mathcal{V}| words, all you need is a |\mathcal{V}|-dimensional vector in which one element is 1 and all the others are 0. When you want to choose word No. i, which is “indeed” in the example below, its corresponding one-hot vector is \boldsymbol{v} = (0, \dots, 1, \dots, 0 ), where only the No. i element is 1. One-hot encoding is easy to understand, and that is all there is to it. It is easy to imagine that people have come up with more sophisticated and better ways to encode words, and one major way to do that is word embedding.
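
As a minimal sketch, one-hot encoding takes only a few lines of NumPy; the tiny vocabulary below is purely illustrative, standing in for a real dictionary of size |\mathcal{V}|.

import numpy as np

# Toy vocabulary; in practice |V| is tens of thousands of words/tokens.
vocabulary = ["a", "ablation", "actually", "indeed", "zombie", "?", "!"]
word_to_index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word):
    """Return the |V|-dimensional one-hot vector of `word`."""
    v = np.zeros(len(vocabulary))
    v[word_to_index[word]] = 1.0
    return v

print(one_hot("indeed"))   # [0. 0. 0. 1. 0. 0. 0.]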

2.2 Word embedding

Source: Francois Chollet, Deep Learning with Python,(2018), Manning

Word embedding is actually related to one-hot encoding, and if you understand how to train a simple neural network, for example densely connected layers, you will understand word embedding easily. The key idea of word embedding is denoting each token with a D-dimensional vector, whose dimension D is smaller than the vocabulary size |\mathcal{V}|. The elements of the resulting word embedding vector are real values, not just 0 or 1, so you can obviously encode a much richer variety of tokens with such vectors. The figure on the left is from “Deep Learning with Python” by François Chollet, and I think it is an almost perfect and simple comparison of one-hot encoding and word embedding. The question is how to get such convenient vectors, and the answer is simple: you only have to train a network whose inputs are one-hot vectors over the vocabulary.

The figure below is a simplified model of the word embedding of a certain word. When the word is input into a neural network, only the corresponding element of the one-hot vector is 1, which virtually means the very first input layer consists of a single neuron whose value is 1. That single neuron propagates to the next D-dimensional embedding layer, and its weights are the very values that most study materials call “an embedding vector.”

When you input each word into a certain network, for example an RNN or a Transformer, you map the input one-hot vector into the embedding layer/vector. The examples in the figure show how inputs are made when the input sentences are “You’ve got the touch” and “You’ve got the power.” Assume that you have a one-hot-encoding dictionary whose vocabulary is {“the”, “You’ve”, “Walberg”, “touch”, “power”, “Nights”, “got”, “Mark”, “Boogie”}, and that the dimension of the word embedding is 6. In this case |\mathcal{V}| = 9 and D = 6. When the input is “You’ve got the touch” or “You’ve got the power”, you feed in the one-hot vectors corresponding to “You’ve”, “got”, “the”, “touch” or “You’ve”, “got”, “the”, “power” sequentially, one per time step t.
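
To make the shapes concrete, here is a minimal NumPy sketch of that lookup with the toy vocabulary above (|\mathcal{V}| = 9, D = 6); the embedding matrix is random here, standing in for weights that would normally be learned.

import numpy as np

vocabulary = ["the", "You've", "Walberg", "touch", "power", "Nights", "got", "Mark", "Boogie"]
word_to_index = {w: i for i, w in enumerate(vocabulary)}

V, D = len(vocabulary), 6                       # |V| = 9, embedding dimension D = 6
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(V, D))      # in practice these weights are learned

def embed(word):
    # Multiplying a one-hot vector by the weight matrix just selects one row,
    # so an embedding lookup is simply a row selection.
    one_hot = np.zeros(V)
    one_hot[word_to_index[word]] = 1.0
    return one_hot @ embedding_matrix           # identical to embedding_matrix[word_to_index[word]]

sentence = ["You've", "got", "the", "touch"]
embedded = np.stack([embed(w) for w in sentence])   # shape (4, 6): one row per time step t
print(embedded.shape)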

In order to get the word embedding of a certain vocabulary, you just need to train the network. We know that the words “actually” and “indeed” are used in similar ways in writing, so when we propagate those words into the embedding layer, we can expect their embedding vectors to be similar. This is how we can mathematically obtain an effective word embedding for a given vocabulary.

More interestingly, if word embedding is properly trained, you can mathematically “calculate” words. For example, \boldsymbol{v}_{king} - \boldsymbol{v}_{man} + \boldsymbol{v}_{woman} \approx \boldsymbol{v}_{queen}, \boldsymbol{v}_{Japan} - \boldsymbol{v}_{Tokyo} + \boldsymbol{v}_{Vietnam} \approx \boldsymbol{v}_{Hanoi}.

*I have tried this type of calculation on several pre-trained word embeddings, but none of them seemed to work very well. You should at least keep in mind that the relations a word embedding learns between words are more complicated than such simple linear ones.
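
For completeness, this kind of “word arithmetic” is usually checked with cosine similarity over pre-trained vectors; the vectors below are random placeholders, so the score is meaningless here, and real experiments would load embeddings trained with word2vec, GloVe, fastText or similar.

import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder "pre-trained" embeddings (random); only the procedure is illustrated here.
rng = np.random.default_rng(1)
vectors = {w: rng.normal(size=50) for w in ["king", "man", "woman", "queen"]}

analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print(cosine_similarity(analogy, vectors["queen"]))   # close to 1.0 only if the analogy holds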

I should explain word embedding techniques such as word2vec in detail, but the main focus of this article series is not NLP itself, so the points mentioned above are enough to understand the Transformer model with NLP examples in the upcoming articles.

 

3 Language model

The language model is one of the most straightforward but crucial ideas in NLP. It is also a big topic, so this article covers only the basic points. A language model is a mathematical model of the probability of which word comes next, given a context. For example, if you have the sentence “In the lecture, he opened a _.”, a language model predicts what comes in the “_” slot. Obviously this is contextual: if you are talking about university students in general, “_” would be “textbook,” but if you are talking about Japanese universities, especially liberal arts departments, “_” would more likely be “smartphone.” I think most of you use a language model every day: when you type something on your computer or smartphone, you constantly see text predictions, and your spelling or grammatical errors may even be corrected. That is language modelling. You can build language models in several ways, such as n-gram models and neural language models, but in this article I can explain only the general formulation of such models.

*I am not sure which algorithm is used in which services. That must be too fast changing and competitive for me to catch up.

As I mentioned in my first article series on RNNs, a sentence is usually processed as sequence data in NLP. A single sentence is denoted as \boldsymbol{X} = (\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau)}), a list of vectors. The vectors are usually embedding vectors, and (t) is the index of the position of the token. For example, the sentence “You’ve got the power” can be expressed as \boldsymbol{X} = (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)}, \boldsymbol{x}^{(4)}), where \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)}, \boldsymbol{x}^{(4)} denote “You’ve”, “got”, “the”, “power” respectively. In this case \tau = 4.

In practice a sentence \boldsymbol{X} usually includes two tokens BOS and EOS at the beginning and the end of the sentence. They mean “Beginning Of Sentence” and “End Of Sentence” respectively. Thus in many cases \boldsymbol{X} = (\boldsymbol{BOS} , \boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau)}, \boldsymbol{EOS} ). \boldsymbol{BOS} and \boldsymbol{EOS} are also both vectors, at least in the Tensorflow tutorial.

P(\boldsymbol{X}) = P(\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau)}, \boldsymbol{EOS}) is the probability of incidence of the sentence. But it is easy to imagine that it would be very hard to directly calculate how likely the sentence \boldsymbol{X} is out of all possible sentences; I would rather say it is impossible. Thus, in NLP we instead calculate the probability P(\boldsymbol{X}) as a product of the probabilities of incidence of each word, given all the words so far. When you have the words (\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(t-1)}) so far, the probability of incidence of \boldsymbol{x}^{(t)} given that context is P(\boldsymbol{x}^{(t)}|\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(t-1)}). P(\boldsymbol{BOS}) is the probability of the sentence \boldsymbol{X} being (\boldsymbol{BOS}), and the probability of \boldsymbol{X} being (\boldsymbol{BOS}, \boldsymbol{x}^{(1)}) can be decomposed as P(\boldsymbol{BOS}, \boldsymbol{x}^{(1)}) = P(\boldsymbol{x}^{(1)}|\boldsymbol{BOS})P(\boldsymbol{BOS}).

Just as well P(\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) = P(\boldsymbol{x}^{(2)}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) P( \boldsymbol{BOS}, \boldsymbol{x}^{(1)})= P(\boldsymbol{x}^{(2)}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) P(\boldsymbol{x}^{(1)}| \boldsymbol{BOS}) P( \boldsymbol{BOS}).

Hence, the general probability of incidence of a sentence \boldsymbol{X} is P(\boldsymbol{X})=P(\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \dots, \boldsymbol{x}^{(\tau -1)}, \boldsymbol{x}^{(\tau)}, \boldsymbol{EOS}) = P(\boldsymbol{EOS}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau)}) P(\boldsymbol{x}^{(\tau)}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau - 1)}) \cdots P(\boldsymbol{x}^{(2)}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) P(\boldsymbol{x}^{(1)}| \boldsymbol{BOS}) P(\boldsymbol{BOS}).

Let \boldsymbol{x}^{(0)} be \boldsymbol{BOS} and \boldsymbol{x}^{(\tau + 1)} be \boldsymbol{EOS}. Plus, let P(\boldsymbol{x}^{(t+1)}|\boldsymbol{X}_{[0, t]}) denote P(\boldsymbol{x}^{(t+1)}|\boldsymbol{x}^{(0)}, \dots, \boldsymbol{x}^{(t)}); then P(\boldsymbol{X}) = P(\boldsymbol{x}^{(0)})\prod_{t=0}^{\tau}{P(\boldsymbol{x}^{(t+1)}|\boldsymbol{X}_{[0, t]})}. Language models calculate which word comes next, step by step, in this way.
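
As a small sketch of this decomposition, once you have any function that returns P(\boldsymbol{x}^{(t+1)}|\boldsymbol{X}_{[0, t]}), the probability of a whole sentence is just the product over time steps; the uniform “model” below is a placeholder, and P(\boldsymbol{BOS}) is taken to be 1.

import math

def sentence_log_probability(tokens, cond_prob):
    """Sum of log P(x^(t+1) | x^(0), ..., x^(t)) over a sentence wrapped in BOS/EOS."""
    sequence = ["BOS"] + tokens + ["EOS"]       # x^(0) = BOS, x^(tau+1) = EOS, P(BOS) assumed to be 1
    log_p = 0.0
    for t in range(1, len(sequence)):
        context, next_token = sequence[:t], sequence[t]
        log_p += math.log(cond_prob(next_token, context))
    return log_p

# Placeholder language model: every next token is equally likely over a 9-word vocabulary.
uniform_model = lambda next_token, context: 1.0 / 9

print(sentence_log_probability(["You've", "got", "the", "power"], uniform_model))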

Here’s a question: how would you evaluate a language model?

I would say the answer is that, when the language model generates words, the more confident the language model is, the better it is. Given a context, when the distribution over the next word is concentrated on a certain word, we can say the language model is confident about which word comes next in that context.

*For some people, it would be more understandable to call this “entropy.”

Let’s take the vocabulary {“the”, “You’ve”, “Walberg”, “touch”, “power”, “Nights”, “got”, “Mark”, “Boogie”} as an example. Assume that P(\boldsymbol{X}) = P(\boldsymbol{BOS}, \boldsymbol{You've}, \boldsymbol{got}, \boldsymbol{the}, \boldsymbol{touch}, \boldsymbol{EOS}) = P(\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)}, \boldsymbol{x}^{(4)}, \boldsymbol{EOS}) = P(\boldsymbol{x}^{(0)})\prod_{t=0}^{4}{P(\boldsymbol{x}^{(t+1)}|\boldsymbol{X}_{[0, t]})}. Given a context (\boldsymbol{BOS}, \boldsymbol{x}^{(1)}), the probability of incidence of \boldsymbol{x}^{(2)} is P(\boldsymbol{x}^{(2)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}). In the figure below, the distribution on the left is less confident because the probability mass is spread widely, whereas the one on the right is more confident that the next word is “got” because the distribution is concentrated on “got”.

*You have to keep it in mind that the sum of all possible probability P(\boldsymbol{x}^{(2)} | \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) is 1, that is, P(\boldsymbol{the}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) + P(\boldsymbol{You've}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) + \cdots + P(\boldsymbol{Boogie}| \boldsymbol{BOS}, \boldsymbol{x}^{(1)}) = 1.

While the language model is generating the sentence “BOS You’ve got the touch EOS”, it is better if the language model stays confident. If it is confident, P(\boldsymbol{X}) = P(\boldsymbol{BOS}) P(\boldsymbol{x}^{(1)}|\boldsymbol{BOS}) P(\boldsymbol{x}^{(2)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}) P(\boldsymbol{x}^{(3)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) P(\boldsymbol{x}^{(4)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)}) P(\boldsymbol{EOS}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)}, \boldsymbol{x}^{(4)}) gets higher. Thus (-1) \{ log_{b}{P(\boldsymbol{BOS})} + log_{b}{P(\boldsymbol{x}^{(1)}|\boldsymbol{BOS})} + log_{b}{P(\boldsymbol{x}^{(2)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)})} + log_{b}{P(\boldsymbol{x}^{(3)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})} + log_{b}{P(\boldsymbol{x}^{(4)}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)})} + log_{b}{P(\boldsymbol{EOS}|\boldsymbol{BOS}, \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \boldsymbol{x}^{(3)}, \boldsymbol{x}^{(4)})} \} gets lower, where usually b=2 or b=e.

This is how we measure how confident a language model is, and the indicator of this confidence is called perplexity. Assume that you have an evaluation data set \mathcal{D} = (\boldsymbol{X}_1, \dots, \boldsymbol{X}_n, \dots, \boldsymbol{X}_{|\mathcal{D}|}), which is composed of |\mathcal{D}| sentences in total. Each sentence \boldsymbol{X}_n has \tau^{(n)} tokens in total, excluding \boldsymbol{BOS} and \boldsymbol{EOS}. And let |\mathcal{V}| be the size of the vocabulary of the language model. Then the perplexity of the language model is b^z, where z = \frac{-1}{N}\sum_{n=1}^{|\mathcal{D}|}{\sum_{t=0}^{\tau ^{(n)}}{log_{b}P(\boldsymbol{x}_{n}^{(t+1)}|\boldsymbol{X}_{n, [0, t]})}}, with N = \sum_{n=1}^{|\mathcal{D}|}(\tau^{(n)} + 1) the total number of predicted tokens in \mathcal{D}. The base b is usually 2 or e.

For example, assume that \mathcal{V} is the vocabulary {“the”, “You’ve”, “Walberg”, “touch”, “power”, “Nights”, “got”, “Mark”, “Boogie”}. Also assume that the evaluation data set for the perplexity of a language model is \mathcal{D} = (\boldsymbol{X}_1, \boldsymbol{X}_2), where \boldsymbol{X}_1 = (\boldsymbol{You've}, \boldsymbol{got}, \boldsymbol{the}, \boldsymbol{touch}) and \boldsymbol{X}_2 = (\boldsymbol{You've}, \boldsymbol{got}, \boldsymbol{the}, \boldsymbol{power}). In this case |\mathcal{V}| = 9, |\mathcal{D}| = 2, and N = 10 (five predicted tokens per sentence, including \boldsymbol{EOS}). I have already shown above how to accumulate the log probabilities of the sentence “You’ve got the touch”; you just need to do the same for the other sentence, “You’ve got the power”, and then you can compute the perplexity of the language model.
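
A minimal sketch of that computation, assuming we already have the conditional probability the language model assigns to each predicted token (the probability values below are made up):

import math

def perplexity(per_token_probs, base=2.0):
    """per_token_probs[n][t] is P(x_n^(t+1) | X_n,[0,t]) for sentence n, including the EOS prediction.
    Perplexity is base ** (-(1/N) * sum of log-probabilities), with N the total number of predictions."""
    n_predictions = sum(len(sentence) for sentence in per_token_probs)
    log_sum = sum(math.log(p, base) for sentence in per_token_probs for p in sentence)
    return base ** (-log_sum / n_predictions)

# Hypothetical probabilities for "You've got the touch" and "You've got the power"
# (five predictions each: four words plus EOS), so N = 10.
eval_probs = [
    [0.4, 0.6, 0.7, 0.3, 0.8],
    [0.4, 0.6, 0.7, 0.2, 0.8],
]
print(perplexity(eval_probs))   # lower values mean the model is more confident on this data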

*If the network is not properly trained, it can also be confident while generating wrong outputs, and because the metric only measures confidence, such a network could still achieve low perplexity on its own generations. I’m sorry, but I don’t know how to tackle this problem here. Please let me put it aside, and let’s get down to the Transformer model soon.

Appendix

Let’s see how word embedding is implemented with a very simple example from the official Tensorflow tutorial. It is a simple binary classification task on the IMDb dataset. The dataset is composed of comments on movies by movie critics, and you only have to classify whether a comment is positive or negative about the movie. For example, given the input “To be honest, Michael Bay is terrible as an action film maker. You cannot understand what is going on during combat scenes, and his movies rely too much on advertisements. I got a headache when Mark Walberg used a Chinese credit card in Texas. However he is very competent when it comes to humorous scenes. He is very talented as a comedy director, and I have to admit I laughed a lot.”, the neural network has to judge whether the statement is positive or negative.

This network just takes an average of the input embedding vectors and regresses it into a one-dimensional value from 0 to 1. The shape of the embedding layer is (8185, 16). Weights of neural networks are usually implemented as matrices, and you can see that each row of this matrix corresponds to the embedding vector of one token.
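
A sketch of that architecture in Keras looks roughly like the following; the vocabulary size of 8,185 and embedding dimension of 16 match the shape quoted above, while the remaining settings are illustrative rather than the tutorial’s exact code.

import numpy as np
import tensorflow as tf

# Rough sketch of the averaging classifier described above:
# integer token ids -> 16-dimensional embeddings -> mean over the sentence -> sigmoid score.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=8185, output_dim=16),   # embedding layer of shape (8185, 16)
    tf.keras.layers.GlobalAveragePooling1D(),                    # average of the input embedding vectors
    tf.keras.layers.Dense(1, activation="sigmoid"),              # positive/negative score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

dummy_batch = np.array([[12, 5, 978, 3, 0, 0]])   # one padded review as token ids
print(model.predict(dummy_batch).shape)           # (1, 1): one score per review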

*It is easy to imagine that this technique is problematic, since the network simply takes the mean of the input embedding vectors. That means that if the input sentence includes relatively many tokens with negative meanings, it is inclined to be classified as negative. For example, the review “This masterpiece is a dark comedy by Charlie Chaplin which depicted the stupidity of the evil tyrant gaining power at the time. It thoroughly mocked Germany of the time as an absurd group of fanatics, but such propaganda could never have been made until ‘Casablanca.’” could be classified as negative, because only the word “masterpiece” is positive as a token, while there are many more words with negative meanings in themselves.

The official Tensorflow tutorial provides a visualization of the word embedding with Embedding Projector, but I would like you to take more control over the data yourself. Just copy and paste the code below, after installing the necessary libraries, and you will get a map of the vocabulary used in the text classification task. You will probably not find any clear tendency in the clusters of tokens. You can also try other dimensionality reduction methods, for example from scikit-learn, to get different maps of the vocabulary.
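
The following is a minimal sketch of such a script, assuming a trained Keras model like the one above whose first layer is the embedding layer, and using scikit-learn’s t-SNE as the dimensionality-reduction method; the variable names and the optional `vocabulary` list are illustrative.

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# `model` is the trained classifier sketched above (hypothetical here).
embedding_matrix = model.layers[0].get_weights()[0]        # shape (8185, 16)

# t-SNE on ~8,000 points can take a few minutes.
reduced = TSNE(n_components=2, random_state=0).fit_transform(embedding_matrix)

plt.figure(figsize=(10, 10))
plt.scatter(reduced[:, 0], reduced[:, 1], s=2, alpha=0.5)
# If a `vocabulary` list (tokens in the same order as the embedding rows) is available,
# a few tokens can be annotated like this:
# for i in range(0, len(vocabulary), 200):
#     plt.annotate(vocabulary[i], (reduced[i, 0], reduced[i, 1]), fontsize=8)
plt.title("2-D map of the learned word embedding")
plt.show()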

[References]

[1] “Word embeddings” Tensorflow Core
https://www.tensorflow.org/tutorials/text/word_embeddings

[2] Tsuboi Yuuta, Unno Yuuya, Suzuki Jun, “Machine Learning Professional Series: Natural Language Processing with Deep Learning,” (2017), pp. 43-64, 72-85, 91-94
坪井祐太、海野裕也、鈴木潤 著, 「機械学習プロフェッショナルシリーズ 深層学習による自然言語処理」, (2017), pp. 43-64, 72-85, 191-193

[3] “Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 8 – Translation, Seq2Seq, Attention”, stanfordonline, (2019)
https://www.youtube.com/watch?v=XXtpJxZBa2c

[4] Francois Chollet, Deep Learning with Python, (2018), Manning, pp. 178-185

[5] “2.2. Manifold learning,” scikit-learn
https://scikit-learn.org/stable/modules/manifold.html

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward but as precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

Turbocharge Business Analytics With In-memory Computing

One of the customer traits that has been gradually diminishing through the years is patience; if a customer-facing website or application doesn’t deliver real-time or near-instant results, that can be a reason for a customer to look elsewhere. This trend has pushed companies to turn to in-memory computing to get the speed needed to address customer demands in real time. In-memory computing simplifies access to multiple data sources to provide super-fast performance that’s thousands of times faster than disk-based storage systems. By storing data in RAM and processing it in parallel against the full dataset, in-memory computing solutions allow for real-time insights that lead to informed business decisions and improved performance.

The in-memory computing solutions market has been on the rise in recent years because it has been heralded as a platform that will accelerate IT modernization. In-memory data grids, in particular, show great promise because they address the main limitation of an in-memory relational database: while the latter is designed to scale up, the former is designed to scale out. This scalability is one of the main draws of an in-memory data grid, since a scale-up architecture is not sustainable in the long term and will always hit a breaking point. In-memory data grids, on the other hand, benefit from horizontal scalability and computing elasticity: scaling an in-memory data grid is as simple as adding nodes to a cluster and removing them when they are no longer needed. This is especially useful for businesses that need to manage hundreds of terabytes of data at speed across multiple networked computers in geographically distributed data centers.

Since big data is complex and fast-moving, keeping data synchronized across data centers is vital to preserving data integrity. Keeping data in memory removes the bottleneck caused by constant access to disk-based storage and allows applications and their data to collocate in the same memory space. Speed and efficiency are further improved by keeping frequently accessed data in memory and the rest on disk, which lets data reside both in memory and on disk and allows the total amount of data to exceed the amount of available memory.

Future-proofing Businesses With In-memory Computing

Data analytics is as much a part of every business as other marketing and business intelligence tools. As data constantly grows at an exponential rate, in-memory computing serves as an enabler of data analytics by providing speed, high availability, and straightforward scalability. Speeds more than 100 times faster than other solutions enable in-memory computing to deliver real-time insights that are applicable in a host of industries and use cases.

Location-based Marketing

A report from 2019 shows that location-based marketing helped 89% of marketers increase sales, 86% grow their customer base, and 84% improve customer engagement. Location data can be leveraged to identify patterns of behavior by analyzing frequently visited locations. By understanding why certain customers frequent specific locations and knowing when they are there, you can better target your marketing messages and make more strategic customer acquisitions. Location data can also be used as a demographic identifier to help you segment your customers and tailor your offers and messaging accordingly.

Fraud Detection

In-memory computing helps improve operational intelligence by detecting anomalies in transaction data immediately. Through high-speed analysis of large amounts of data, potential risks are detected early on and addressed as soon as possible. Transaction data is fast-moving and changes frequently, and in-memory computing is equipped to handle data as it changes. This is why it’s an ideal platform for payment processing; it helps make comparisons of current transactions with the history of all transactions on record in a matter of seconds. Companies typically have several fraud detection measures in place, and in-memory computing allows running these algorithms concurrently without compromising overall system performance. This ensures responsiveness of systems despite peak volume levels and avoids interruptions to customer service.

Tailored Customer Experiences

The real-time insights delivered by in-memory computing help personalize experiences based on customer data. Because customer experiences are time-sensitive, processing and analyzing data at very high speed is vital to capturing the real-time event data that can be used to craft the best possible experience for each customer. Without in-memory computing, getting real-time data and the other information needed to ensure a seamless customer experience would be nearly impossible.

Real-time data analytics helps provide personalized recommendations based on both existing and new customer data. By looking at historical data like previously visited pages and comparing them with newer data from the stream, businesses can craft the proper messaging and plan the next course of action. The anticipation and forecasting of customers’ future actions and behavior is the key to improving conversion rates and customer satisfaction—ultimately leading to higher revenues and more loyal customers.

Conclusion

Big data is the future, and companies that don’t use it to their advantage will find it hard to compete in an ever-connected world that demands results in an instant. Processing and analyzing data will only become more complex and challenging over time, and for this reason in-memory computing is a solution companies should consider. Aside from improving the business from within, it will also help drive customer acquisition and revenue while providing a viable low-latency, high-throughput platform for high-speed data analytics.