The Job Profile of the Data Engineer

Why data engineering has long been stealing the show from data science in terms of importance and career prospects, while itself undergoing constant change.

What a data engineer really needs to be able to do

The data scientist as the sexiest job of the 21st century? Perhaps, since the role has its very own appeal, not least because of its interface function between technology and domain expertise. Yet the spotlight of the coming years already belongs to another role in the data value chain, which is also reflected in salaries.

Many companies are currently on their way to becoming a data-driven business: a style of management that bases its decisions on transparent data foundations and automates operational processes as far as possible using business intelligence, data science, and automation with deep learning and RPA. The solutions to these challenges are often sought primarily from experts in process automation and data science, yet success depends far more on obtaining valid data foundations, and thus on a very different key position in the workflow of data-driven decision processes: the data engineer.

Data engineer, the most sought-after job of the 21st century?

The job of the data scientist, by contrast, remains as popular as ever among students and graduates of STEM subjects, as the daily rush of graduates from data science programs onto such job postings shows. Nor is there still a real shortage of international applicants with a focus on statistics and machine learning. A solidly trained and, ideally, German-speaking data scientist is still rarely on offer, but good candidates overall are no longer that difficult to find. For years, plenty of qualification programs for students and professionals have been available inexpensively and flexibly online, without any loss of standing for these training and continuing-education measures.

What a data scientist has to cover in terms of expertise was discussed in detail in our look at the Data Science Knowledge Stack.

But what good is a data scientist who does not have access to the data needed for the task at hand? Preparing and presenting their projects is certainly part of every data scientist's job. Procuring and managing large volumes of data in an enterprise-grade architecture, however, is generally not their focus, and they often lack the permissions for it within an enterprise IT environment. The need for data acquisition and preparation becomes even more concrete in business intelligence, which requires fixed structures such as a data warehouse for sustainable reporting.

The profile of the data engineer: big data high-tech

Even though data engineering is still treated somewhat neglectfully by universities and training providers, the market sketches the use and the resulting requirements profile of a data engineer quite clearly. The core scenarios for these data engineers ("Dateningenieure", a perfectly acceptable term in German as well) are building data warehouse and data lake systems, nowadays mostly on cloud platforms. They develop these systems to tap internal and external data sources and prepare the resulting data, both structurally and in terms of content, so that other employees in the company can use it effectively.

Enabler for business intelligence, process mining and data science

No data engineer may lose sight of the actual consumer of the data, for whom the data is merged, cleansed, and brought into the target format according to all the rules of the craft. Classically, data engineers work on data warehousing for business intelligence or process mining, which requires more and more event logs. A data warehouse is the much larger, submerged part of the business intelligence (BI) iceberg that supplies the reports with qualified data. This iceberg analogy applies to data engineering as a whole: it is mostly invisible to the end users at the top of the data food chain, who only see the finished analyses and not the data pools prepared for them.

Figure 1 - Data engineering is at the center of every data platform. Whether for data science, BI, process mining or even RPA, data delivery requires good data engineers who can dive all the way down to the cloud infrastructure.

Databases are the source and the destination of data engineers

Data rarely comes neatly structured in a single CSV file; it originates from one or more databases, each following its own rules. Business data, for example from ERP or CRM systems, resides in relational databases, often from Microsoft, Oracle, SAP or an open-source alternative. Particularly in vogue right now are the cloud-native databases BigQuery from Google, Redshift from Amazon and Synapse from Microsoft, as well as the cloud-agnostic database Snowflake. They are joined by databases such as PostgreSQL, MariaDB or Microsoft SQL Server as well as CosmosDB, and simpler cloud storage services such as Microsoft Blob Storage, Amazon S3 or Google Cloud Storage. Whichever database is the right choice for the company, nothing in data engineering works without SQL and an understanding of normalized data.
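To make that concrete, here is a minimal sketch in Python using the standard-library sqlite3 module. The customers and orders tables and their columns are purely hypothetical, but the join over a foreign key and the aggregation are exactly the kind of normalized-data thinking the job requires.

```python
import sqlite3

# Hypothetical, minimal example: two normalized tables linked via a foreign key.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        amount      REAL
    );
    INSERT INTO customers VALUES (1, 'ACME Corp'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 1200.0), (11, 1, 300.0), (12, 2, 99.0);
""")

# A typical data engineering task: aggregate facts per dimension entity.
rows = con.execute("""
    SELECT c.name, SUM(o.amount) AS total_revenue
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    GROUP BY c.name
    ORDER BY total_revenue DESC;
""").fetchall()

for name, total in rows:
    print(name, total)
```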

Other kinds of databases, so-called NoSQL databases, are based on document file formats, a column orientation or a graph orientation. Examples of widespread NoSQL databases are MongoDB, CouchDB, Cassandra or Neo4J. These databases do not exist merely as entertainment for bored nerds; they have very concrete use cases in which they each offer the best read or write performance.

A data engineer must therefore be able to handle different database systems, some of which are native to different cloud platforms.

Data engineers need hacker qualities

Once data is available in a database, analysts with access can run simple analyses directly on the database. But how do we get the data into our specialized analysis tools? This is where the engineer has to step in and export the data from the database. For direct data connections, APIs come into play, i.e. interfaces such as REST, ODBC or JDBC, and a good data engineer needs programming skills, preferably in Python, to address these APIs. Some knowledge of socket connections and client-server architectures occasionally pays off as well. Furthermore, every data engineer should be familiar with symmetric and asymmetric encryption methods, since confidential data is usually involved. A minimum standard of security is part of data engineering and must not be left to data security experts alone; an affinity for network security or even penetration testing is a plus, but at the very least clean permission management belongs to the basic skill set. A lot of data is not stored in a structured database at all, but is so-called unstructured or semi-structured data from documents or internet sources. With methods such as web scraping and data crawling, as well as the automation of data retrieval, outstanding data engineers even demonstrate real hacker qualities.
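As a rough illustration of the REST part, the following Python sketch pulls records from a hypothetical HTTP endpoint with the requests library and loads them into a pandas DataFrame. The URL, token and response format are assumptions for illustration, not a real system's API.

```python
import requests
import pandas as pd

# Hypothetical endpoint and token; in practice both come from the source system's documentation.
BASE_URL = "https://erp.example.com/api/v1/invoices"
headers = {"Authorization": "Bearer <token>"}

response = requests.get(
    BASE_URL,
    headers=headers,
    params={"updated_since": "2023-01-01"},
    timeout=30,
)
response.raise_for_status()        # fail loudly on HTTP errors

records = response.json()          # assume the API returns a JSON array of invoice objects
df = pd.DataFrame.from_records(records)
print(df.head())
```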

Conductor of the data: orchestrating data flows

One of the core tasks of the data engineer is developing ETL pipelines to Extract data from its sources, Transform it into the desired target format and finally Load it into the target database. This may sound simple at first, but it becomes a real challenge when many ETL processes combine into entire ETL chains and networks that must still run performantly despite highly frequent data queries. The orchestration of data flows can usually be divided into several stages: from the source into the data warehouse, between the layers within the data warehouse, from the data warehouse into downstream systems, and even the flow of processed data back into the data warehouse (reverse ETL).
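A toy ETL pipeline, reduced to its three steps, might look like the following Python sketch. The file name, column names and target table are hypothetical placeholders for whatever the real sources and warehouse schema look like.

```python
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Extract: read raw order data from a (hypothetical) CSV export."""
    return pd.read_csv(path, parse_dates=["order_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: clean the rows and aggregate them into the target schema."""
    df = df.dropna(subset=["customer_id", "amount"])
    df["amount"] = df["amount"].astype(float)
    df["day"] = df["order_date"].dt.date
    daily = (
        df.groupby(["day", "customer_id"], as_index=False)["amount"]
          .sum()
          .rename(columns={"amount": "daily_revenue"})
    )
    return daily

def load(df: pd.DataFrame, con: sqlite3.Connection) -> None:
    """Load: write the result into the target table, replacing the previous run."""
    df.to_sql("fact_daily_revenue", con, if_exists="replace", index=False)

if __name__ == "__main__":
    con = sqlite3.connect("warehouse.db")
    load(transform(extract("orders_export.csv")), con)
```

In real pipelines, an orchestrator such as a scheduler or workflow engine would trigger and monitor steps like these; the sketch only shows the Extract-Transform-Load split itself.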

Right on the border with DevOps: automation in cloud architectures

In recent years, the requirements for data engineers have risen significantly: besides the actual management of data stores and data streams for analytical purposes, a data engineer is increasingly expected to manage resources in the cloud, at the very least the databases and ETL resources. Beyond that, there is a growing expectation to understand IT networks and to automate the whole interplay of resources as infrastructure as code. Automated deployment of data architectures via CI/CD pipelines is also turning the data engineer more and more into a DevOps engineer.
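Full infrastructure as code is usually done with declarative tools such as Terraform or CloudFormation, but even a small scripted example conveys the idea of provisioning resources from code instead of clicking through a console. The sketch below uses boto3; the bucket name and region are hypothetical placeholders.

```python
import boto3

# Minimal sketch: provision a storage resource from code.
s3 = boto3.client("s3", region_name="eu-central-1")

s3.create_bucket(
    Bucket="my-company-raw-data-landing-zone",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

# Versioning protects raw data against accidental overwrites.
s3.put_bucket_versioning(
    Bucket="my-company-raw-data-landing-zone",
    VersioningConfiguration={"Status": "Enabled"},
)
```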

Future and salary prospects

Compared to the data scientist, who needs a great deal of methodological understanding of data analysis, statistics and the domain under investigation, data engineers are more oriented towards tools and platforms. A data scientist who has understood deep learning can quickly apply that knowledge with both TensorFlow and PyTorch. A data engineer, on the other hand, works more intensively with the tools themselves, which evolve much faster over the years. A data engineer for the Google Cloud will need more time to get up to speed if they suddenly have to work on AWS or Azure.

In Germany, a data engineer starting out with good prior knowledge and some initial experience can expect a gross annual salary between 45,000 and 55,000 EUR. More than two years of concrete data engineering experience is gladly rewarded by companies with salaries between 50,000 and 80,000 EUR. Above that are usually only the data architects, who tend to be found in large companies and require a particularly high level of experience. Further career paths for data engineers are consulting careers or leadership positions.

Anyone who has hired a data engineer permanently should not feel too secure, however, since recruiters lie in wait for these qualified specialists around every corner of social media. Especially in metropolitan areas such as Berlin, by no means all companies manage to keep every data engineer employed for years. Given the wide choice of jobs and challenges, these data experts find it easy to proactively drive their salary increases through job changes.

6 Ways to Optimize Your Database for Performance

Knowing how to optimize your organization’s database for maximum performance can lead to greater efficiency, productivity, and user satisfaction. While it may seem challenging at first, there are a few easy performance tuning tips that you can get started with.

1. Use Indexing

Indexing is one of the core ways to give databases a performance boost. There are different ways of approaching indexing, but they all have the same goal: decreasing query wait time by making it easier to find and access data.

Indexes have a search key attached to a value or data reference. The index file will direct a query to a record, “bucket” of data, or group of data, depending on the indexing method used. Choosing a good indexing method for your specific needs will reduce strain on your system by making it much easier for data to be located, since a uniform, systematic organization is applied to the entire database.
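A minimal illustration with SQLite (chosen only because it ships with Python): the same aggregation query is planned as a full table scan without an index and as an index search once one exists. Table and column names are made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
con.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = ?"

# Without an index the planner has to scan the whole table ...
print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

# ... with an index on the search key it can jump straight to the matching rows.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
```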

2. Avoid Using Loops

Many coders learn early on that loops can be both useful and dangerous. It is all too easy to accidentally create an infinite loop and crash your whole system.

Loops are problematic when it comes to database performance because they often perform redundant work. That is not to say that loops should never be used; they are sometimes useful. It simply depends on the specific case, and removing or minimizing unnecessary loops will help increase performance.

For example, having SQL queries inside of loops is not generally advised, because the system is running the same query numerous times rather than just once. A good rule of thumb is that the more data you have in a loop, the slower it is going to be.
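The difference is easy to see in a sketch like the following, which assumes a hypothetical shop.db with an orders table: the first variant fires one query per loop iteration, the second pushes the whole loop into a single set-based statement.

```python
import sqlite3

con = sqlite3.connect("shop.db")  # hypothetical database with an `orders` table
customer_ids = [1, 2, 3, 4, 5]

# Anti-pattern: one query per loop iteration, i.e. N round trips to the database.
totals_slow = {}
for cid in customer_ids:
    row = con.execute("SELECT SUM(amount) FROM orders WHERE customer_id = ?", (cid,)).fetchone()
    totals_slow[cid] = row[0]

# Better: one set-based query, letting the database do the work in a single pass.
placeholders = ",".join("?" for _ in customer_ids)
rows = con.execute(
    f"SELECT customer_id, SUM(amount) FROM orders "
    f"WHERE customer_id IN ({placeholders}) GROUP BY customer_id",
    customer_ids,
).fetchall()
totals_fast = dict(rows)
```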

3. Get a Stronger CPU

This fix is a classic in computer science. A CPU with better specs will increase system performance. There are ways, like those above, to increase performance within your system’s organization and coding. However, if you find that your database is consistently struggling to keep up, your hardware might be in need of an upgrade.

Even if the CPU you have seems like it should be sufficient, a CPU that is more powerful than your minimum requirements will be able to handle waves of queries with ease. The more data you are working with and the more queries you need to manage, the stronger your CPU needs to be.

4. Defragment Data

Data defragmentation is a common solution for performance issues. When data gets accessed, written, and rewritten many times, it can get fragmented from all that copying. It is good practice to go in and clean things up on a regular basis.

One symptom of fragmented data is clogged memory, where tables are taking up more room than they should. Crammed memory, as discussed below, is another common cause of a low-performing database.

5. Optimize Queries

There are many ways to go about optimizing queries, depending on the indexing method and the specific needs of your database. When queries aren’t being handled efficiently, the whole system can get backed up, leading to longer wait times for query results. Causes may include duplicate or overlapping indexes and keys or queries that return data that isn’t relevant.

Optimizing queries can be a complex process, but there are some easy steps you can take to work out the best plan for your database and identify its specific inefficiencies.
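One such easy step is to request only what you need and let the database do the filtering, as in this sketch against a hypothetical orders table; an EXPLAIN statement (here SQLite's EXPLAIN QUERY PLAN) shows how the engine intends to execute a query before you start tuning it.

```python
import sqlite3

con = sqlite3.connect("shop.db")  # hypothetical database with an `orders` table

# Wasteful: pulls every column of every row and filters in application code.
all_rows = con.execute("SELECT * FROM orders").fetchall()
large_orders = [r for r in all_rows if r[3] > 1000]   # assumes column 3 is `amount`

# Leaner: return only the columns and rows that are actually needed,
# so the database can use indexes and ship far less data.
large_orders = con.execute(
    "SELECT order_id, customer_id, amount FROM orders WHERE amount > ?",
    (1000,),
).fetchall()

# Inspect the execution plan before and after adding indexes or rewriting the query.
print(con.execute(
    "EXPLAIN QUERY PLAN SELECT order_id FROM orders WHERE amount > ?", (1000,)
).fetchall())
```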

6. Optimize Memory

Another hardware fix that may help under-performing databases is additional memory space. Databases need some memory “wiggle room” to operate quickly. When your memory is nearly or completely full, things get backed up while the system struggles to find room for creating temporary files and moving things around. It’s a bit like trying to reorganize a living room that is packed floor-to-ceiling with boxes. You need plenty of empty floor space to maneuver and shuffle things around.

Databases work the same way. Increasing your database’s memory capacity will allow it more flexibility and operating room, reducing stress on the system so it can run more efficiently.

Keep It Simple

Many of the tips above focus on simplifying and cleaning up your database. Keep your coding as straightforward and easy-to-navigate as possible. Databases are all about accessing information, so your main priority should simply be to have a well-organized system. Keeping these tips in mind will help you do just that and get your database running at top-notch performance!

CAP Theorem

Understanding databases for storing, updating and analyzing data requires the understanding of the CAP Theorem. This is the second article of the article series Data Warehousing Basics.

Understanding NoSQL Databases by the CAP Theorem

The CAP theorem, also known as Brewer's theorem, was introduced by the computer scientist Eric Brewer at the Symposium on Principles of Distributed Computing in 2000. CAP stands for Consistency, Availability and Partition tolerance.

  • Consistency: Every read receives the most recent write or an error. Once a client writes a value to any server and gets a response, it is expected to get a fresh and valid value back from any server or node of the database cluster it reads from.
    Be aware that consistency in the CAP sense means something different from consistency in the ACID sense (relational consistency).
  • Availability: The database is not allowed to be unavailable because it is busy with requests. Every request received by a non-failing node in the system must result in a response. Whether you want to read or write you will get some response back. If the server has not crashed, it is not allowed to ignore the client’s requests.
  • Partition tolerance: Databases which store big data will use a cluster of nodes that distribute the connections evenly over the whole cluster. If this system has partition tolerance, it will continue to operate despite a number of messages being delayed or even lost by the network between the cluster nodes.

The CAP theorem states that a distributed system can only provide two out of the above three guarantees at the same time. Eric Brewer, the father of the CAP theorem, later put this limitation to two of three characteristics into perspective: "by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three." (Brewer, E., 2012).

CAP Theorem Triangle

To recap, when applying the CAP theorem to Big Data distributed solutions (such as NoSQL databases), it is important to reiterate that such distributed systems cannot guarantee all three characteristics (availability, consistency, and partition tolerance) at the same time.

Database systems designed to fulfill traditional ACID guarantees, such as relational database (management) systems (RDBMS), choose consistency over availability, whereas NoSQL databases are mostly designed according to the BASE philosophy, which prefers availability over consistency.

The CAP Theorem in the real world

Let's look at some examples to understand the CAP theorem further and see why we cannot build database systems that are consistent, partition tolerant and always available at the same time.

AP – Availability + Partition Tolerance

If we have achieved Availability (our databases always respond to our requests) as well as Partition Tolerance (all nodes of the database keep working even if they cannot communicate), then we cannot provide Consistency, because the nodes drift out of sync as soon as we write new information to one of them. Each node continues to accept database transactions separately, but cannot pass the transactions on to the other nodes to keep them synchronized. We therefore cannot fully guarantee system consistency. When the partition is resolved, AP databases typically resynchronize the nodes to repair all inconsistencies in the system.

A well-known real-world example of an AP system is the Domain Name System (DNS). This central network component is responsible for resolving domain names into IP addresses and focuses on the two properties of availability and failure tolerance. Thanks to the large number of servers, the system is available almost without exception. If a single DNS server fails, another one takes over. According to the CAP theorem, DNS is not consistent: if a DNS entry is changed, e.g. when a new domain has been registered or deleted, it can take a few days before this change is propagated through the entire system hierarchy and becomes visible to all clients.

CA – Consistency + Availability

Guaranteeing full Consistency and Availability is practically impossible in a system that distributes data over several nodes. We can keep databases spanning more than one node online and available, and we can keep the data consistent between these nodes, but the nature of computer networks (LAN, WAN) is that connections can get interrupted. This means we cannot guarantee Partition Tolerance, and therefore cannot guarantee that the whole database service is reliably online at all times.

Database management systems based on the relational database model (RDBMS) are a good example of CA systems. These database systems are primarily characterized by a high level of consistency and strive for the highest possible availability. In case of doubt, however, availability may decrease in favor of consistency. Reliability through distributing data over partitions, in order to keep data reachable even in the event of computer or network failure, plays a subordinate role here.

CP – Consistency + Partition Tolerance

If Consistency of the data is given, meaning that two or more nodes always contain the up-to-date information, and Partition Tolerance is given as well, meaning that we avoid any desynchronization of our data across the nodes, then we lose Availability as soon as a partition occurs between any two nodes. In most distributed systems, high availability is one of the most important properties, which is why CP systems tend to be a rarity in practice. These systems prove their worth particularly in the financial sector: banking applications that must reliably debit and transfer amounts of money depend on consistency and on reliability through consistent redundancy, so that incorrect postings can always be ruled out, even in the event of disruptions in the data traffic. If consistency and reliability cannot be guaranteed, the system may be unavailable to its users.

CAP Theorem Venn Diagram

Conclusion

The CAP theorem is still an important topic for data engineers and data scientists to understand, but many modern databases let us shift between the trade-offs within the CAP theorem. For example, Cosmos DB on Microsoft Azure offers many granular options for balancing consistency, availability and partition tolerance. A common misunderstanding of the CAP theorem concerns its non-absoluteness: "All three properties are more continuous than binary. Availability is continuous from 0 to 100 percent, there are many levels of consistency, and even partitions have nuances. Exploring these nuances requires pushing the traditional way of dealing with partitions, which is the fundamental challenge. Because partitions are rare, CAP should allow perfect C and A most of the time, but when partitions are present or perceived, a strategy is in order." (Brewer, E., 2012).

ACID vs BASE Concepts

Understanding databases for storing, updating and analyzing data requires the understanding of two concepts: ACID and BASE. This is the first article of the article series Data Warehousing Basics.

The ACID properties are applied to databases in order to fulfill enterprise requirements for reliability and consistency.

ACID is an acronym, and stands for:

  • Atomicity – Each transaction is either executed completely or does not happen at all. If the transaction is not finished, the process reverts the database back to the state before the transaction started. This ensures that all data in the database remains valid even when we execute large transactions composed of multiple statements (e.g. SQL) that update many rows. If one statement fails, the entire transaction is aborted and no changes are made (see the sketch after this list).
  • Consistency – Databases are governed by specific rules defined by table formats (data types) and table relations as well as further constructs such as triggers. The data stays reliable as long as transactions never endanger the structural integrity of the database. Therefore, it is not allowed to save data of different types into the same single column, to reuse primary key values that have already been written, or to delete data from a table that is strictly related to data in another table.
  • Isolation – Databases are multi-user systems where multiple transactions happen at the same time. With isolation, transactions cannot compromise the integrity of other transactions by interacting with them while they are still in progress. It guarantees that data tables end up in the same state when several transactions happen concurrently as when they happen sequentially.
  • Durability – The data related to a completed transaction persists even in cases of network or power outages. Databases that guarantee durability store inserted or updated data permanently, keep a log of all executed and planned transactions, and ensure availability of the data committed via transaction even after a power failure or other system failure. If a transaction fails to complete successfully because of a technical failure, it does not alter the targeted data.
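As a small illustration of atomicity, the following Python/SQLite sketch wraps a two-step transfer in one transaction. The accounts table and its CHECK constraint are hypothetical, but the behavior (both statements succeed or neither is written) is exactly what the ACID model guarantees.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL CHECK (balance >= 0))"
)
con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])
con.commit()

try:
    # The connection acts as a transaction context:
    # commit if the block succeeds, roll back everything if it raises.
    with con:
        con.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")  # credit succeeds
        con.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")  # debit violates CHECK
except sqlite3.IntegrityError:
    print("Transaction aborted, nothing was written")

print(con.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# -> [(1, 100.0), (2, 50.0)]  both statements were rolled back together
```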

ACID Databases

The ACID transaction model ensures that all performed transactions result in reliable and consistent databases. This suits businesses that use OLTP (Online Transaction Processing) in IT systems such as ERP or CRM systems best. Furthermore, it can also be a good choice for OLAP (Online Analytical Processing), which is used in data warehouses. These applications need backend database systems that can handle many small or medium-sized transactions occurring simultaneously from many users. An interrupted transaction with write access must be removed from the database immediately, as it could cause negative side effects impacting consistency (e.g., vendors could be deleted although they still have open purchase orders, or financial payments could be debited from one account and, due to a technical failure, never credited to another).

Queries should run as fast as possible, but even more important for these applications is zero tolerance for invalid states, which is prevented by using ACID-conforming databases.

BASE Concept

ACID databases have their advantages but also one big trade-off: if all transactions need to be committed and checked for consistency correctly, the databases are slow in reading and writing data. Furthermore, they demand more effort when it comes to storing new data in new formats.

In chemistry, a base is the opposite of an acid. The database concepts of BASE and ACID have a similar relationship. The BASE concept provides several benefits over ACID-compliant databases, as it focuses more intensely on the data availability of database systems, without guaranteeing safety from network failures or inconsistency.

The acronym BASE is even more confusing than ACID as BASE relates to ACID indirectly. The words behind BASE suggest alternatives to ACID.

BASE stands for:

  • Basically Available – Rather than enforcing consistency in every case, BASE databases guarantee availability of data by spreading and replicating it across the nodes of the database cluster. Basic read and write functionality is provided without any liability for consistency. In rare cases it can happen that an insert or update statement does not result in persistently stored data, and read queries might not return the latest data.
  • Soft State – Databases following this concept do not check rules to stay write-consistent or mutually consistent. The user can toss all data into the database, delegating the responsibility of avoiding inconsistency or redundancy to developers or users.
  • Eventually Consistent – The absence of enforced immediate consistency does not mean that the database never achieves it. The database can become consistent over time: after a waiting period, updates ripple through all cluster nodes of the database. Reading data will always be possible, it is just not certain that we always get the most recently refreshed data (see the sketch after this list).
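The idea of eventual consistency can be sketched in a few lines of plain Python. The Replica class below is a toy model, not a real database, but it shows how a stale read on one node is repaired once replication catches up.

```python
import time

class Replica:
    """Toy replica node: accepts writes locally and syncs with peers later."""
    def __init__(self, name):
        self.name = name
        self.data = {}          # key -> (timestamp, value)

    def write(self, key, value):
        self.data[key] = (time.time(), value)

    def read(self, key):
        entry = self.data.get(key)
        return entry[1] if entry else None

    def sync(self, other):
        """Last-write-wins merge of both replicas' state."""
        for key, entry in other.data.items():
            if key not in self.data or entry[0] > self.data[key][0]:
                self.data[key] = entry
        other.data.update(self.data)

node_a, node_b = Replica("A"), Replica("B")
node_a.write("stock:article-42", 7)        # write lands on node A only
print(node_b.read("stock:article-42"))     # None -> stale read, no immediate consistency
node_a.sync(node_b)                        # background replication catches up
print(node_b.read("stock:article-42"))     # 7 -> eventually consistent
```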

All three of the above-mentioned properties of BASE-conforming databases sound like disadvantages. So why would you choose BASE? There is a trade-off compared to ACID: if a database does not have to follow the ACID properties, it can work much faster in terms of writing and reading. Furthermore, developers have more freedom to implement data storage solutions or to simplify data entry into the database without thinking about formats and structure beforehand.

BASE Databases

While ACID databases are mostly RDBMS, most other database types, known as NoSQL databases, tend to conform to BASE principles. Redis, CouchDB, MongoDB, Cosmos DB, Cassandra, Elasticsearch, Neo4J, OrientDB and ArangoDB are just some popular examples. But unlike ACID, BASE is not a strict approach. Some NoSQL databases apply at least partly to ACID rules or provide optional functions to achieve almost or even full ACID compatibility. These databases provide different levels of freedom, which can be useful for the staging layer in data warehouses or for a data lake, but they are not the recommended choice for applications that need data environments guaranteeing strict consistency.

Data Warehousing Basics

Data Warehousing is applied Big Data Management and a key success factor in almost every company. Without a data warehouse, no company today can control its processes and make the right decisions on a strategic level, as there would be a lack of data transparency for all decision makers. Bigger companies even have multiple data warehouses for different purposes.

In this series of articles I would like to explain what a data warehouse actually is and how it is set up. However, I would also like to explain basic topics regarding Data Engineering and concepts about databases and data flows.

To do this, we tick off the following points step by step:

Why Do Companies Use Data Lakes?

Modern enterprise-level computing operations have to capture a truly monumental amount of information every single hour. As the scale of data has grown almost exponentially over the years, so has the scale and complexity of data stores. Traditional databases couldn’t possibly keep up with the massive numbers of records that have to be created today, which is why so many firms ended up looking for alternatives to them.

For a while, it certainly seemed as though the new breed of data warehouses would fit the bill. Enterprises that had to harvest information from all of their inputs regarding every possible function of their businesses rushed to adopt this new paradigm, even as the streams of data they were collecting turned into raging rivers that some might call fire hoses. Into this scattered market entered an even newer concept that attempts to refactor the data processing question into a tranquil lake rather than a torrential downpour.

When businesses turn to this kind of digital infrastructure, they’re often looking to make sense out of this otherwise immeasurable flood.

When Data Lakes Beat Out Warehouses

Since lakes are moving into a space already occupied by existing operational structures, the data lake vs data warehouse debate is currently pretty heated. Proponents of lakes say that one of the biggest reasons that companies are turning to them is the question of vendor lock-in. When an enterprise-level user stores all of their information into a conventional warehouse, they're locked into a single vendor who processes data on their behalf. In general, their storage and analytic algorithms are bundled together into a package that's not easy to separate into disparate parts.

Others might be dealing with a specific processing engine that requires all of their information to be formatted a certain way as its own inputs can only understand data presented in said format. Those who’re dealing with this issue might not notice it until they finally get a flow of information that’s in some format that the warehouse engine doesn’t understand. Writing custom software to convert it isn’t an easy task, especially when there’s no API to work with.

Admittedly, these issues don’t normally come up when dealing with simple and concrete data analytics pipelines. Data warehouses do a great job of making information available and they certainly help managers draw insights from data that they might not otherwise get a chance to visualize and understand. As soon as you get into things like log analytics or processing data through machine learning technology, data warehouses will struggle. They’re normally based around relational database formats, which aren’t designed to manage semi-structured information.

Instead of going through the incredibly complex process of normalizing and cleaning all of the incoming data, it makes more sense for organization-wide IS departments to transition to a data lake.

Shifting to Data Lakes the Easy Way

Change is never easy, and that’s especially true when you’re working with IT in an enterprise market. However, the promises made by data lake proponents are attractive enough that some are making the leap. They offer an affordable single repository for all of your information regardless of what you want to store, which is why so many are now taking a dip in these virtual bodies of digital water. Structured and unstructured data can fit in together, which is certainly helping to ease the transition in many cases.

IS department staffers and technologists have found that streaming data straight from a transactional database into a data lake abstraction is a simple process. Doing so gives them the freedom to run analytics on it at a later date. Best of all, semi-structured seemingly random data points like those that come from a clickstream analysis can be moved in real-time without having to write some kind of intermediary back-end script that forces it into some kind of relational format.

However, some have taken this entirely too far and that’s where most of the criticism of data lake technology seems to be coming from. Data lakes unfortunately have to be managed carefully and the information stored in them still has to be organized in some way that makes sense and is, ideally, human readable. Locating completely unstructured data later on will simply be too difficult.

True believers in the technology have developed new solutions that get around this problem without requiring engineers to write any customized code, which has helped even skeptical businesses adopt this technology.

The Rise of Open Data Lakes

Building an open data lake is key to ensuring that any kind of processing engine can read information that comes out of the lake. That’s why an increasing number of developers are turning to platforms that don’t store this information in any proprietary format. While this might seem like it could get confusing since there are a number of different competing standards vying for market dominance, it’s actually aiding adoption.

Competition, even in the open-source community, has helped to ensure that data processing software ships with a relatively low incidence of bugs. The fact that there are multiple vendors in the space has also proven helpful to IS department heads looking to implement data lake solutions in their own organizations. Rather than just picking a single vendor who then provides everything, they could use several to handle different chores and get the best of all the available options. For instance, Amazon Athena-based technology may query information in a lake directly while IoT and log processing could be handled by something based on the Lucene library like Elasticsearch.

Some specialists might even wish to introduce Splunk or other related solutions into their custom data lake layouts. The wide variety of off-the-shelf projects has helped to dramatically reduce the number of individuals who have to write custom solutions, which makes it easy to get up and go with a new lake. Since there’s no need to convert information into a specific format, the implementation phase is usually much smoother.

Regardless of which specific solutions they elect to go with, however, it’s likely that an increasingly large percentage of the data processing market will look to data lakes. They’re quickly proving themselves to be both flexible and at least relatively easy to roll out.

What Is Data Lake Architecture?

The volume of information produced by everyone in the world is growing exponentially. To put it in perspective, it’s estimated that by 2023 the big data analytics market will reach $103 billion.

Finding probable solutions for storing big data is a challenge. It’s no easy task to hold enormous amounts of information, clean it and transform it into understandable subsets — it’s best to take one step at a time.

Some of the reasons why companies access their big data are to:

  • Improve their consumer experience
  • Draw conclusions and make data-driven decisions
  • Identify potential problems
  • Create innovative products

There are ways to help define big data. Combining its characteristics with storage management methods helps experts make their clients’ information digestible and understandable. Cue data lakes, which are repositories for big data in its native form.

Think of an actual lake with multiple water sources around the perimeter flowing into it. Picture these as three types of data: structured, semi-structured and unstructured. All this information can remain in a data lake and be accessed in its raw form at any time, making it an attractive storage method.

Here’s how data lakes are created, some of their components and how to avoid common pitfalls.

Creating a Data Lake

One benefit of creating and implementing a data lake is that structuring becomes much more manageable. Pulling necessary information from a lake allows analysts to compare and contrast data and communicate any connections between datasets to their client.

There are four steps to follow when setting up a data lake:

  1. Choosing a software solution: Microsoft, Amazon and Google are cloud vendors that allow developers to create data lakes without using servers.
  2. Identifying where data is sourced: Where is your information coming from? Once sources are identified, determine how your data will be cleaned or transformed.
  3. Defining process and automation: It’s vital to outline how information should be processed once the data lake ingests it. This creates consistency for businesses.
  4. Establishing retrieval governance: Choosing who has access to what types of information is crucial for companies with multiple locations and departments. It helps with overall organization. Data scientists, for this reason, primarily access data lakes.

The next step would be to determine the extract, transform and load (ETL) process. ETL extracts data from the sources, transforms it into a consistent structure and loads it into the target system, providing context that the business can work with. When information from a data lake is sent to a warehouse this way, it can be analyzed.

Components of a Data Lake

Here is what happens to information once a data lake is created:

  • Collection: Data comes in from various sources.
  • Ingestion: Data is processed using management software.
  • Blending: Data is combined from multiple sources.
  • Transformation: Data is analyzed and made sense of.
  • Publication: Data can be used to drive business decisions.

There are other aspects of a data lake to keep in mind. These are the critical components that help provide business solutions:

  • Security: Data lakes require security to protect information — they do not have built-in safety measures.
  • Governance: Determine who can check on the quality of data and perform measurements.
  • Metadata: This provides information about other data to improve understanding.
  • Stewardship: Choose one or more employees to take on the responsibility of managing data.
  • Monitoring: Employ other software to perform the ETL process.

Big data lends itself to incorporating multiple processes to make it usable for companies. The volume of information one company produces is massive — to manage it, experts need to consider these components and steps when building a data lake.

What to Avoid When Using Data Lakes

The last thing people want for their data lake is to see it turn into a swamp. When big data is processed incorrectly, its value decreases, making it useless to the business sourcing it.

The first step in avoiding a common pitfall is to consider the sustainability of the data lake. Planning processes are necessary to ensure it’s secure, and governing and regulating incoming information will allow for long-term use.

A lack of security causes another problem that can arise in data lakes. Safety measures must be implemented. Because enterprises will build data lakes for different purposes, it’s easy for information to become unorganized and vulnerable to hacking. With security, the likelihood of data breaches decreases, and the quality of data remains high.

The most important thing to remember about data lakes is the planning stage. Without proper preparation, they tend to be overwhelming due to their size and complexity. Taking the time and care to establish the processes ahead of time is vital.

Using Data Lake Architecture for Business

Data lakes store massive amounts of information to be used later on to create subsets, analyze metadata and more. Their advantages allow businesses to be flexible, save money and have access to raw information at all times.

The Need for DevOps in Data Science

Data science and machine learning are often associated with mathematics, statistics, algorithms and data wrangling. While these skills are central to successfully implementing machine learning in a company, one function is becoming increasingly important: DevOps for data science. DevOps covers infrastructure provisioning, configuration management, continuous integration and deployment, testing and monitoring. DevOps consulting teams have been working closely with development teams to manage the application lifecycle effectively.

Data science brings additional responsibilities to DevOps. Data engineering, a niche domain dealing with complex pipelines that transform the data, requires close collaboration between data science teams and DevOps. Data scientists explore the transformed data to find insights and correlations. DevOps teams are expected to support data scientists by creating environments for data exploration and visualization.

Building machine learning models differs fundamentally from traditional application development. Development is not only iterative but also heterogeneous. Data scientists and developers use a wide variety of languages, libraries, toolkits and development environments to build machine learning models. Popular languages for machine learning development such as Python, R and Julia are used in development environments based on Jupyter Notebooks, PyCharm, Visual Studio Code, RStudio and Juno. These environments must be available to the data scientists and developers solving ML problems.

Machine learning and deep learning require massive computing infrastructure running on powerful CPUs and GPUs. Frameworks such as TensorFlow, Caffe, Apache MXNet and Microsoft CNTK exploit the GPUs to perform complex computations for training ML models. Provisioning, configuring, scaling and managing these clusters is a typical DevOps function. DevOps teams may need to create scripts to automate the provisioning and configuration of the infrastructure for a variety of environments.

Similar to modern application development, machine learning development is iterative.

Once a fully trained ML model is available, DevOps teams are expected to host the model in a scalable environment, for example with Microsoft Azure and its associated DevOps solution. They can use orchestration engines such as Apache Mesos or Kubernetes to scale the model deployment.

DevOps teams use containers for provisioning development environments, data processing pipelines, training infrastructure and model deployment environments. Emerging technologies such as Kubeflow and MLflow focus on enabling DevOps teams to handle the new challenges of dealing with ML infrastructure.

Machine learning adds a new dimension to DevOps. Along with developers, operators must collaborate with data scientists and data engineers to support companies embracing the ML paradigm.

Operational Data Store vs. Data Warehouse

One of the main problems with large amounts of data, especially in this age of data-driven tools and near-instant results, is how to store the data. With proper storage also comes the challenge of keeping the data updated, and this is why organizations focus on solutions that make data processing faster and more efficient. For many, a digital transformation is on their roadmap, thanks in large part to the changes brought about by the global COVID-19 pandemic. The problem is that organizations often assume it is similar to traditional change initiatives, which couldn't be further from the truth. There are a number of challenges to prepare for in a digital transformation, and without proper planning, non-unified data storage systems and systems of record implemented through the years can slow down or even hinder the process.

Businesses have relied on two main solutions for data storage for many years: traditional data warehouses and operational data stores (ODS). These key data structures provide assistance when it comes to boosting business intelligence so that the business can make sound corporate decisions based on data. Before considering which one will work for your business, it’s important to understand the main differences between the two.

What is a Data Warehouse?

Data warehousing is a common practice because a data warehouse is designed to support business intelligence tools and activities. It’s subject-oriented so data is centered on customers, products, sales, or other subjects that contribute to the business bottom line. Because data comes from a multitude of sources, a data warehouse is also designed to consolidate large amounts of data in a variety of formats, including flat files, legacy database management systems, and relational database management systems. It’s considered an organization’s single source of truth because it houses historical records built through time, which could become invaluable as a source of actionable insights.

One of the main disadvantages of a data warehouse is its non-volatile nature. Non-volatile data is read-only and, therefore, not frequently updated or deleted over time. This leads to some time variance, which means that a data warehouse only stores a time series of periodic data snapshots that show the state of data during specific periods. As such, data loading and data retrieval are the most vital operations for a data warehouse.

What is an Operational Data Store?

Forward-thinking companies turn to an operational data store to resolve the issues with data warehousing, primarily, the issue of always keeping data up-to-date. Similar to a data warehouse, an ODS can aggregate data from multiple sources and report across multiple systems of record to provide a more comprehensive view of the data. It’s essentially a staging area that can receive operational data from transactional sources and can be queried directly. This allows data analytics tools to query ODS data as it’s received from the respective source systems. This offloads the burden from the transactional systems by only providing access to current data that’s queried in an integrated manner. This makes an ODS the ideal solution for those looking for near-real time data that’s processed quickly and efficiently.

Traditional ODS solutions, however, typically suffer from high latency because they are based on either relational databases or disk-based NoSQL databases. These systems simply can’t handle large amounts of data and provide high performance at the same time, which is a common requirement of most modern applications. The limited scalability of traditional systems also leads to performance issues when multiple users access the data store all at the same time. As such, traditional ODS solutions are incapable of providing real-time API services for accessing systems of record.

A Paradigm Shift

As modern real-time digital applications replace previously offline services, companies are going through a paradigm shift and venturing beyond what traditional data storage systems can offer. This has led to the rise of a new breed of ODS solutions that Gartner refers to as digital integration hubs. It’s a cost-effective solution because it doesn’t require a rip-and-replace if you already have a traditional ODS in place. Adopting a digital integration hub can be as simple as augmenting your current system with the missing layers, including the microservices API, smart cache, and event-driven architecture.

While sticking with a data warehouse or traditional ODS may not necessarily hurt your business, the benefits of modernization via a digital integration hub are too great to ignore. Significant improvements in throughput, availability, and scalability will help organizations become more agile so they can drive innovation quicker, helping their industry and pushing the limits of technology further to open up possibilities never before discovered.

Turbocharge Business Analytics With In-memory Computing

One of the customer traits that’s been gradually diminishing through the years is patience; if a customer-facing website or application doesn’t deliver real-time or near-instant results, it can be a reason for a customer to look elsewhere. This trend has pushed companies to turn to in-memory computing to get the speed needed to address customer demands in real-time. It simplifies access to multiple data sources to provide super-fast performance that’s thousands of times faster than disk-based storage systems. By storing data in RAM and processing in parallel against the full dataset, in-memory computing solutions allow for real-time insights that lead to informed business decisions and improved performance.

The in-memory computing solutions market has been on the rise in recent years because it has been heralded as the platform that will accelerate IT modernization. In-memory data grids, in particular, show great promise because they address the main limitation of an in-memory relational database. While the latter is designed to scale up, the former is designed to scale out. This scalability is one of the main draws of an in-memory data grid, since a scale-up architecture is not sustainable in the long term and will always have a breaking point. In-memory data grids, on the other hand, benefit from horizontal scalability and computing elasticity. Scaling an in-memory data grid is as simple as adding nodes to a cluster and removing them when they are no longer needed. This is especially useful for businesses that demand speed in the management of hundreds of terabytes of data across multiple networked computers in geographically distributed data centers.

Since big data is complex and fast-moving, keeping data synchronized across data centers is vital to preserve data integrity. Keeping data in memory removes the bottleneck caused by constant access to disk-based storage and allows applications and their data to collocate in the same memory space. Keeping frequently accessed data in memory and the rest on disk also improves speed and efficiency, and allows the total amount of data to exceed the amount of available memory, since data can reside both in memory and on disk.

Future-proofing Businesses With In-memory Computing

Data analytics is as much a part of every business as other marketing and business intelligence tools. Because data constantly grows at an exponential rate, in-memory computing serves as the enabler of data analytics because it provides speed, high availability, and straightforward scalability. Speeds more than 100 times faster than other solutions enable in-memory computing solutions to provide real-time insights that are applicable in a host of industries and use cases.

Location-based Marketing

A report from 2019 shows that location-based marketing helped 89% of marketers increase sales, 86% grow their customer base, and 84% improve customer engagement. Location data can be leveraged to identify patterns of behavior by analyzing frequently visited locations. By understanding why certain customers frequent specific locations and knowing when they are there, you can better target your marketing messages and make more strategic customer acquisitions. Location data can also be used as a demographic identifier to help you segment your customers and tailor your offers and messaging accordingly.

Fraud Detection

In-memory computing helps improve operational intelligence by detecting anomalies in transaction data immediately. Through high-speed analysis of large amounts of data, potential risks are detected early on and addressed as soon as possible. Transaction data is fast-moving and changes frequently, and in-memory computing is equipped to handle data as it changes. This is why it’s an ideal platform for payment processing; it helps make comparisons of current transactions with the history of all transactions on record in a matter of seconds. Companies typically have several fraud detection measures in place, and in-memory computing allows running these algorithms concurrently without compromising overall system performance. This ensures responsiveness of systems despite peak volume levels and avoids interruptions to customer service.

Tailored Customer Experiences

The real-time insights delivered by in-memory computing help personalize experiences based on customer data. Because customer experiences are time-sensitive, processing and analyzing data at super-fast speeds is vital in capturing real-time event data that can be used to craft the best experience possible for each customer. Without in-memory computing, getting real-time data and the other information needed to ensure a seamless customer experience would be nearly impossible.

Real-time data analytics helps provide personalized recommendations based on both existing and new customer data. By looking at historical data like previously visited pages and comparing them with newer data from the stream, businesses can craft the proper messaging and plan the next course of action. The anticipation and forecasting of customers’ future actions and behavior is the key to improving conversion rates and customer satisfaction—ultimately leading to higher revenues and more loyal customers.

Conclusion

Big data is the future, and companies that don't use it to their advantage will find it hard to compete in this ever-connected world that demands results in an instant. Processing and analyzing data will only become more complex and challenging over time, and for this reason, in-memory computing is a solution that companies should consider. Aside from improving their business from within, it will also help drive customer acquisition and revenue, while providing a viable low-latency, high-throughput platform for high-speed data analytics.