Written Articles on Big Data Analytics

Job Profile of the Data Engineer

Why data engineering has long been stealing the show from data science in terms of importance and career prospects, while itself being subject to constant change.

What a Data Engineer Really Needs to Know

The data scientist as the sexiest job of the 21st century? Maybe so, because the job has its very own appeal, not least due to its interface function between technology and domain expertise. But the spotlight of the coming years has long belonged to a different role in the data value chain, which also shows in the salaries.

Many companies are currently on their way to becoming a data-driven business, a style of corporate management that bases its decisions on transparent data foundations and automates operational processes as far as possible using business intelligence, data science, and automation with deep learning and RPA. The solutions to these challenges are often sought primarily from the experts for process automation and data science, yet success depends much more on the procurement of valid data foundations, and thus on a completely different key position in the workflow of data-driven decision processes: the data engineer.

Data Engineer, the most in-demand job of the 21st century?

The job of the data scientist, on the other hand, is still as popular as ever among students and graduates of STEM subjects, as the daily rush of graduates from data science related degree programs to such job postings proves. Nor is there still a great shortage of international applicants specializing in statistics and machine learning. The solidly trained and ideally German-speaking data scientist remains rare, but good candidates overall are no longer that hard to find. For years, many qualification programs for students and professionals have been available online, affordably and flexibly, without having to accept compromises in the reputation of these training and further education offerings.

What a data scientist has to cover in terms of expertise was discussed in detail in our look at the Data Science Knowledge Stack.

But what use is a data scientist if he does not have access to the data needed for his tasks? Certainly, preparing and presenting his projects is part of every data scientist's job. However, sourcing and managing large volumes of data in an enterprise-grade architecture is generally not his focus, and he often lacks the permissions to do so within enterprise IT. The need for data acquisition and preparation becomes even more concrete in business intelligence, which requires fixed structures such as a data warehouse for sustainable reporting.

The Profile of the Data Engineer: Big Data High-Tech

Even though data engineering is still treated somewhat neglectfully by universities and training providers, the use of data engineers and the resulting skill profile are outlined quite clearly by the market. The core use cases for these data engineers (in German acceptably called Dateningenieure) are the creation of data warehouse and data lake systems, nowadays primarily on cloud platforms. They develop these systems to tap into internal and external data sources and prepare the acquired data structurally and in terms of content so that other employees of the company can use it effectively.

Enabler for Business Intelligence, Process Mining and Data Science

No data engineer may lose sight of the actual consumer of the data, for whom the data is to be merged, cleansed, and brought into the target format according to all the rules of the craft. Classically, the engineers work on data warehousing for business intelligence or process mining, for which more and more event logs are needed. A data warehouse is the much larger part of the business intelligence (BI) iceberg lying under the water, supplying the reports with qualified data. This iceberg analogy can be applied to data engineering as a whole, which usually remains barely visible to the end users at the upper end of the data food chain, because they only see the finished analyses and not the data pools prepared for them.

Figure 1: Data engineering is the centerpiece of every data platform. Whether for data science, BI, process mining or even RPA, data delivery requires good data engineers who can dive all the way down to the cloud infrastructure.

Databases Are the Source and Target of Data Engineers

Data rarely comes neatly structured in a single CSV file; it usually originates from one or more databases, each of which follows its own rules. Business data, for example from ERP or CRM systems, resides in relational databases, often from Microsoft, Oracle, SAP or an open source alternative. Particularly in vogue right now are the cloud-native databases BigQuery from Google, Redshift from Amazon and Synapse from Microsoft, as well as the cloud-independent database Snowflake. They are joined by databases such as PostgreSQL, MariaDB or Microsoft SQL Server as well as CosmosDB, and by simpler cloud storage services such as Microsoft Blob Storage, Amazon S3 or Google Cloud Storage. Whichever database turns out to be the right choice for the company, nothing works in data engineering without SQL and an understanding of normalized data.
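
For illustration, here is a minimal sketch of what tapping such a relational source can look like in Python, the language recommended below; the connection string, table and column names are placeholder assumptions, and pandas plus SQLAlchemy are assumed to be installed.

# Minimal sketch: reading business data from a relational database into Python.
# Connection details, table and column names are placeholders, not a real system.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@db-host:5432/erp")

# A normalized schema usually requires joins before the data is useful downstream.
query = """
    SELECT o.order_id, o.order_date, c.customer_name, o.net_amount
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date >= '2023-01-01'
"""
orders = pd.read_sql(query, engine)
print(orders.head())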

Other kinds of databases, so-called NoSQL databases, are based on file formats, a column orientation or a graph orientation. Examples of widespread NoSQL databases are MongoDB, CouchDB, Cassandra or Neo4J. These databases do not exist merely as entertainment for bored nerds; they have very concrete areas of application in which they each offer the best read or write performance.

A data engineer must therefore be able to handle different database systems, some of which are native to different cloud platforms.

Data Engineers Need Hacker Qualities

If data is available in a database, analysts with access can already run simple analyses directly on the database. But how do we get the data into our specialized analysis tools? This is where the engineer has to step in and be able to export the data from the database. For direct data connections, APIs come into play, i.e. interfaces such as REST, ODBC or JDBC, and a good data engineer needs programming skills, preferably in Python, to address these APIs. Some knowledge of socket connections and client-server architectures occasionally pays off as well. Furthermore, every data engineer should be familiar with symmetric and asymmetric encryption methods, since confidential data is usually involved. A minimum standard of security is part of data engineering and must never be left to data security experts alone; an affinity for network security or even penetration testing is a plus, but at the very least clean permission management belongs to the basic skills. A lot of data is not available in structured form in a database, but is so-called unstructured or semi-structured data from documents or internet sources. With methods such as web scraping and data crawling, as well as the automation of data retrieval, outstanding data engineers even prove real hacker qualities.
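
To make the API part more concrete, here is a hedged Python sketch of pulling data from a REST interface; the endpoint, token, pagination fields and response structure are invented for illustration and do not refer to a real service.

# Sketch: pulling records from a (hypothetical) REST API into a DataFrame.
# Endpoint, token and response fields are assumptions, not a real service.
import requests
import pandas as pd

BASE_URL = "https://api.example.com/v1/transactions"
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}

records = []
page = 1
while True:
    response = requests.get(BASE_URL, headers=HEADERS, params={"page": page}, timeout=30)
    response.raise_for_status()          # fail loudly on HTTP errors
    payload = response.json()
    records.extend(payload["items"])     # assumed response structure
    if not payload.get("next_page"):     # assumed pagination flag
        break
    page += 1

df = pd.DataFrame(records)
print(df.shape)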

Conductor of the Data: Orchestration of Data Flows

One of the core tasks of the data engineer is the development of ETL pipelines to Extract data from sources, Transform it into the desired target format and finally Load it into the target database. This may sound simple at first, but it becomes a real challenge when many ETL processes combine into entire ETL chains and networks that still have to run performantly despite highly frequent data queries. The orchestration of the data flows can usually be divided into several stages: from the source into the data warehouse, between the layers within the data warehouse, and from the data warehouse into downstream systems, all the way to processed data flowing back into the data warehouse (reverse ETL).
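
As a rough idea of what a single ETL step can look like in code, here is a minimal Python sketch; the source and target connection strings, table names and cleansing rules are illustrative assumptions, and in practice such steps would be scheduled and monitored by an orchestrator such as Airflow.

# Minimal sketch of one ETL step: extract from a source system, transform, load into a warehouse.
# Connection strings, tables and columns are illustrative assumptions only.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql+psycopg2://user:pw@crm-host:5432/crm")
warehouse = create_engine("postgresql+psycopg2://user:pw@dwh-host:5432/dwh")

def run_etl() -> None:
    # Extract: raw customer records from the source system
    raw = pd.read_sql("SELECT * FROM customers", source)

    # Transform: cleanse and conform to the warehouse schema
    raw["email"] = raw["email"].str.strip().str.lower()
    raw = raw.drop_duplicates(subset="customer_id")
    conformed = raw.rename(columns={"customer_id": "customer_key"})

    # Load: append into the staging layer of the data warehouse
    conformed.to_sql("stg_customers", warehouse, if_exists="append", index=False)

run_etl()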

Close to the Border with DevOps: Automation in Cloud Architectures

In recent years, the requirements for data engineers have risen significantly, because in addition to the actual management of data assets and data streams for analysis purposes, a data engineer is increasingly expected to also manage resources in the cloud, at least the databases and ETL resources. Beyond that, they are increasingly expected to understand IT networks and to automate the entire interplay of resources as infrastructure as code. The automated deployment of data architectures via CI/CD pipelines is also turning the data engineer more and more into a DevOps engineer.

Future and Salary Prospects

Compared to the data scientist, who needs a particularly deep understanding of methods for data analysis and statistics as well as of the domain under investigation, data engineers are more oriented toward tools and platforms. A data scientist who has understood deep learning can quickly apply his knowledge with both TensorFlow and PyTorch. A data engineer, on the other hand, works more intensively with the tools themselves, which evolve much more rapidly over the years. A data engineer for the Google Cloud will need more time to adjust if he suddenly has to work on AWS or Azure.

In Germany, an entry-level data engineer with good prior knowledge and first experience can expect a gross annual salary between 45,000 and 55,000 EUR. More than two years of concrete experience in data engineering is gladly rewarded by companies with salaries between 50,000 and 80,000 EUR. Above that are usually only the data architects, who tend to be found in large companies and require particularly extensive experience. Further career paths for data engineers are consulting careers or leadership positions.

Anyone who has brought a data engineer into permanent employment should not feel too secure, however, because recruiters lurk around every corner of social media for these qualified specialists. Especially in metropolitan areas such as Berlin, by no means all companies manage to keep every data engineer employed for years. Given the large selection of jobs and challenges, these data experts find it easy to proactively drive their salary increases through job changes.

Hybrid Cloud

The Cloud or Hybrid Cloud – Pros & Cons

Big data and artificial intelligence (AI) are some of today’s most disruptive technologies, and both rely on data storage. How organizations store and manage their digital information has a considerable impact on these tools’ efficacy. One increasingly popular solution is the hybrid cloud.

Cloud computing has become the norm across many organizations as on-premise solutions struggle to meet modern demands for uptime and scalability. Within that movement, hybrid cloud setups have gained momentum, with 80% of cloud users taking this approach in 2022. Businesses noticing that trend and considering joining should carefully weigh the hybrid cloud's pros and cons. Here's a closer look.

The Cloud

To understand the advantages and disadvantages of hybrid cloud setups, organizations must contrast them against conventional cloud systems. These fall into two categories: public, where multiple clients share servers and resources, and private, where a single party uses dedicated cloud infrastructure. In either case, using a single cloud presents unique opportunities and challenges.

Advantages of the Cloud

The most prominent advantage of traditional cloud setups is their affordability. Because both public and private clouds entirely remove the need for on-premise infrastructure, users pay only for what they need. Considering how 31% of users unsatisfied with their network infrastructure cite insufficient budgets as the leading reason, that can be an important advantage.

The conventional cloud also offers high scalability thanks to its reduced hardware needs. It can also help prevent user errors like misconfiguration, because third-party vendors handle much of the management side. Avoiding those mistakes makes it easier to use tools like big data and AI to their full potential.

Disadvantages of the Cloud

While outsourcing management and security workloads can be an advantage in some cases, it comes with risks, too. Most notably, single-cloud or single-type multi-cloud users must give up control and visibility. That poses functionality and regulatory concerns when using these services to train AI models or analyze big data.

Storing an entire organization’s data in just one system also makes it harder to implement a reliable backup system to prevent data loss in a breach. That may be too risky in a world where 96% of IT decision-makers have experienced at least one outage in the last three years.

Hybrid Cloud

The hybrid cloud combines public and private clouds so users can experience some of the benefits of both. In many instances, it also combines on-premise and cloud environments, letting businesses use both in a cohesive data ecosystem. Here’s a closer look at the hybrid cloud’s pros and cons.

Advantages of Hybrid Cloud

One of the biggest advantages of hybrid cloud setups is flexibility. Businesses can distribute workloads across public, private and on-premise infrastructure to maximize performance with different processes. That control and adaptability also let organizations use different systems for different data sets to meet the unique security needs of each.

While hybrid environments may be less affordable than traditional clouds because of their on-premise parts, they offer more cost-efficiency than purely on-prem solutions. Having multiple data storage technologies provides more disaster recovery options. With 75% of small businesses being unable to recover from a ransomware attack, that’s hard to ignore.

Hybrid cloud systems are also ideal for companies transitioning to the cloud from purely on-premise solutions. The mixture of both sides enables an easier, smoother and less costly shift than moving everything simultaneously.

Disadvantages of Hybrid Cloud

By contrast, the most prominent disadvantage of hybrid cloud setups is their complexity. Creating a system that works efficiently between public, private and on-prem setups is challenging, making these systems error-prone and difficult to manage. Misconfigurations are the biggest threat to cloud security, so that complexity can limit big data and AI’s safety.

Finding compatible public and private clouds to work with each other and on-prem infrastructure can also pose a challenge. Vendor lock-in could limit businesses’ options in this regard. Even when they get things working, they may lack transparency, making it difficult to engage in effective big data analytics.

Which Is the Best Option?

Given the advantages and disadvantages of hybrid cloud setups and their conventional counterparts, it’s clear that no single one emerges as the optimal solution for every situation. Instead, which is best depends on an organization’s specific needs.

The hybrid cloud is ideal for companies facing multiple security, regulatory or performance needs. If the business has varying data sets that must meet different regulations, some information that’s far more sensitive than others or has highly diverse workflows, they need the hybrid cloud’s flexibility and control. Companies that want to move slowly into the cloud may prefer these setups, too.

On the other hand, the conventional cloud is best for companies with tighter budgets, limited IT resources or a higher need for scalability. Smaller businesses with an aggressive digitization timeline, for example, may prefer a public multi-cloud setup over a hybrid solution.

Find the Optimal Data Storage Technology

To make the most of AI and big data, organizations must consider where they store the related data. For some companies, the hybrid cloud is the ideal solution, while for others, a more conventional setup is best. Making the right decision begins with understanding what each has to offer.

How to speed up claims processing with automated car damage detection

AI drives automation, not only in industrial production or for autonomous driving, but above all in dealing with bureaucracy. It is a real enabler for lean management!

One example is the use of deep learning (as part of artificial intelligence) for image object detection. A car insurance company assesses the amount of damage from a damage report after a car accident. This process has traditionally been performed by human professionals. With AI, we can partially automate this process using image data (photos of car damage). After training the AI with millions of photos in relation to the real costs of repair or replacement, the cost estimation becomes surprisingly accurate and supports the process in both speed and quality.
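
As a rough illustration of the training pattern behind such a system, here is a hedged Python sketch that fine-tunes a pretrained CNN on photos sorted into damage-severity folders; the folder layout, class names and hyperparameters are assumptions, and a production system estimating repair costs would rather use regression or object detection on much larger data.

# Sketch only: fine-tuning a pretrained CNN on car damage photos grouped by severity.
# Folder layout, classes and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects e.g. damage_photos/minor, damage_photos/moderate, damage_photos/total_loss
train_data = datasets.ImageFolder("damage_photos", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()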

AI drives automation and DATANOMIQ drives this automation with you! You can download the Infographic as PDF.


We wrote this article in cooperation with pixolution, a company for computer vision and AI-based visual search. Interested in introducing AI / Deep Learning to your organization? Do not hesitate to get in touch with us!

DATANOMIQ is the independent consulting and service partner for business intelligence, process mining and data science. We are opening up the diverse possibilities offered by big data and artificial intelligence in all areas of the value chain. We rely on the best minds and the most comprehensive method and technology portfolio for the use of data for business optimization.

7 Reasons Why You Need Cloud Cost Management Tool

Many businesses today use a CCM (Cloud Cost Management) tool to optimize their spending on the cloud. Read this article to learn why.

7 Reasons Why You Need Cloud Cost Management Tool (And Where to Find One)

The Cloud. Do you remember the time when this sounded like a silly, fiction-like buzzword? No one imagined that a few years later, we'd be amazed by the technology, or that the cloud would be used by millions of people and businesses.

As time passes, more and more businesses shift to the cloud. They use clouds for data warehousing and to share information with remote workers and clients. They use it to keep information easily accessible and safely stored in a digital format.

Clouds have become irreplaceable in the world. In 2020, the total worth of this market reached $371.4 billion. With a compound annual growth rate of 17.5%, the projections tell us that this market will reach $832 billion by 2025.

Today, zettabytes of data are placed in the top storage services such as Google Drive, Google Workspace, and Dropbox, as well as private IT clouds.

Most used cloud storage services (Source: https://www.cloudwards.net/cloud-computing-statistics/)

In a few years from now, there will be over a hundred zettabytes of data placed in the cloud, which amounts to nearly half of the global data storage projected for 2025. At this point, it is safe to say that clouds are trending in the business world and we can only expect these numbers to grow.

This raises a very big question – how will businesses handle their cloud usage?

One of the biggest challenges that companies face when it comes to the cloud is controlling cloud costs. Fortunately, there’s such a thing as cloud cost optimization, used to help businesses minimize their expenses and achieve better cloud performance.

This article will tell you all about cloud cost optimization and the tools used to make this happen. Read on to learn about the top 7 reasons why you need to invest in a good CCM tool.

What is cloud cost optimization?

Before we jump into the benefits of using a CCM, let's discuss cloud cost optimization (CCO) a bit. CCO refers to reducing resource waste in the cloud by scaling and choosing only the resources required for cloud functions.

Data analytics in the cloud can transform your business altogether. While cloud usage is difficult to analyze on your own, certain tools can help you find ways to optimize cloud performance.

Depending on what you use for this purpose, the success of your CCO strategy can vary. Nevertheless, with a good tool, you can get insight into what you’re doing right and what you need to change to get the most out of your cloud investment.

Choosing one of the best cloud cost management tools

When it comes to choosing cloud cost management tools, Zluri is the answer to all your questions. The popular SaaS management platform has a full range of top-ranked tools to be used for cloud cost optimization.

According to Zluri, the top choice available to businesses today is Harness.

The number one CCM tool will proactively detect anomalies and tell you all you need to know about your cloud resources. It will also predict your cloud spending and help you make smart data-based decisions. Lastly, Harness allows users to automate idle resource management.

Source: https://www.zluri.com/blog/cloud-cost-management-tools/

Some of the things to look for when choosing a cloud cost management tool are:

  • Automated cost optimization, i.e. the option to schedule resources to turn on and off to reduce costs and manual effort.
  • Visibility of resources. The tool you use should make the various resources used in the cloud environment visible, such as applications, containers, and servers.
  • State of resources. A good CCM will tell you about the state of resources, how they are used, and whether they are underutilized.
  • Multi-cloud support. Many businesses store their data across different cloud providers. If you are one of them, you need a tool that offers multi-cloud support.

Naturally, the pricing and ease of use are also important when you’re making your choice.

Reasons to invest in a cloud cost management tool

Now that you know where to find the best CCM, let’s talk about what you get if you use a cloud cost management tool.

It all starts with what the CCM can do for you. Here are just a few of the most popular functions:

  • Forecast and budget the cloud spend with a great deal of accuracy
  • Inform on the current cloud costs
  • Discover areas that can increase profitability and find the least profitable projects within the cloud
  • Give you tips on adjusting the pricing structure and re-thinking unnecessary features

Source: https://www.jamcracker.com/blogs/fundamentals-of-cloud-cost-management

With the right tool in your arsenal, you can reap many benefits. We present you with the top seven.

Source: https://www.techtarget.com/searchcloudcomputing/feature/5-ways-to-reduce-cloud-costs

1.    Define the cloud costs

451 Research reported that 73% of US cloud users look at their cloud expenses as a fixed cost. But these expenses aren't fixed; they are variable. That's why businesses often come across cost inefficiencies. Cloud service pricing changes constantly, and interpreting providers' plans is not as straightforward as it might seem.

However, with a good CCM tool, you can tie all the expenses to pricing and value and figure out a fixed quote for cloud spending. You can use this data to plan and manage your budget or figure out ways to make your cloud use more cost-conscious.

2.    More transparency

A good model for cost optimization will positively impact your operational, as well as financial aspects. Thanks to the analytics feature in a CCM tool, you can monitor and better organize your cloud expenditures. You’ll know how your cloud is used, how much every action costs, and what you can do to optimize this. More transparency leads to better decision-making.

3.    Reduce waste and correct defects

Let's say that you invest in a cloud cost management tool. It will discover the resources you are spending money on and detect features that are underused. You can use this information to manage neglected tools and eliminate the unnecessary ones. You can correct the defects and reduce waste.

As a result of the transparency and analytics, users of a cloud can find a better cost-performance balance and optimize their cloud.

4.    Smart predictions

A cloud cost management tool will provide you with more than waste information. The usage patterns you’ll get access to can be analyzed and used to make smart predictions. You can use the data to perform consolidated cost analysis and predict future expenses for the cloud. Base this on additional cloud services you’ll need, increased cloud usage, as well as scalability.

5.    Timely reviews

By performing audits of cloud usage and resources, teams can get better control over this matter. A CCM tool can perform audits every couple of months and hand out reports to the company about cloud usage.

These reports are shared within organizations and used to categorize expenses and drop under-utilized resources in the cloud.

6.    Memory and storage management

Another great benefit of using a CCM tool is that it helps you manage your storage. With it, you can detect unused data, manage regular data backups, and check the usage of the cloud.

Going through everything stored in your cloud can take forever, but not with these tools. With these tools, organizations can find and discard data that is useless and save a lot of money in the process. It’s amazing how much is stored in clouds without actual use. The right cloud cost management tool can point you in the direction of things you don’t need and remind you to cut them loose.

7.    Multi-cloud synchronization

Chances are, your business is using more than one cloud service to store its data. After a while, your data is scattered across different clouds and you cannot find or manage information as easily as you should. When you’re handling more clouds, it gets much harder to optimize the costs or reduce the expenses.

But, not if you have a good CCM tool.

A good CCM tool offers synchronization between at least the most-used cloud services out there. This will allow you to optimize your costs across different clouds, figure out where you can save some money, and find a way to organize your cloud usage more efficiently.

The best time to optimize your cloud is now!

Who would have thought that clouds would become such a big part of our business operations? If you are using clouds for your business, investing in a cloud cost management tool is the next smartest step to take. Thanks to a small investment in a good tool, you can save a fortune on cloud costs.

Data Dimensionality Reduction Series: Random Forest

Hello lovely individuals, I hope everyone is doing well, is fantastic, and is smiling more than usual. In this blog we shall discuss a very interesting technique used to build many models in the data science industry as well as the cybersecurity industry.

SUPER BASIC DEFINITION OF RANDOM FOREST:

Random forest is a supervised machine learning algorithm that operates on majority rule. For example, if we have a number of different models working on the same problem but producing different answers, the majority of the findings is taken into account. Random forests, also known as random decision forests, are an ensemble learning approach for classification, regression, and other problems that works by building a multitude of decision trees during training.

When it comes to regression and classification, random forest can handle both categorical and continuous variable data sets. It typically helps us outperform other algorithms and overcome challenges like overfitting and the curse of dimensionality.

QUICK ANALOGY TO UNDERSTAND THINGS BETTER:

Uncle John wants to see a doctor for his acute abdominal discomfort, so he goes to his pals for recommendations on the top doctors in town. After consulting with a number of friends and family members, Uncle John chooses to visit the doctor who received the highest recommendations.

So, what does this mean? The same is true for random forests. It builds decision trees from several samples and utilises their majority vote for classification and average for regression.

HOW DO BIAS AND VARIANCE AFFECT THE ALGORITHM?

  1. BIAS
  • The error from overly simplistic model assumptions; it measures how far the average prediction lies from the true value.
  • High bias means a poor fit (underfitting).
  2. VARIANCE
  • The error from the model's sensitivity to fluctuations in the training data.
  • High variance means the model fits the training data too closely and generalizes poorly (overfitting).

We would like to minimise each of these. But unfortunately, we can't minimise both independently, since there is a trade-off.

EXPECTED PREDICTION ERROR = VARIANCE + BIAS^2 + NOISE^2
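
Written out more formally, this is the standard decomposition of the expected squared prediction error at a point x for a model \hat{f} trained on random samples of the data:

\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{Bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{Variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}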

Bias vs Variance Tradeoff

HOW IS IT DIFFERENT FROM THE OTHER TWO ALGORITHMS?

Every other data dimensionality reduction method in this series, such as the missing value ratio and principal component analysis, must be built from scratch, but the best thing about random forest is that it comes with built-in feature importance, and it is a tree-based model that uses a combination of decision trees for non-linear data classification and regression.

Without wasting much time, let’s move to the main part where we’ll discuss the working of RANDOM FOREST:

WORKING WITH RANDOM FOREST:

As we saw in the analogy, RANDOM FOREST operates on the basis of ensemble technique; however, what precisely does ensemble technique mean? It’s actually rather straightforward. Ensemble simply refers to the combination of numerous models. As a result, rather than a single model, a group of models is utilised to create predictions.

ENSEMBLE TECHNIQUE HAS 2 METHODS:

Ensemble Learning: Bagging and Boosting

1] BAGGING

2] BOOSTING

Let’s dive deep to understand things better:

1] BAGGING:

LET’S UNDERSTAND IT THROUGH A BETTER VIEW:

Bagging simply helps us to reduce the variance in a noisy dataset. It works as an ensemble technique.

  1. Algorithm independent : general purpose technique
  2. Well suited for high variance algorithms
  3. Variance reduction is achieved by averaging a group of data.
  4. Choose # of classifiers to build (B)

DIFFERENT TRAINING DATA:

  1. Sample Training Data with Replacement
  2. Same algorithm on different subsets of training data

APPLICATION :

  1. Use with high variance algorithms (DT, NN)
  2. Easy to parallelize
  3. Limitation: Loss of Interpretability
  4. Limitation: What if one of the features dominates?

SUMMING IT ALL UP:

  1. Ensemble approach = Bootstrap Aggregation.
  2. In bagging a random dataset is selected as shown in the above figure and then a model is built using those random data samples which is termed as bootstrapping.
  3. Now, when we train on this random sample it is not mandatory to select each data point only once; while sampling the training data we can select an individual data point more than once.
  4. Now each of these models is built and trained and results are obtained.
  5. Lastly the majority results are being considered.
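
The bootstrap-aggregation procedure summarized above can be sketched in a few lines with scikit-learn; the synthetic dataset and parameter values below are purely illustrative, and the default base learner of BaggingClassifier is a decision tree.

# Minimal sketch of bagging: many trees, each trained on a bootstrap sample,
# combined by majority vote. Dataset and parameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

bagger = BaggingClassifier(
    n_estimators=100,   # B, the number of classifiers to build
    bootstrap=True,     # sample the training data with replacement
    random_state=42,
)
bagger.fit(X_train, y_train)
print("Test accuracy:", bagger.score(X_test, y_test))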

We can even calculate the error from this, known as the random forest OOB error:

RANDOM FORESTS: OOB ERROR  (Out-of-Bag Error) :

▪ From each bootstrapped sample, 1/3rd of it is kept aside as “Test”

▪ Tree built on remaining 2/3rd

▪ Average error from each of the “Test” samples is called “Out-of-Bag Error”

▪ OOB error provides a good estimate of model error

▪ No need for separate cross validation
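
In scikit-learn the OOB estimate is available directly when oob_score=True is set; the sketch below uses a synthetic dataset purely for illustration.

# Sketch: the out-of-bag (OOB) score as a built-in error estimate, no separate CV needed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=25, random_state=0)

forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
forest.fit(X, y)

print("OOB accuracy:", forest.oob_score_)   # estimated on samples left out of each bootstrap
print("OOB error:", 1 - forest.oob_score_)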

2] BOOSTING:

Boosting in short helps us to improve our prediction by reducing error in predictive data analysis.

Weak Learner: only needs to generate a hypothesis with a training accuracy greater than 0.5, i.e., < 50% error over any distribution.

KEY INTUITION:

  1. Strong learners are very difficult to construct
  2. Constructing weak learners is relatively easy

APPROACH OUTLINE:

  1. Start with a ML algorithm for finding the rough rules of thumb (a.k.a. “weak” or “base” algorithm)
  2. Call the base algorithm repeatedly, each time feeding it a different subset of the training examples
  3. The basic learning algorithm creates a new weak prediction rule each time it is invoked.
  4. After several rounds, the boosting algorithm must merge these weak rules into a single prediction rule that, hopefully, is considerably more accurate than any of the weak rules alone.

TWO KEY DETAILS :

  1. In each round, how is the distribution selected ?
  2. What is the best way to merge the weak rules into a single rule?

BOOSTING is classified into two types:

1] ADA BOOST

2] XG BOOST

As far as the random forest is concerned, it follows the bagging method, not a boosting method. As the name implies, boosting involves learning from the mistakes of previous learners, which in turn improves the model. Random forests have trees that run in parallel; while creating the trees, there is no interaction between them.

Boosting helps us reduce the error by decreasing the bias whereas bagging, on the other hand, decreases the variance of the prediction by generating additional training data from the dataset through sampling with repetition, producing multiple sets of the original data.
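
For comparison with the bagging sketch earlier, here is a hedged sketch of the AdaBoost variant mentioned above, again on synthetic data; by default scikit-learn's AdaBoostClassifier boosts shallow decision trees (stumps).

# Sketch of boosting with AdaBoost: weak learners trained sequentially,
# each round putting more weight on previously misclassified examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

booster = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=1)
booster.fit(X_train, y_train)
print("Test accuracy:", booster.score(X_test, y_test))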

How Bagging helps with variance – A Simple Example

BAGGED TREES

  1. Decision Trees have high variance
  2. The resultant tree (model) is determined by the training data.
  3. (Unpruned) Decision Trees tend to overfit
  4. One option: Cost Complexity Pruning

BAG TREES

  1. Sample with replacement (1 Training set → Multiple training sets)
  2. Train model on each bootstrapped training set
  3. Multiple trees; each different : A garden ☺
  4. Each DT predicts; Mean / Majority vote prediction
  5. Choose # of trees to build (B)

ADVANTAGES

Reduce model variance / instability.

RANDOM FOREST : VARIABLE IMPORTANCE

VARIABLE IMPORTANCE :

▪ Each time a tree is split on a variable m, the Gini impurity index of the parent node is higher than that of the child nodes

▪ Adding up all the decreases in the Gini index due to variable m over all trees in the forest gives a measure of variable importance

IMPORTANT FEATURES AND HYPERPARAMETERS:

  1. Diversity : each tree is grown on a different bootstrap sample and a different random subset of features, so the trees are decorrelated.
  2. Immunity to the curse of dimensionality : since each tree only considers a subset of features at every split, the effective feature space per tree stays manageable.
  3. Parallelization : each tree is built independently, so training can be distributed across processor cores.
  4. Train-Test split : the out-of-bag samples act as a built-in test set, so no separate split is strictly required.
  5. Stability : the final result is a majority vote (or average), so it is less sensitive to changes in individual trees.
  6. Gini significance (or mean decrease impurity) : variable importance measured by the total decrease in Gini impurity attributable to a feature.
  7. Mean Decrease Accuracy : variable importance measured by how much accuracy drops when the feature's values are permuted.

FEATURES THAT IMPROVE THE MODEL’S PREDICTIONS and SPEED :

  1. max_features :

Increasing max_features often increases model performance, since each node now has a greater number of candidate features to examine.

  2. n_estimators :

The number of trees you wish to build before calculating the maximum voting or prediction averages. A greater number of trees usually improves performance and stability, but makes training and prediction slower.

  3. min_samples_leaf :

If you've ever designed a decision tree, you'll understand the significance of the minimum sample leaf size. A leaf is the decision tree's last node. A smaller leaf increases the likelihood of the model capturing noise in the training data.

  4. n_jobs :

This option instructs the engine on how many processors it is permitted to utilise.

  5. random_state :

This argument makes it simple to reproduce a solution. Given the same parameters and training data, a fixed value of random_state will always produce the same results.

  6. oob_score :

A random forest cross-validation approach is used here. It is similar to the leave-one-out validation procedure, except it is significantly faster.

LET’S SEE THE STEPS INVOLVED IN IMPLEMENTATION OF RANDOM FOREST ALGORITHM:

Step1: Choose T- number of trees to grow

Step2: Choose m < p (p is the total number of features), the number of features used to calculate the best split at each node (typically p/3 for regression, sqrt(p) for classification)

Step3: For each tree, choose a training set by choosing N times (N is the number of training examples) with replacement from the training set

Step4: For each node, calculate the best split; trees are fully grown and not pruned.

Step5: Use majority voting among all the trees
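
These steps map almost one to one onto scikit-learn's RandomForestClassifier, which also exposes the hyperparameters discussed above; the dataset and parameter values in this sketch are illustrative, and the notebook linked below goes into far more detail.

# Sketch of the steps above with scikit-learn; dataset and values are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

forest = RandomForestClassifier(
    n_estimators=500,       # Step 1: T, the number of trees to grow
    max_features="sqrt",    # Step 2: m < p features per split (sqrt(p) for classification)
    min_samples_leaf=1,     # Step 4: trees are fully grown, not pruned
    n_jobs=-1,              # use all available processors
    oob_score=True,         # built-in out-of-bag error estimate
    random_state=7,         # reproducibility
)
forest.fit(X_train, y_train)                             # Step 3: bootstrap sampling happens internally
print("Test accuracy:", forest.score(X_test, y_test))    # Step 5: majority vote over all trees
print("OOB accuracy:", forest.oob_score_)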

Following is a full case study and implementation of all the principles we just covered, in the form of a jupyter notebook including every concept and all you ever wanted to know about RANDOM FOREST.

GITHUB Repository for this blog article: https://gist.github.com/Vidhi1290/c9a6046f079fd5abafb7583d3689a410

Control the visibility of the PowerBI visuals based on condition

In PowerBI, there is no direct or functional mechanism to adjust the visibility (show/hide) of visualizations based on filter choices. There is, however, a workaround that enables us to show or hide visuals based on a filter condition.

The fundamental concept behind this technique is to apply a mask to a visual and change its opacity based on a condition or filter selection.

Use Case:

I have a detail table of orders. These orders are divided into Consumer, Home Office, and Corporation categories. I use the segment as a filter. One of the requirements is to present the detail table only if the overall profit for the selected segment is less than $100,000. To do this, the task will be divided into two major parts. First, we will display the table if the filter is selected. Next, we will add a condition to the table.

Step 1: Show the table only if a filter is selected

  • Place filter (Slicer) and visual on the Report Pane.

  • Create a measure that will determine if the filter is selected or not.

Filter_Selected = IF(ISFILTERED(Orders[Segment]),1,0)

  • Add this measure to the filter pane of the table visualization and select the "show items when the value is 1" option. This will ensure that when no option is selected, only the header is displayed.

  • Set the mask down on the table. Make sure you only mask the table header with a border color that matches your background, or remove it entirely.

  • Create a measure to change the mask's transparency. If two zeros are appended to the end of any HEX color code, this represents complete transparency.

mask_transparency = IF([Filter_Selected], "#FFFFFF00", "#FFFFFF")

  • Keep this measure on the Fill of the mask and add conditional formatting to it.

If the mask_transparency (measure) field is grayed out during the previous steps, you may need to change the data type of mask_transparency to text.

Step 2 : Add a condition to the solution

  • Create a new measure to determine if our condition is met.

condition_check = IF(CALCULATE(SUM(Orders[Profit]), FILTER(ALL(Orders), Orders[Segment] = SELECTEDVALUE(Orders[Segment]))) < 100000, 1, 0)

  • Now add this new measure to the table visual's filter pane and pick the "show items when the value is 1" option. This ensures that the table will appear only if the condition is met.

You can now display or hide visuals based on slicer selection and condition. If you know a better way to do this, please comment and let me know. For this article, I referred to this page.

 

Better Customer Service Using Big Data

Big data is frequently discussed across many industries by more than just business owners, CEOs or IT managers. Big data and big data analytics are two critical elements of modern business that company leaders and their employees should understand if they want to make more informed decisions.

In addition to the highly data-driven business landscape, people’s needs and expectations are changing. Companies with superb customer service gain a competitive advantage over competitors with poor operations.

The power of big data analytics helps organizations take steps to improve their customer service offerings, ultimately meeting or exceeding the needs and expectations of existing and potential clients.

An Overview of Big Data

What exactly is big data and how is it different from traditional data?

Big data describes large, diverse datasets growing at increasing rates and proving highly useful in business. Datasets are so voluminous that traditional data processing software solutions cannot manage them properly.

Here are the “five Vs,” or essential qualities, that accurately describe big data:

  • Volume
  • Velocity
  • Variety
  • Veracity
  • Value

Businesses that leverage big data can address or even prevent a range of problems that would otherwise be more challenging to solve.

Organizations collect, combine and mine three types of data — structured, semi-structured and unstructured — for advanced analytics applications.

Benefits of Big Data Analytics

After analyzing big data, the new insights gathered on company operations and other critical business issues help companies overcome existing problems, some of which might be costly and create obstacles.

Here are two main benefits of big data analytics:

Customer Attraction and Retention

Big data analytics gives companies detailed insights into customers’ wants and needs.

For example, organizations can review customer data and adjust their current sales or marketing strategies to increase loyalty and satisfaction. Big data can also highlight changes in client sentiment and predict future trends.

Increased Employee Productivity

Monitoring employee performance is essential for most companies. Thankfully, big data analysis can show leaders how individual workers perform and measure their productivity.

Big data can analyze important factors such as absenteeism rates, number of sick days taken, workload and output. Once this information is collected, supervisors can relay findings to employees and make improvements to bolster productivity.

Other benefits exist, but these two examples provide a glimpse into the world of big data and how transformative it is in the modern business world.

How to Use Big Data to Improve Customer Service

There are a few ways businesses can harness big data analytics to gain insights and take actionable steps to improve their customer service offerings. Here’s how.

Solves Customer Inquiries More Effectively

Contacting a customer service center is often time-consuming and headache-inducing for a consumer, especially when the representative cannot answer a question or solve a problem.

Lack of effectiveness and speed are two of the most common causes of customer service frustration. Qualitative and quantitative big data analytics let customer service employees identify their weaknesses, such as their familiarity with a product or service, and take action accordingly.

For example, a representative can spend more time learning about customers’ most common issues while using a specific product, allowing them to solve problems faster and more effectively.

Increases Personalized Offers

A business can achieve significant revenue growth by aligning customer behaviors and marketing messages. Personalized offerings are becoming increasingly popular among consumers. In other words, people want companies to see them as individuals rather than a source of profit.

Big data analytics helps organizations increase the number and quality of personalized offerings. For example, analytics can reveal critical customer information, like how much money they spend, what products they buy and which services they use.

These details help employees create and automate personalized marketing offers. Customer service representatives can also use this data to make recommendations based on buyer preferences, improving the experience and building loyalty.

Empowers Customer Service Representatives

Big data analytics are a major boon to customer service representatives. These employees are considered the face of the company, meaning they must have access to all the resources they need. Insights from big data are no exception.

Representatives working with results from big data analysis are in a better position to respond to inquiries more quickly and provide effective customer solutions. They will likely perform well if they have insights at their disposal.

Provide Superior Customer Support With Big Data Analytics

No matter the industry, virtually every organization relies on data, whether it’s sales, web traffic, customer, supply chain management or inventory data.

Data is becoming increasingly important for companies in today’s competitive business environment. The role of big data will continue to grow as more organizations recognize its positive impact on customer service and satisfaction.

AI for games, games for AI

1, Who is playing or being played?

Since playing the Japanese video games "Demon's Souls" and "Dark Souls" when they were released by From Software, I had played almost no video games for many years. During that period, From Software established a genre named soul-like games. Soul-like games are called 死にゲー in Japanese, which means "dying games," and they are also called マゾゲー, which means "masochistic games." As the words imply, you have to be almost masochistic to play such video games because you have to die numerous times in them. And I think recently it has been one of the most remarkable times for From Software, because in November of 2021 "Dark Souls" was selected as the best video game ever at the Golden Joystick Awards. And at the end of last February a new video game by From Software called "Elden Ring" was finally released. After it was revealed that Miyazaki Hidetaka, the director of the Souls series, had collaborated with George R. R. Martin, the author of the novels behind "Game of Thrones," "Elden Ring" became one of the most anticipated video games. In spite of its notorious difficulty, like other soul-like games so far, "Elden Ring" became a big hit, and I think Miyazaki Hidetaka is now the second most famous Miyazaki in the world. A lot of people have been playing it, raging, and screaming. I was no exception, and it took me around 90 hours to finish the video game, breaking a game controller by the end of it. It had been a long time since I had last been so childishly emotional, and I was almost addicted to the trial and error the video game provides. At the same time, one question crossed my mind: is it the video game or us that is being played?

The childhood nightmare strikes back. Left: the iconic and notorious boss duo Ornstein and Smough in Dark Souls (2011), right: Godskin Duo in Elden Ring (2022).

Miyazaki Hidetaka entered From Software in 2004 and in the beginning worked as a programmer of game AI, which controls video games in various ways. In the same year the AI researcher Miyake Youichiro also joined From Software, and I studied a little about game AI from his book after playing "Elden Ring." I found that he also worked on "Demon's Souls," in which enemies with merciless game AI were arranged, and I had to conquer them to reach the demon at the end of every dungeon. Every time I died, even in the final area with the boss fight, I had to restart from the beginning, with all enemies reviving. That requires a lot of trial and error, and that was the beginning of today's soul-like video games. In the book by the game AI researcher who had created my tense and almost traumatizing childhood experiences, I found that very sophisticated techniques have been developed to force players into trial and error. They are sophisticated even to the point of controlling players on an emotional level. Even though I am more familiar than average with both video games and AI, it was not until this year that I paid attention to this field. After technical breakthroughs made mainly in Western countries, the video game industry showed rapid progress and is now a huge entertainment industry, whose scale is bigger than those of movies and music combined. The recent news that Facebook changed its name to Meta and that Microsoft announced it would buy Activision Blizzard was also sensational. However, media coverage of those events would just give you the impression that those giant tech companies are making use of new virtual media such as the metaverse or new subscription services. At least I suspect these are just limited aspects of investments in the video game industry.

The book on game AI also made me rethink AI technologies, because I am currently writing an article series on reinforcement learning (RL) as a kind of study note. RL is a type of training of an AI agent through trial-and-error-like processes. Rather than a labeled dataset, RL needs an environment. Such an environment receives an action from an agent and returns the consequent state and the next reward. From the viewpoint of the agent, it gives an action and gets the consequent next state and a corresponding reward, which looks like playing a video game. RL mainly considers a simplified version of video-game-like environments called Markov decision processes (MDPs), and in an MDP at time step t an RL agent in state S_t takes an action A_t and gets the next state S_{t+1} and a corresponding reward R_{t+1}. An MDP is often displayed as a graph at the left side below or the graphical model at the right side.
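
To make this interaction loop concrete, here is a minimal Python sketch using the Gymnasium API with a random policy; CartPole is only a stand-in environment, and a real RL agent would of course learn from the observed transitions instead of acting randomly.

# Minimal sketch of the MDP loop described above, using the Gymnasium API.
# The policy is random; a learning agent would update itself from (S_t, A_t, R_{t+1}).
import gymnasium as gym

env = gym.make("CartPole-v1")
state, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                              # A_t: a random action
    state, reward, terminated, truncated, info = env.step(action)   # S_{t+1}, R_{t+1}
    total_reward += reward
    done = terminated or truncated

env.close()
print("Episode return:", total_reward)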

Compared to a normal labeled dataset used for other machine learning, such an environment is hard to prepare. The video game industry has been a successful manufacturer of such environments, and as a matter of fact video games from Atari or the Nintendo Entertainment System (NES) are used as benchmarks in theoretical papers on RL. Such video games might be too primitive for considering practical uses, but research on RL is little by little tackling more and more complicated video games or simulations. But I am also sure that creating AI that plays video games better than us is not the ultimate goal. The situation seems more like they are cultivating a form of more general intelligence inside computer simulations which is also effective in the real world. Someday, experiences or intelligence grown in such virtual reality might be carried over to our real world.

Testing systems in simulations has been a fascinating idea, and that is also true of AI research. As I mentioned, video games are frequently used to evaluate RL performance, and there are some tools for making RL environments with modern video game engines. Providing a variety of such sophisticated computer simulations will be indispensable for research on AI. RL models need to be trained in simulations before being applied on physical devices, because most real machines would not endure the numerous trials and errors that RL often requires. And I believe the video game industry has the potential to develop such experimental fields of AI, fueled by commercial success in entertainment. I think the idea of testing systems or training AI in simulations is getting a bit more realistic due to recent developments in transfer learning.

Transfer learning is a subfield of machine learning which applies intelligence or experiences accumulated in some datasets or tasks to other datasets or tasks. This is not only applicable to RL but also to more general machine learning tasks like regression or classification. Or rather, it is said that transfer learning in general machine learning will show greater progress at a commercial level than RL for the time being. And transfer learning techniques like using pre-trained CNNs or BERT are already attracting a lot of attention. But I would say this is only about a limited type of transfer learning. According to Matsui Kota in the RIKEN AIP Data Driven Biomedical Science Team, transfer learning has progressed rapidly since the advent of deep learning, but many types of tasks and approaches are scattered across the field of transfer learning. As he says, the term transfer learning should be used more carefully. I would like to say that the type of transfer learning discussed these days is a family of approaches for tackling the lack of labels. At the same time, some current research on transfer learning is also showing the possibility that experiences or intelligence gained in computer simulations are transferable to the real world. But I think we need to wait for more progress in RL before such things are enabled.

In this article I would like to explain how video games or computer simulations can provide experiences to the real world in two ways. I am first going to briefly explain how the video game industry has been building game AI to provide players with tense experiences. Next I will explain how RL has become a promising technique to beat such games, which were originally invented to moderately harass human players. And in the end, I am going to briefly introduce ideas of transfer learning applicable to video games or computer simulations. What I can cover in this article is very limited given how huge these fields and industries are. But I hope you will see the video game industry and transfer learning in different ways after reading this article, and that might give you some hints about how those industries interact with each other in the future. Also, please keep in mind that I am not going to talk much about growing video game markets, computer graphics, or the metaverse. Here I focus on aspects of interweaving knowledge and experiences generated in simulated or real physical worlds.

2, Game AI

The fact that "Dark Souls" was selected as the best game ever at least implies that the current video game industry values experiences of discovery and accomplishment while playing, rather than cinematic and realistic computer graphics or iconic, world-famous characters. That is a kind of return to the origins of video games. Video games used to be just hard, because the more easily players failed, the more money they would drop into arcade machines. But I guess this aspect of video games tends to be missed when you talk about video games as a video game fan. What you see in advertisements for video games is more about beautiful graphics, a new world, the characters there, and new gadgets. And it has actually been said that the quality of computer graphics correlates strongly with video game sales. In the third article of my series on recurrent neural networks (RNN), I explained how the video game industry laid a foundation for the third AI boom during the second AI winter in the 1990s. To be more concrete, graphics cards developed rapidly to realize more photorealistic graphics in PC games, and the graphics card used in the Xbox was one of the first programmable GPUs for commercial use. But video games actually developed side by side with computer science outside of graphics as well. Thus in this section I am going to show how video games have developed by focusing on game AI, which creates intelligence in video games in several ways. And my explanation of game AI is going to be a rough introduction to the huge and excellent educational works by Miyake Youichiro.

Playing video games is made up of decision making, and such decisions are made in reaction to game AI. In other words, the display is the input into your eyes or optic nerves, and the sequential decisions, that is, how you move your fingers, are the outputs. The complexity of the experience, namely the difficulty of a video game, highly depends on game AI. Game AI is mainly used to design enemies in video games to hunt down players. Ideally game AI has to be both rational and human. Rational game AI implemented in enemies frustrates or sometimes drives users to despair by ruining their efforts to attack, to dodge attacks, or to use items. At the same time enemies have to retain some room for irrationality, that is, they have to be imperfect. If enemies could perfectly counter players' efforts by instantly recognizing their commands, such video games would be theoretically impossible to beat. Trying to defeat such enemies is nothing but frustrating. Ideal enemies let down their guard and leave some openings for attacking and conquering them. Sophisticated game AI is indispensable for making grownups all over the world childishly emotional while playing video games.

These behaviors of game AI are mainly functions of character AI, which is a part of game AI. In order to explain game AI, I also have to explain a more general idea of AI, which is not the one often called "AI" these days. Artificial intelligence (AI) is, in short, a family of technologies for creating intelligence with computers. And AI can be divided into two types, symbolism AI and connectionism AI. Roughly speaking, the former is manual and the latter is automatic. Symbolism AI is described with a lot of rules, mainly "if" or "else" statements in code. For example, very simply, "If the score is greater than 5, the speed of the enemy is 10." Or rather, many people just call it "programming."
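
To make the idea concrete, here is a toy Python sketch of such rule-based, symbolic enemy logic; every state, threshold and behavior name in it is invented purely for illustration.

# Toy illustration of symbolism AI: enemy behavior written entirely as hand-crafted rules.
# All thresholds and behavior names are invented for illustration.
def enemy_behavior(distance_to_player: float, enemy_health: float, player_score: int) -> str:
    if enemy_health < 20:
        return "retreat"            # flee when badly hurt
    elif distance_to_player < 2.0:
        return "melee_attack"       # strike when the player is in reach
    elif distance_to_player < 10.0:
        return "chase"              # hunt the player down at medium range
    elif player_score > 5:
        return "patrol_fast"        # e.g. "if the score is greater than 5, the speed is 10"
    else:
        return "patrol"

print(enemy_behavior(distance_to_player=8.0, enemy_health=80.0, player_score=3))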

*Note that in contexts of RL, “game AI” often means AI which plays video games or board games. But “game AI” in video games is a more comprehensive idea that orchestrates the whole game.

This meme describes symbolism AI well.

What people usually call “AI” in this third AI boom is the latter, connectionism AI. Connectionism AI basically means neural networks, which are said to be inspired by the connections of neurons. But the more you study neural networks, the more you see such AI simply as “functions capable of universal approximation based on data.” That means a function f, which you would have learned in school in forms such as y = f(x) = ax + b, is replaced with a complicated black box, and such a black box f is learned automatically from many combinations of (x, y). Such black boxes are called neural networks, and the combinations of (x, y) are called datasets. Connectionism AI might sound more ideal, but in practice it would be hard to design characters in games based on such training with datasets.
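
As a toy illustration of this "black box learned from (x, y) pairs" idea, here is a minimal NumPy sketch that fits a tiny neural network to samples of sin(x). The network size, learning rate, and target function are arbitrary choices for illustration, not anything a game studio would actually ship.

import numpy as np

# A toy "connectionism" example: a small neural network learns a function
# purely from (x, y) pairs instead of hand-written rules.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x)                      # the unknown "black box" we want to approximate

# one hidden layer with tanh activation
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(10000):
    h = np.tanh(x @ W1 + b1)       # forward pass
    pred = h @ W2 + b2
    err = pred - y                 # gradient of the squared error (up to a constant factor)
    # backward pass (plain full-batch gradient descent)
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h**2)
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test = np.array([[1.0]])
# the prediction should be reasonably close to sin(1.0) after training
print(np.tanh(test @ W1 + b1) @ W2 + b2, np.sin(test))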

*Connectionism, or deep learning, is of course also programming. But in deep learning we largely depend on libraries, and a lot of parameters of AI models are updated automatically as long as we properly set up datasets. In that sense, I would say connectionism is more automatic. As I am going to explain, game AI largely depends on symbolism AI, namely manual adjustment of fewer parameters, but such symbolism AI can behave much more like a human than the so-called “AI” of these days when you play video games.

Digital game AI today is an application of both types of AI in video games. It started mainly with symbolism AI until around 2010, and as video games got more and more complicated, connectionism AI was also introduced into game AI. Video game AI can be classified mainly into navigation AI, character AI, meta AI, procedural AI, and AI outside video games. The figure below shows the relations between general AI and the types of game AI.

Put very simply, video game AI traced a history like this: the earliest video games were mainly composed of navigation AI providing levels, maps, and objects which move deterministically based on programming. Players used to just move around such navigation AI. Sooner or later, enemies gained a certain “intelligence” and learned to chase or hunt down players, and that was the advent of character AI. But of course such “intelligence” was nothing but manually written programs. After rapid progress of video games and their industry, meta AI was invented to control the difficulty of video games, thereby controlling players’ emotions. Procedural AI automatically generates contents of video games, so video games these days are becoming more and more massive. And as modern video games are too huge and complicated to debug or maintain manually, AI technologies including deep learning are used for that as well. The figure below is a chronicle of the development of video games and the AI technologies covered in this article. Let’s see a brief history of video games and game AI by taking a closer look at each type of game AI a little more precisely.

Navigation AI

Navigation AI is the most basic type of game AI, and it allows character AI to recognize the world inside video games. Even though I think character AI, which enables characters in video games to behave like humans, would be the most famous type of game AI, navigation AI is said to have an older history. One important function of navigation AI is to control objects in video games, such as lifts and item blocks, including attacks by such objects. The next aspect of navigation AI is that it provides character AI with recognition of the world. Unlike humans, who can almost instantly and roughly recognize their circumstances, character AI cannot do that as we do. Even if you feel as if the character you are controlling is moving around mountains, cities, or battlefields, sometimes escaping from attacks by other AI, for character AI that is just moving on certain graphs. The figure below shows some examples of world representations adopted in popular video games. There is a variety of such representations, and please let me skip explaining their details. The important point is that the relatively wide and global recognition of the world by characters in video games depends on how navigation AI is designed.

The next important feature of navigation AI is pathfinding. If you have learned engineering or programming, you should already be familiar with pathfinding algorithms. They have been known for a long time, but it was not until “Counter-Strike” in 2000 that the techniques were implemented at a satisfying level for navigating characters in a 3D world. Improvements in pathfinding in video games released game AI from fixed places and enabled it to be more dynamic.
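
To give an idea of what such pathfinding looks like, here is a minimal A* sketch on a small, made-up grid. Real navigation meshes are irregular polygon graphs rather than grids, but the search principle is the same.

import heapq

# A minimal A* sketch on a small hypothetical grid; '#' marks walls.
GRID = [
    "S..#....",
    ".#.#.##.",
    ".#...#..",
    ".####.#.",
    "......#G",
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def a_star(start, goal):
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(heuristic(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for nxt in neighbors(pos):
            heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None

print(a_star((0, 0), (4, 7)))   # list of grid cells from 'S' to 'G'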

*According to Miyake Youichiro, the advent of pathfinding in video games released character AI from staying in a narrow space and enabled much more dynamic and human-like movements. And that changed game AI from just static objects into more intelligent entities.

Navigation meshes in “Counter-Strike (2000).” Thanks to these meshes, a continuous 3D world can be processed as discrete nodes of a graph.

Character AI

Character AI is what you would first imagine from the term AI. It controls characters’ actions in video games. The differences between navigation AI and character AI can be ambiguous. It is said that Pac-Man features one of the very first character AIs. Compared to the aliens in Space Invaders, which deterministically moved horizontally, the enemies in Pac-Man chase the player, and this is the most straightforward difference between navigation AI and character AI.

Character AI is a bunch of sophisticated planning algorithms, so I can introduce only a limited part of it, just like with navigation AI. In this article I would like to take the example of “F.E.A.R.,” released in 2005. It is said that goal-oriented action planning (GOAP), adopted in this video game, was a breakthrough in character AI. GOAP is classified as backward planning, and where there is backward planning, there is also forward planning. Using a game tree is an example of forward planning. The figure below is an example of a game tree of tic-tac-toe. There are only 9 possible actions at maximum at each phase, so the number of possible states is relatively limited.

But with more action options, as in most video games, forward planning has to deal with much larger sets of future action combinations. GOAP enables realistic behaviors of character AI with the heuristic idea of planning backward. To borrow Miyake Youichiro’s expression, GOAP processes actions like sticky notes. On each sticky note, there is a combination of symbols like “whether a target is dead,” “whether a weapon is armed,” or “whether the weapon is loaded.” A sticky note composed of such symbols forms an action, and each action comprises a prerequisite, an action, and an effect. And the behavior of character AI is produced by planning, like pasting the sticky notes together.

More practically, sticky notes, namely actions, are stored in action pools. For a decision, as displayed in the left side of the figure below, actions are connected as a chain. An action for the goal is set first, and another action can be connected to the prerequisite of the goal via its effect. In the same way, earlier actions are selected until the initial state is reached. In the chaining example below, the goal is “kSymbol_TargetIsDead,” and actions are chained via “kSymbol_TargetIsDead,” “kSymbol_WeaponLoaded,” “kSymbol_WeaponArmed,” and “None.” There are several combinations of actions to reach a certain goal, so more practically each action has a cost, and the most suitable behavior of character AI is chosen by a pathfinding algorithm on a graph like the one in the right side of the figure below.
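
Below is a minimal, simplified sketch of this backward-chaining idea. The action names, symbols, and costs are hypothetical, and the cost-based pathfinding over competing plans is omitted for brevity.

# A minimal sketch of GOAP-style backward chaining. The symbols mirror the
# "sticky note" example above; a real planner also runs pathfinding over
# action costs, which is omitted here.

ACTIONS = [
    # (name, prerequisite, effect, cost)
    ("Attack",     {"kSymbol_WeaponLoaded": True}, {"kSymbol_TargetIsDead": True}, 1),
    ("LoadWeapon", {"kSymbol_WeaponArmed": True},  {"kSymbol_WeaponLoaded": True}, 1),
    ("DrawWeapon", {},                             {"kSymbol_WeaponArmed": True},  1),
]

def plan_backward(goal, state, depth=0):
    """Return a list of action names that achieves `goal` starting from `state`."""
    if all(state.get(k) == v for k, v in goal.items()):
        return []                                   # goal already satisfied
    if depth > 10:
        return None                                 # give up on very deep chains
    for name, prerequisite, effect, _cost in ACTIONS:
        # pick an action whose effect satisfies an unsatisfied goal symbol
        if any(effect.get(k) == v for k, v in goal.items() if state.get(k) != v):
            sub = plan_backward(prerequisite, state, depth + 1)   # satisfy its prerequisite first
            if sub is not None:
                return sub + [name]
    return None

initial_state = {}                                   # nothing armed, loaded, or dead yet
goal = {"kSymbol_TargetIsDead": True}
print(plan_backward(goal, initial_state))            # ['DrawWeapon', 'LoadWeapon', 'Attack']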

Even though many highly intelligent behaviors of character AI are implemented as backward planning as I explained, planning forward can be very effective in some situations. Board game AI is a good example. A searching algorithm named Monte Carlo tree search is said to be one of the breakthroughs in board game AI. The algorithm randomly plays a game until the end, which is called a playout. Numerous playouts enable estimating the probability of winning. Monte Carlo tree search also enables more efficient searches of game trees.
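
As a small illustration of the playout idea, the sketch below estimates how promising a tic-tac-toe move is by finishing the game randomly many times. A full Monte Carlo tree search additionally builds a search tree and balances exploration and exploitation, which is omitted here.

import random

# Estimate how promising a tic-tac-toe move is with random playouts.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    board = board[:]
    while True:
        if winner(board) or " " not in board:
            return winner(board)
        move = random.choice([i for i, v in enumerate(board) if v == " "])
        board[move] = player
        player = "O" if player == "X" else "X"

def estimate_move(board, move, player="X", n=2000):
    board = board[:]
    board[move] = player
    wins = sum(random_playout(board, "O") == player for _ in range(n))
    return wins / n    # fraction of random playouts won after this move

empty = [" "] * 9
# the center opening (index 4) should score higher than a corner (0) or edge (1)
print(estimate_move(empty, 4), estimate_move(empty, 0), estimate_move(empty, 1))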

Meta AI

Meta AI is a type of AI that controls a whole video game to enhance players’ experiences. To be more concrete, it adjusts the difficulty of a video game, for example by dynamically arranging enemies. I think the differences between meta AI and navigation AI or character AI can also be ambiguous. As I explained, the earliest video games were composed mainly of navigation AI, or rather just objects. Even if there are aliens or monsters, they can be just part of the interactive objects as long as they move deterministically. I said character AI gave some diversity to their behaviors, but how challenging a video game is depends on dynamic arrangements of such objects or enemies. Some classical video games like “Xevious” did in fact implement such adjustments of gameplay difficulty. That was the advent of meta AI, but I think it was not so much distinguished from other types of AI, and I guess meta AI was unconsciously treated as just another part of programming.

It is said that a turning point of modern meta AI was the shooting game “Left 4 Dead,” released in 2008, where zombies are dynamically arranged. Just as in many masterpiece thriller films, realistic and tense terror is made by combinations of intensity and relaxation. Tons of monsters or zombies coming up one after another and just being shot down look stupid, almost like a comedy. Analyzing the success of “Counter-Strike,” the developers realized that users liked rhythms of intensity and relaxation, so they implemented that explicitly in “Left 4 Dead.” The graphs below concisely show how meta AI works in the video game. When the survivor intensity, namely the players’ intensity, is low, the meta AI arranges some enemies. Survivor intensity increases as players fight zombies, and then the meta AI places fewer enemies so that players can relax. While players are relatively relaxed, the desired population of enemies increases, and when those enemies actually show up in the game, the phase of intensity comes again.
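
The sketch below is a minimal, hypothetical "director"-style loop in the spirit of this intensity/relaxation cycle. All thresholds, spawn counts, and decay rates are invented for illustration and have nothing to do with the actual implementation of "Left 4 Dead."

# A minimal sketch of a "director"-style meta AI loop. All thresholds,
# spawn counts, and decay rates are hypothetical.

def update_director(intensity, in_combat):
    """Return the updated player intensity and how many enemies to spawn."""
    # intensity rises while fighting and decays while relaxing
    intensity = min(1.0, intensity + 0.15) if in_combat else max(0.0, intensity - 0.05)
    if intensity < 0.3:
        spawn = 5      # player is relaxed: build up the next wave
    elif intensity > 0.8:
        spawn = 0      # player is overwhelmed: back off and let them breathe
    else:
        spawn = 1      # keep a moderate trickle of enemies
    return intensity, spawn

intensity = 0.0
for tick, in_combat in enumerate([False, True, True, True, True, True, False, False]):
    intensity, spawn = update_director(intensity, in_combat)
    print(f"tick {tick}: intensity={intensity:.2f}, spawn={spawn}")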

*Souls series video games do not seem to use meta AI so much. Characters in the games are rearranged in more or less the same way every time players fail. Souls-like games place great value on the experience of players finding solutions by themselves, which shows that manual but very careful arrangements of enemies and interactive objects can also be very effective.

Meta AI can also be used to make video games more addictive using data analysis. Recent social network games record logs of game plays. Therefore, if an operating company observes a trend that more users unsubscribe when they get fewer rewards in certain online events, it can adjust the chances of getting “rare” items.

Procedural AI and AI outside video games

How clearly you can picture what I am going to explain in this subsection depends on how recently you have played video games. If your memories of playing video games stop with the good old days of side-scrolling games like Super Mario Brothers, you should first look up some videos of open world games being played. Open world means the use of a virtual world in which players can move and behave with a high degree of freedom. The term open world is often used as opposed to linear games, where players have to progress through the game in the order it is given. Once you are immersed in the photorealistic computer graphics worlds of such open world games, you will soon understand why the metaverse is attracting attention these days. Open world games like “Fallout 4” are astonishing in that you can talk to almost everyone in them. Just as “Elden Ring” changed the former Souls series into an open world game, it seems providing open world games is one way to stay competitive in the video game industry. And such massive worlds can also be made with the help of procedural AI. Procedural AI can be seen as a part of meta AI, and it generates components of games such as buildings, roads, plants, and even stories. Thanks to procedural AI, video game companies with relatively small domestic markets, like those in Poland, can make a top-level open world game such as “The Witcher 3: Wild Hunt.”

An example of a procedural AI technique adopted in “The Witcher 3: Wild Hunt” for automatically creating its massive open world.

Creating a massive world also means a need for tons of debugging and quality assurance (QA). Combining the work of programmers, designers, and procedural AI causes a lot of unexpected troubles when the game is actually played. AI outside the game can be used to find these problems for quality assurance. Debugging and QA have basically been done manually, and especially when it comes to QA, video game manufacturers have had to employ a lot of gamers just to play prototypes of their products. However, as video games get bigger and bigger, the products are no longer something that can be maintained manually. If you have played even one open world game, that is easy to imagine, so automatic QA will remain indispensable in the video game industry. For example, the open world game “Horizon Zero Dawn” is a video game where a player can very freely move around a massive world like a jungle. The QA team of this video game prepared bug maps so that they could visualize errors in the game. They also adopted a system named “Apollo-Autonomous Automated Autobots” to let game AI automatically play the video game and record bugs.

As most video games on consoles or PCs are connected to the internet these days, these bugs can be fixed soon with updates. In addition, logs of how players played video games or how they failed can be stored to adjust the difficulty of video games or to train game AI. As you can see, video games are not something manufacturers just release. They are now something developed interactively between users and developers, and players’ data is exploited just like your browsing history on the Internet.

I have briefly explained the AI used in video games across four topics. In the next two sections, I am going to explain how board games and video games can be used for AI research.

3, Reinforcement learning: we might be a sort of well-made game AI model

Machine learning, especially RL, is replacing humans with computers, albeit with incredible computation resources. The invention of game AI, in this context including computers playing board games, has marked milestones in the development of AI for decades. As Western countries had been leading research on AI, defeating humans in chess, a symbol of intelligence, had been one of the goals. Even Alan Turing, one of the fathers of computers, programmed game AI to play chess with one of the earliest calculators. Searching algorithms with game trees were mainly studied in the beginning. Game trees are a type of tree graph that shows how games proceed, expressing future possibilities with diverging tree structures. And searching algorithms are often used on tree graphs to ignore future steps which are not likely to be effective, which often looks like cutting off branches of the trees. As a matter of fact, chess was so “simple” that searching algorithms alone were enough to defeat Garry Kasparov, the world chess champion at the time, in 1997. That is, growing trees and trimming them was enough for the “simplicity” of chess, as long as a supercomputer of IBM was available. After that, a computer defeated one of the top players of shogi, a Japanese version of chess, in 2013. And remarkably, in 2016 AlphaGo of DeepMind under Google defeated the world go champion. Game AI has been gradually mastering board games in order of increasing search space size.

We can say that combinations of techniques which developed in different streams converged into game AI today, as I display in the figure below. In AlphaGo, and maybe in general game AI, neural networks enable “intuition” about phases of board games, searching algorithms enable “foreseeing,” and RL provides “experience.” As almost no one can defeat computers in board games anymore, the next step of game AI is how to conquer other video games. Since the progress of convolutional neural networks (CNN) in this third AI boom, computers have gotten “eyes” like ours, and the invention of ResNet in 2015 is remarkable here. Thus we can now use displays of video games as inputs to neural networks. And the combination of reinforcement learning and neural networks such as CNNs is called deep reinforcement learning. Since the advent of deep reinforcement learning, many people have been trying to apply it to various video games, and they show impressive results. But in general such success is limited to bird’s-eye-view games. Even if some of the research results can be competitive with or outperform human players, even in first-person shooting games, they require too many computational resources and heuristic techniques, and they usually take too much time to reach that level.
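
To show the "experience" part in its simplest form, here is a minimal tabular Q-learning sketch on a toy corridor game. Deep reinforcement learning replaces the Q-table with a neural network, for example a CNN over raw pixels, but the experience-driven update rule is essentially the same. The toy environment and all hyperparameters are invented for illustration.

import random

# A minimal tabular Q-learning sketch on a toy "corridor" game: the agent starts
# at cell 0 and gets a reward only when it reaches the rightmost cell.
N_STATES, ACTIONS = 6, (-1, +1)      # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current Q estimates, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(N_STATES - 1, max(0, s + a))
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update from one step of experience (s, a, r, s')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

# learned greedy policy (should prefer moving right, i.e. +1, in every state)
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})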

*Even though CNNs are mainly used as the “eyes” of computers, they are also used to process a phase of a board game. That means each phase of a board game is processed like an arrangement of pixels of an image. This is what I mean by the “intuition” of deep learning. Just as neural networks can recognize objects, depending on the training method they can recognize board positions at a high level.

Now I would like you to think about what “smartness” means. Competency in board games tends to correlate with mathematical skills, and in many cases people proficient in mathematics are indeed also competent in board games. Even though AI can defeat incredibly smart top board game players, to the best of my knowledge game AI has yet to play complicated video games with more realistic computer graphics. As I explained, the behaviors of character AI are in practice implemented on simpler graphs, and the tactics taken on such graphs will not be as complicated as the game trees of competitive board games. The idea of game AI playing video games is itself not new, and it is also used in debugging video games. Thus the difficulty of computers playing video games comes more from how to associate what they see on displays with more long-term and more abstract planning. Meanwhile, kids flexibly switch to other video games and play them competently in no time. I would say the difference is due to the frames of the tasks. A frame roughly means a domain or a range which is related to a task. When you play a board game, its frame is relatively small because everything you can do is limited by the rules of the game, which can be expressed as a simple data structure. But playing video games has a wider frame in that you have to recognize only the parts important for playing from constantly changing displays, namely sequences of RGB images. And in the real world, even a trivial action like putting a glass on a table is selected from countless frames, like what your room looks like, how soft the floor is, or what the temperature is. Human brains are great in that they can pick up only the necessary frames instantly.

As many researchers have already realized, making smaller models with fewer resources which can learn a wider variety of tasks is going to be needed, and it is a main topic these days, not only in RL but also in other areas of machine learning. And to be honest, I am skeptical about the industrial or academic benefits of inventing specialized AI models for beating human players with gigantic computation resources. That would be sensational and might be effective for gathering attention and funding. But as many AI researchers have realized, inventing a more general intelligence which would adjust more flexibly to various tasks is more important. Among the various research topics on this problem, I am going to pick up transfer learning in the next section, but in a more futuristic and dreamy sense.

4, Transfer learning and games for AI

In an event with some young shogi players, to the question “What would you like to request from a god?”, Fujii Sota, the youngest top shogi player ever, answered, “If he exists, I would like to ask him to play a game with me.” People there were stunned by the answer. The young genius, contrary to his sleepy face, has an ambition which only the most intrepid figures in mythology would have had. But instead of playing with gods, he is training himself with shogi game AI. His hobby is assembling computers with high-end CPUs, whose performance is monstrous for personal home use. In my opinion, such a situation comes from the fact that humans are already a kind of well-made machine learning model and that highly intelligent games for humans have very limited frames for computers.

*It seems it is not only computers that need huge energy consumption to play board games. Japanese media often show how gorgeous and high-calorie shogi players’ meals are during breaks. And more often than not, how fancy their feasts are is the only thing most normal spectators like me in front of the TV can understand, despite the highly intellectual tactics being played out over the wooden boards.

As I have explained, the video game industry has been providing complicated simulated worlds with a sophisticated ensemble of game AI, built in both symbolism and connectionism ways. And such simulations, initially invented to hunt down players, are these days being conquered especially by RL models, a trend that showed conspicuous progress after the advent of deep learning, that is, after computers got “eyes.” The next problem is how to transfer the intelligence or experiences cultivated in such simulations to the real world. As far as I know, only humans can successfully train themselves with computer simulations today, but more practically it is desirable to transfer experiences with wider frames to more inflexible entities like robots. Such technologies would be ideal especially for RL because physical devices cannot make numerous trials and errors in the real world; they should be trained in advance in computer simulations. And transfer learning could be one way to take advantage of experiences in computer simulations in the real world. But before talking about such transfer learning, we need to be careful about the term “transfer learning.” Transfer learning is a family of machine learning technologies that makes use of knowledge learned on one dataset, which is usually relatively huge, for another task with another dataset. Even though I have been emphasizing transferring experiences from computer simulations, transfer learning is a more general idea applicable to more general use cases, also outside computer simulations. Or rather, transfer learning is attracting a lot of attention as a promising technique for tackling the lack of data in machine learning in general. Another problem is that even though transfer learning has been developing rapidly recently, various research topics are scattered across the field called “transfer learning,” and arranging these topics would need extra articles of their own. Thus in the rest of this article, I would like to focus especially on uses of video games or computer simulations in transfer learning. When it comes to already popular and practical transfer learning techniques like fine-tuning a pre-trained backbone CNN or BERT, I am planning to cover them with a more practical introduction in one of my upcoming articles. Thus in this article, after simply introducing the ideas of domains and transfer learning, I am going to explain domain adaptation and domain randomization.

Domain and transfer learning

There is a stricter definition of a domain in machine learning, but all you have to know is that it means, in short, a type of dataset used for a machine learning task. And different domains have a domain shift, which in short means the differences between the domains. A text dataset and an image dataset have a domain shift. An image dataset of real objects and one of cartoon images also have a domain shift, though a smaller one. Even differences in lighting or camera angles can cause a domain shift. In general, even if a machine learning model is successful at tasks in one domain, even a domain shift which is trivial to humans degrades the performance of the model. In other words, intelligence learned in one domain is not straightforwardly applicable to another domain the way it is for humans. That is, even if you can recognize both a real car and a cartoon car as a car, that is not necessarily true of machine learning models. As a family of techniques for tackling this problem, transfer learning makes use of knowledge in a source domain (the blue dots below) and applies that knowledge to a target domain. Usually, a source domain is assumed to be large and labeled, while a target domain is assumed to be relatively small or even unlabeled. And the tasks in the source and target domains can be different. For example, CNN models trained on ImageNet classification can be effectively used for object detection. Or BERT is trained on a huge corpus in a self-supervised way, but it is applicable to a variety of tasks in natural language processing (NLP).

*To people in computer vision fields, an explanation that BERT is an NLP version of a pre-trained CNN would make the most sense. Just as a pre-trained CNN maps an image, an arrangement of RGB pixel values, to a vector representing more of the “meaning” of the image, BERT maps a text, a sequence of one-hot encodings, to a vector or a sequence of vectors in a semantic space useful for NLP.

Transfer learning is a very popular topic, and it is hard to arrange and explain the types of existing techniques. I think that is because many people are tackling more or less similar problems with slightly different approaches. For now I would like you to keep in mind that there are roughly three points to consider in transfer learning:

  1. What to transfer
  2. When to transfer
  3. How to transfer

The answer to the second point above, “When to transfer,” is simply “when domains are more or less alike.” Transfer learning assumes similarities between target and source domains to some extent. “How to transfer” is very task-specific, so this is not something I can explain briefly here. I think the first point, “what to transfer,” is the most important for now to avoid confusion about what “transfer learning” means. “What to transfer” in transfer learning is classified into the three types below.

  • Instance transfer (transferring datasets themselves)
  • Feature transfer (transferring extracted features)
  • Parameter transfer (transferring pre-trained models)

In fact, when people talk about already practical transfer learning techniques like using a pre-trained CNN or BERT, they usually refer only to parameter transfer above. Please let me skip introducing it in this article; I am going to focus only on techniques related to video games.

*I would like to give more practical introduction on for example BERT in one of my upcoming articles.

Domain adaptation or randomization

I first got interested in the relation between video games and AI research because I was studying domain adaptation, which tackles the decline in machine learning performance caused by a domain shift. Domain adaptation is sometimes used as a synonym for transfer learning. But whereas general transfer learning also assumes different tasks in different domains, domain adaptation assumes the same task. Thus I would say domain adaptation is a subfield of transfer learning. There are several techniques for domain adaptation, and in this article I would like to take feature alignment as an example of a frequently used approach. Input datasets have a certain domain shift, like the blue and circled dots in the figure below. This domain shift cannot be changed unless the datasets themselves are directly converted. Feature alignment makes the domain shift smaller in a feature space, after the data has been processed by the feature extractor. The features, expressed as square dots in the figure, are passed to task-specific networks just as in normal machine learning. With sufficient labels in the source domain and with fewer or no labels in the target one, the task-specific networks are trained with supervision. On the other hand, the features are also passed to the domain discriminator, and the discriminator predicts which domain each feature comes from. The domain discriminator is normally trained with supervision by a classification loss, but the supervision signal is reversed when it trains the feature extractor. Due to the reversed supervision, the feature extractor learns to mix up features, because that makes it harder to distinguish the source and target domains. In this way, the feature extractor learns to extract domain-invariant features, that is, more general features both domains have in common.
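
The sketch below shows this feature-alignment idea with a gradient reversal layer in PyTorch, in the spirit of DANN-style domain adaptation. The layer sizes, batch contents, and the fixed lambda are hypothetical; a real setup would use a CNN feature extractor, real source/target data, and an optimizer loop.

import torch
from torch import nn

# A minimal sketch of feature alignment via a gradient reversal layer.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # pass the gradient back reversed: the extractor is trained to *confuse*
        # the domain discriminator
        return -ctx.lambda_ * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
task_head         = nn.Sequential(nn.Linear(16, 10))            # e.g. a 10-class classifier
domain_disc       = nn.Sequential(nn.Linear(16, 2))             # source vs. target

x_src, y_src = torch.randn(8, 32), torch.randint(0, 10, (8,))   # labeled source batch (random stand-in)
x_tgt        = torch.randn(8, 32)                               # unlabeled target batch (random stand-in)

f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)
task_loss = nn.functional.cross_entropy(task_head(f_src), y_src)

features = torch.cat([f_src, f_tgt])
domains  = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
domain_loss = nn.functional.cross_entropy(domain_disc(GradReverse.apply(features, 1.0)), domains)

(task_loss + domain_loss).backward()   # an optimizer step would follow in a real training loop
print(task_loss.item(), domain_loss.item())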

*The feature extractor and the domain discriminator are in a sense composing a generative adversarial network (GAN), which is often used in data generation. To know more about GANs, you could check for example this article.

One of the motivations behind domain adaptation is that it enables training on AI tasks with synthetic datasets made, for example, with computer graphics, because such data is very easy to annotate with the labels necessary for machine learning tasks. In such cases, domain-invariant features like curves or silhouettes are expected to be learned. And learning computer vision tasks from the GTA5 dataset that are applicable to the Cityscapes dataset is counted as one of the challenging tasks in papers on domain adaptation. GTA of course stands for “Grand Theft Auto,” the open-world video game series. If this research continues to develop successfully, it would imply the possibility of teaching AI models to “see” with video games alone. Imagine that a baby first learns to play Grand Theft Auto 5 above all and learns what cars, roads, and pedestrians are. When you then bring the baby outside, even though they have not seen any real cars, they point to real cars and people and say “car” and “pedestrian,” rather than “mama” or “dada.”

In order to enable more effective domain adaptation, Cycle GAN is often used. Cycle GAN is a technique to map texture in one domain to another domain. The figure below is an example of applying Cycle GAN to the GTA5 dataset and the Cityscapes dataset, and by doing so, shiny views from a car in Los Santos can be converted into dark and depressing scenes in Germany in winter. This instance transfer is often used in research on domain adaptation.

Even if you mainly train depth estimation with data converted as above, the model can predict depth in the real-world domain without ground-truth depth data. In the figure below, A is the real target data, B is the target domain converted to look like the source domain, and C is the depth estimation on A.

Crowd counting is another field where making labeled datasets with video games is very effective. A mod for arbitrarily generating crowds has been released, and you can make labeled datasets like the one below.

*Introducing a GTA mod into research is hilarious. You first need to buy the PC version of Grand Theft Auto 5 and a gaming PC. And after finishing the first tutorial in the video game, you need to find a place to set up a camera, which looks like nothing but playing video games with public money.

The domain adaptation problems I mentioned are more a matter of how to let computers “see” the world through computer simulations. But the gap between simulated worlds and the real world does not exist only in visual aspects, as in computer vision. How robots or vehicles are parametrized in computers also has some gaps from the real world, so even if you replace only the observations with simulations, it would be hard to train AI. But surprisingly, some research projects have already succeeded in training robot arms only with computer simulations. An approach named domain randomization seems to be more or less successful in training robot arms only with computer simulations and applying the learned experience to the real world. Compared to domain adaptation, which aligns the source domain to the target domain, domain randomization is more about expanding the source domain by changing various parameters of the source domain. The target domain, namely robot arms in the real world, is in the end included in the expanded source domain. And such expansions are relatively easy with computer simulations.
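
Below is a minimal sketch of the domain randomization idea. The "simulator" parameters and their ranges are purely hypothetical, and the training call is only indicated as a comment; the point is simply that each episode samples a differently parametrized simulation.

import random

# Domain randomization sketch: instead of matching the simulation to the real
# world, randomize its physical parameters every episode so that the real world
# ends up as just one more sample of the training distribution.
def make_randomized_sim():
    return {
        "friction":     random.uniform(0.3, 1.2),
        "arm_mass":     random.uniform(0.8, 1.5),    # kg
        "sensor_noise": random.uniform(0.0, 0.05),
        "latency_ms":   random.uniform(0.0, 40.0),
    }

for episode in range(3):
    sim_params = make_randomized_sim()
    # train_policy_one_episode(policy, sim_params)   # hypothetical training call
    print(f"episode {episode}: {sim_params}")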

For example, the paper “Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience” proposes a technique to feed real-world feedback back into the simulation randomization, and this pipeline enables a robot arm to do real-world tasks within a few iterations of real-world training.

As the video shows, the idea of training a robot with computer simulations is becoming more realistic.

The future of games for AI

I have been emphasizing how useful video games are in AI research, but I am not sure how much the field will keep relying on video games as it currently does, especially in RL. Autonomous driving is a huge research field, and modern video games like Grand Theft Auto are already good driving simulations of urban areas. But other realistic simulators like CARLA have been developed independently of video games. And in the paper “Exploring the Limitations of Behavior Cloning for Autonomous Driving,” some limitations of training self-driving cars in such simulations are reported. Some companies like Waymo switched to recurrent neural networks (RNN) for self-driving cars. It is natural that fields like self-driving, where control errors can be fatal, are not so optimistic about adopting RL for now.

But at the same time, Microsoft bought Project Bonsai, which aims at applying RL to real-world tasks. Microsoft also has Project Malmo and AirSim, which respectively use Minecraft and Unreal Engine for AI research. The news that Microsoft bought Activision Blizzard was a sensation last year, and media interest was mainly about the metaverse or subscription services for video games. But Microsoft also bought ZeniMax Media, which is famous for open-world series like Fallout and Skyrim. Given that these are all under Microsoft, it seems the company has been keen on merging AI research and video game development.

As I briefly explained, video games can be expanded with procedural AI technologies. In the future, AI might be trained in video game worlds which are themselves augmented with another form of AI. Combinations of transfer learning and game AI might possibly become a family of self-supervising technologies, like an octopus growing by eating its own legs. At least the biggest advantage of the video game industry is that, even if the technologies themselves do not make immediate profits, research on them is fueled by the growing number of video game fans all over the world. This is my kind of sci-fi imagination of the world, though I am not sure which is more efficient: manually designing the controls of robots, or training AI in such indirect ways. And I prefer enhancing the physical world to the metaverse. People should learn to put down their controllers someday and enhance the real world. I wrote this article highly motivated by “Elden Ring.” Some readers might have gotten interested in the idea of transferring experiences in computer simulations to the real world. I am also going to write about transfer learning in general that is helpful in practice.

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward but as precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

AI Role Analysis in Cybersecurity Sector

Cybersecurity, as the name suggests, is the process of safeguarding networks and programs from digital attacks. In today’s times, the world runs on internet-connected systems that carry humongous amounts of highly sensitive data. Cyberthreats are on the rise, with unscrupulous hackers taking the entire industry by storm with their unethical practices. This not only calls for more stringent cybersecurity laws, but also means the vigilance policies of corporates, big and small, government as well as non-government, need to be revisited.

With such huge responsibility being placed on the cyber industry, more and more cybersecurity enthusiasts are showing keen interest in the industry and its practices. In order to further the cause of secure internet systems for all, the cybersecurity industry, unlike data science and other industries, has seen its workforce flex its muscles with every surge in cyber threats. AI in cybersecurity is still in its nascent stages of deployment, as humans are capable of more when assisted with the right set of tools.

Automatically detecting unknown workstations, servers, code repositories, and other hardware and software on the network are some of the tasks that could be easily managed by AI, whereas they used to be conducted manually by cybersecurity folks. This leaves room for cybersecurity officials to focus on more critical tasks that need their urgent attention. Artificial intelligence can definitely do the legwork of processing and analyzing data in order to help inform human decision-making.

AI in cybersecurity is a powerful security tool for businesses. It is rapidly gaining its due share of trust among businesses for scaling cybersecurity. Statista, in a recent post, reported that in 2019 approximately 83% of US-based organizations believed that without AI, their organization would fail to deal with cyberattacks. AI cybersecurity solutions can react to cybersecurity threats faster and with more accuracy than any human. They can also free up cybersecurity professionals to focus on more critical tasks in the organization.

CHALLENGES FACED BY AI IN CYBER SECURITY

As it is said, “It takes a thief to catch a thief.” AI being in its experimental stages, its cost could be an uninviting factor for many businesses. To counter the threats posed by cybercriminals, organizations ought to level up their internet security battle. Attacks backed by organized crime syndicates with the intention to dismantle online operations and damage the economy are the major threats this industry faces today. As long as AI is still mostly experimental and in its infancy, hackers will find it much easier to carry out speedier, more advanced attacks. New-age, automation-driven practices are sure to safeguard the crumbling internet security scenario.

AI IN CYBER SECURITY AS A BOON

There are several advantageous reasons to embrace AI in cybersecurity. Some notable pros are listed below:

  •  Ability to process large volumes of data
    AI automates the creation of ML algorithms that can detect a wide range of cybersecurity threats emerging from spam emails, malicious websites, or shared files.
  • Greater adaptability
    Artificial intelligence is easily adaptable to contemporary IT trends with the ever-changing dynamics of the data available to businesses across sectors.
  • Early detection of novel cybersecurity risks
    AI-powered cybersecurity solutions can eliminate or mitigate advanced hacking techniques to a far greater extent.
  • Offers complete, real-time cybersecurity solutions
    Due to AI’s adaptive quality, artificial intelligence-driven cyber solutions can help businesses eliminate the added expenses of IT security professionals.
  • Wards off spam, phishing, and redundant computing procedures
    AI easily identifies suspicious and malicious emails to alert and protect your enterprise.

AI IN CYBERSECURITY AS A BANE

Alongside the advantages listed above, AI-powered cybersecurity solutions present a few drawbacks and challenges, such as:

  • AI benefits hackers
    Hackers can easily sneak into the data networks that are rendered vulnerable to exploitation.
  • Breach of privacy
    Stealing log-in details of the users and using them to commit cybercrimes, are deemed sensitive issues to the privacy of an entire organization.
  • Higher cost for talents
    The cost of creating an efficient talent pool is very high as AI-based technologies are in the nascent stage.
  • More data, more problems
    Entrusting our sensitive data to a third-party enterprise may lead to privacy violations.

AI-HUMAN MERGER IS THE SOLUTION

AI professionals, backed by the best AI certifications in the world, assist corporations of all sizes in leveraging the maximum benefits of the AI skills they bring along, for the larger benefit of the organization. Cybersecurity teams and AI systems cannot work in isolation. This combination is a huge step forward to leveraging maximum benefits for secure cybersecurity applications in organizations. Hence, AI in cybersecurity is a much-coveted capability that will keep delivering in the long run.

6 Steps of Process Mining – Infographic

Many Process Mining projects mainly revolve around the selection and introduction of the right Process Mining tools. Relying on the right tool is of course an important aspect of a Process Mining project. Depending on whether the process analysis project is a one-time affair or daily process monitoring, different tools are pre-selected. Whether, for example, a BI system has already been established and whether a sophisticated authorization concept is required for the process analyses also play a role in the selection, as do many other factors.

Nevertheless, it should not be forgotten that process mining is not primarily a tool but an analysis method, in which the first step is the reconstruction of the processes from operational IT systems into a resulting process log (event log), and the second step is a (core) graph analysis to visualize the process flows with additional analysis/reporting elements. If this perspective on process mining is not lost sight of, companies can save a lot of costs because it allows them to concentrate on solution-oriented concepts.
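
As a minimal illustration of these two steps, the sketch below builds a tiny event log and counts the directly-follows relations between activities, which form the edges of a simple process graph. The column names and sample data are hypothetical.

import pandas as pd

# Step 1: a tiny event log as it might be reconstructed from an operational IT system.
event_log = pd.DataFrame({
    "case_id":   ["A", "A", "A", "B", "B", "B", "B"],
    "activity":  ["Create Order", "Check Credit", "Ship", "Create Order", "Check Credit", "Clarify", "Ship"],
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 10:00", "2024-01-02 08:00",
        "2024-01-03 09:00", "2024-01-03 11:00", "2024-01-04 09:00", "2024-01-05 10:00",
    ]),
})

# Step 2: a simple graph analysis counting "directly-follows" relations per case.
log = event_log.sort_values(["case_id", "timestamp"])
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
dfg = log.dropna(subset=["next_activity"]).groupby(["activity", "next_activity"]).size()
print(dfg)   # edge weights of the directly-follows graph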

However, completely independent of the tools, there is a very general procedure in this kind of data-driven process analysis that you should understand and which we would like to describe with the following infographic:

DATANOMIQ Process Mining - 6 Steps of Doing Process Mining Analysis

6 Steps of Process Mining – Infographic PDF Download.

Interested in introducing Process Mining to your organization? Do not hesitate to get in touch with us!

DATANOMIQ is the independent consulting and service partner for business intelligence, process mining and data science. We are opening up the diverse possibilities offered by big data and artificial intelligence in all areas of the value chain. We rely on the best minds and the most comprehensive method and technology portfolio for the use of data for business optimization.