
How the Pandemic is Changing the Data Analytics Outsourcing Industry

While media pundits have largely focused on the impact of COVID-19 on human health, the pandemic hasn't been good for the health of automated systems either. As cybersecurity budgets plummet in the face of dwindling finances, computer criminals have seized the opportunity to increase attacks against high-value targets.

In June, an online antique store suffered a data breach that exposed over 3 million records, and it's likely that a number of similar attacks have simply gone unpublished. Fortunately, data scientists are hard at work developing new methods of fighting back against these kinds of breaches. Budget constraints and a lack of personnel as a result of the pandemic continue to pose problems, but automation has helped to mitigate the issue to some degree.

AI-Driven Data Storage Systems

Big data experts have long promoted the cloud as an ideal metaphor for the way data is stored remotely, but as a result few people today consider the physical locations where this information is actually stored. All data has to live on some sort of physical storage device. Even so-called serverless apps have to be distributed from a server unless they're fully deployed using P2P services.

Since software can never truly replace hardware, researchers are refining the various abstraction layers that exist between servers and the clients that access them. Data warehousing software has enabled computer scientists to construct centralized data storage solutions that look like traditional disk locations. This gives users the ability to securely interact with resources that are encrypted automatically.

Background services based on artificial intelligence monitor virtual data warehouse locations, giving specialists the freedom to conduct whatever analytics they deem necessary. In some cases, a data warehouse can even anonymize information as it's stored, which can streamline analysis workflows.
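
As a rough illustration of that idea (a generic sketch, not any particular vendor's feature), the snippet below pseudonymizes assumed PII fields with a salted hash before records are written to storage; the field names and salt handling are hypothetical:

```python
import hashlib
import os

# Hypothetical per-deployment salt; in practice this would live in a secrets store.
SALT = os.environ.get("WAREHOUSE_SALT", "change-me").encode()

PII_FIELDS = {"name", "email"}  # fields to pseudonymize (assumed schema)

def pseudonymize(record: dict) -> dict:
    """Replace PII fields with a salted SHA-256 digest before storage."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            cleaned[key] = digest[:16]  # shortened token, still stable per input
        else:
            cleaned[key] = value
    return cleaned

if __name__ == "__main__":
    row = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 99.5}
    print(pseudonymize(row))
```

Because the digest is stable for a given input, analysts can still join and aggregate on the pseudonymized columns without ever seeing the raw values.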

While this level of automation has proven useful, it's still subject to some of the problems caused by the pandemic. Traditional supply chains are in shambles, and a large percentage of technical workers are now telecommuting. If a problem arises with an existing big data deployment, there's often nobody around to fix it in person.

Living with Shifting Digital Priorities

Many businesses were in the process of outsourcing their data operations even before the pandemic, and the current situation is speeding this up considerably. Initial industry estimates had projected steady growth numbers for the data analytics sector through 2025. While the current figures might not be quite as bullish, it’s likely that sales of outsourcing contracts will remain high.

That being said, firms are also shifting a large percentage of their IT spending into cybersecurity projects. A recent survey found that 37 percent of business leaders were already planning to cut their IT department budgets. The same study found that 28 percent of businesses are going to move at least part of their data analytics programs abroad.

Companies that can't find an attractive outsourcing contract might instead patch their remote systems together over a virtual private network. Unfortunately, this kind of technology has been strained in recent months: the virtual servers that power VPNs are flooded with requests, which has brought them down in some instances. Neural networks, which use deep learning to improve themselves over time, have proven more than capable of predicting when these problems are most likely to arise.
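
As a toy illustration of that claim rather than a production system, the sketch below trains a small neural network on synthetic hourly VPN request counts and forecasts load from the previous day's values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly request counts with a daily cycle plus noise.
hours = np.arange(24 * 60)
load = 1000 + 400 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, hours.size)

# Build lag features: predict hour t from the previous 24 hours.
LAGS = 24
X = np.array([load[t - LAGS:t] for t in range(LAGS, load.size)])
y = load[LAGS:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-100], y[:-100])  # hold out the last 100 hours for evaluation

# Forecast the held-out hours; spikes in the forecast flag likely strain.
forecast = model.predict(X[-100:])
print("mean absolute error:", np.abs(forecast - y[-100:]).mean())
```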

That being said, firms that deploy this kind of technology on-premise might find that it still costs more than simply investing in an outsourcing program that runs the same kinds of algorithms at an outside location.

Saving Money in the Time of Corona

Experts from Think Big Analytics pointed out that specialist organizations can deal with a much wider array of technologies than a small business ever could. Since these companies specialize in providing support for other organizations, they tend to offer support for a large number of platforms.

These representatives recently opined that they could provide support for NoSQL, Presto, Apache Spark, and several other emerging platforms at the same time. Perhaps most importantly, these organizations can work with Hadoop and other established data analysis frameworks.

Staffers working on data mining operations have long relied on frameworks like Hadoop and languages like R to write scripts that automate the process of collecting and analyzing data. By working with an organization that already supports the tools they rely on, companies can avoid overhauling their existing operations.

This can help to drastically reduce the cost of migration, which is extremely important since many of the firms that need to migrate to a remote system are already suffering from budget problems. Assuming that some issues related to the pandemic continue to plague businesses for some time, it’s likely that these budget constraints will force IT departments to consider a migration even if they would have otherwise relied solely on a traditional colocation arrangement.

IT department staffers were already moving away from many rare platforms even before the COVID-19 pandemic hit, however, so this shouldn't be as herculean a task as it sounds. For instance, the KNIME Analytics Platform has grown rapidly in popularity since its release in 2006. The fact that it supports over 1,000 plug-in modules has made it easy for smaller businesses to move toward the platform.

The road ahead isn’t going to be all that pleasant, however. COBOL and other antiquated languages still rule the roost at many governmental big data processing centers. At the same time, some small businesses have never even been able to put a big data plan into play in the first place. As the pandemic continues to wreak havoc on the world’s economy, however, it’s likely that there will be no shortage of organizations continuing to migrate to more secure third-party platforms backed by outsourcing contracts.

Data Analytics and Mining for Dummies

Data analytics and mining is often perceived as an extremely tricky task reserved for data analysts and data scientists with thorough knowledge spanning several domains, such as mathematics, statistics, computer algorithms, and programming. However, several tools available today make it possible for novice programmers, or people with absolutely no algorithmic or programming expertise, to carry out data analytics and mining. One such tool, which is very powerful and provides a graphical user interface and an assembly of nodes for ETL (Extraction, Transformation, Loading), modeling, data analysis, and visualization with little or no programming, is the KNIME Analytics Platform.

KNIME, or the Konstanz Information Miner, was developed at the University of Konstanz and is now backed by a large international community of developers. KNIME was originally made for commercial use, but it is now available as open-source software. It has been used extensively in pharmaceutical research since 2006 and is also a powerful data mining tool for the financial data sector. It is frequently used in the Business Intelligence (BI) sector as well.

KNIME as a Data Mining Tool

KNIME is also one of the most well-organized tools for integrating various methods of machine learning and data mining. It is very effective for pre-processing data, i.e., extracting, transforming, and loading it.

KNIME has a number of strong features, such as quick deployment and scaling efficiency. It employs an assembly of nodes to pre-process data for analytics and visualization. It is also used for discovering patterns in large volumes of data and transforming data into more polished, actionable information.

Some Features of KNIME:

  • Free and open source
  • Graphical and logically designed
  • Very rich in analytics capabilities
  • No limitations on data size, memory usage, or functionalities
  • Compatible with Windows, macOS, and Linux
  • Written in Java and based on Eclipse

A node is the smallest design unit in KNIME, and each node serves a dedicated task. KNIME's graphical drag-and-drop nodes require no coding. Nodes are connected into a workflow, with one node's output serving as another's input, so end-to-end pipelines can be built with no coding effort. This is what makes KNIME stand out: it is user-friendly and accessible to beginners without a computer science background.

KNIME workflow designed for graduate admission prediction

KNIME has nodes to carry out Univariate Statistics, Multivariate Statistics, Data Mining, Time Series Analysis, Image Processing, Web Analytics, Text Mining, Network Analysis, and Social Media Analysis. The KNIME node repository has a node for nearly every functionality you could need while building a data mining model. One can run different algorithms, such as clustering and classification, on a dataset and visualize the results inside the framework itself. It is a framework capable of giving insights into data and the phenomena the data represent.

Some commonly used KNIME node groups include (a short code sketch of the same pipeline idea follows the list):

  • Input-Output or I/O: Nodes in this group read data from, or write data to, external files or databases.
  • Data Manipulation: Used for data pre-processing tasks. Contains nodes to filter, group, pivot, bin, normalize, aggregate, join, sample, partition, etc.
  • Views: This set of nodes permits users to inspect data and analysis results in multiple views, providing a means for truly interactive exploration of a data set.
  • Data Mining: Nodes in this group implement specific algorithms, such as k-means clustering and decision trees.
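
For readers who prefer code, here is a minimal Python sketch of the pipeline a small KNIME workflow would express with I/O, Data Manipulation, Data Mining, and Views nodes. The file customers.csv and its columns are hypothetical:

```python
import pandas as pd
from sklearn.cluster import KMeans

# "I/O" step: read an external file (hypothetical path and schema).
df = pd.read_csv("customers.csv")  # assumed columns: age, income

# "Data Manipulation" steps: filter rows, then normalize the numeric columns.
df = df[df["age"] >= 18].copy()
features = df[["age", "income"]]
features = (features - features.mean()) / features.std()

# "Data Mining" step: cluster customers, as a KNIME k-Means node would.
df["cluster"] = KMeans(n_clusters=3, n_init=10).fit_predict(features)

# "Views" step: inspect the per-cluster averages.
print(df.groupby("cluster")[["age", "income"]].mean())
```

In KNIME, each commented step above would be a separate node, and the intermediate tables could be inspected at every connection without writing any of this code.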

Comparison with Other Tools

The first version of the KNIME Analytics Platform was released in 2006, whereas Weka and R were released in 1997 and 1993, respectively. KNIME is a dedicated data mining tool, whereas Weka and R are machine learning tools that can also do data mining. KNIME integrates with Weka to add machine learning algorithms to the system, and the R project adds statistical functionality as well. Furthermore, KNIME's range of functions is impressive, with more than 1,000 modules and ready-made application packages, and the modules can be further extended with additional commercial features.

Controlling and Managing Spark Applications via REST

Apache Spark is enjoying growing popularity in the data science scene, since in terms of speed and functionality it represents an immense improvement on, and extension of, the pure Hadoop MapReduce programming model. Like Hadoop, however, Spark remains a technology for experts. It requires at least a knowledge of Unix scripting and has to be controlled from the command line. The existing web interfaces offer only very rudimentary insights into the status of Spark applications:

[Screenshot: the basic Spark web UI]

The Spark JobServer is an open-source project that provides a REST (Representational State Transfer) interface for Spark. (This YouTube video gives a clear explanation of what a REST API is and what it can be used for.) Put simply, the JobServer makes it possible to use Spark as a web service via this REST interface. Through the JobServer, Spark contexts and applications (jobs) can be managed, and contexts can be reused across separate calls to the REST interface. Jar files containing job implementations can be installed in advance through the same interface, so that, for example, even very fine-grained jobs can be controlled via the API (see the full list of features).
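
As a minimal sketch of how a client can use this interface, assuming a JobServer at a hypothetical host and a placeholder job class: the endpoints /jars, /contexts, and /jobs come from the JobServer documentation, though response shapes vary between versions:

```python
import time
import requests

BASE = "http://jobserver.example.com:8090"  # hypothetical JobServer address

# Upload a jar containing the job implementations (one-time setup).
with open("my-spark-jobs.jar", "rb") as jar:
    requests.post(f"{BASE}/jars/my-app", data=jar.read()).raise_for_status()

# Create a long-lived Spark context that later calls can reuse.
requests.post(f"{BASE}/contexts/shared-ctx?num-cpu-cores=4&memory-per-node=2g")

# Submit a job to that context; the class name is a placeholder.
resp = requests.post(
    f"{BASE}/jobs",
    params={"appName": "my-app", "classPath": "com.example.WordCount",
            "context": "shared-ctx"},
    data="input.path = /data/sample.txt",  # job configuration in HOCON form
)
body = resp.json()
job_id = body.get("jobId") or body["result"]["jobId"]  # shape varies by version

# Poll until the job leaves the RUNNING state, then print the result.
while requests.get(f"{BASE}/jobs/{job_id}").json().get("status") == "RUNNING":
    time.sleep(2)
print(requests.get(f"{BASE}/jobs/{job_id}").json())
```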

The Spark JobServer is already in use at various organizations, including Netflix, Zed Worldwide, KNIME, Azavea, and Maana. These users mostly run the JobServer hidden "under the hood" in order to make their respective tools big-data capable. KNIME, for instance, will use the JobServer starting with its next release (October 2015). Users will then be able to conveniently start, monitor, and stop Spark jobs from their local machine through a graphical interface. The following figure shows how training data is uploaded to the server in order to build various machine learning models from it. These models can then be applied to test data imported into Spark from, for example, a HIVE table:

[Screenshot: KNIME workflow running Spark jobs against Hive data]

Each of the nodes labeled "Spark ***", such as "Spark Decision Tree", is a Spark job in the JobServer sense. Other examples of Spark jobs are various pre-processing tasks such as sampling a table or joining several tables.

Spark can be driven through the JobServer in standalone, Mesos, or YARN-client mode. A very helpful extension of core Spark functionality is offered by the JobServer's so-called "Named RDDs". A Resilient Distributed Dataset (RDD) is essentially a dataset or table in Spark. Named RDDs allow RDDs to be reused across individual jobs, so jobs can be built in a more modular fashion and intermediate results can be inspected more easily.
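
At the REST level, the pattern looks roughly like the sketch below: one job materializes a named RDD in a shared context, and a second job, submitted to the same context, picks it up. Both job classes are placeholders; internally they would use the JobServer's NamedRdds support on the Spark side:

```python
import requests

BASE = "http://jobserver.example.com:8090"  # hypothetical JobServer address
CTX = "shared-ctx"  # both jobs must run in the same long-lived context

def run_job(class_path: str, config: str = "") -> dict:
    """Submit a job synchronously to the shared context and return the response."""
    resp = requests.post(
        f"{BASE}/jobs",
        params={"appName": "my-app", "classPath": class_path,
                "context": CTX, "sync": "true"},
        data=config,
    )
    resp.raise_for_status()
    return resp.json()

# Job 1 (placeholder class): loads a table and stores it as the
# named RDD "customers" for later jobs in this context.
run_job("com.example.CacheCustomers", "input.table = customers")

# Job 2 (placeholder class): fetches the named RDD "customers"
# and joins it with another table, skipping the reload entirely.
print(run_job("com.example.JoinWithOrders", "orders.table = orders"))
```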

From my own experience, I can say that the JobServer is the right middleware between a user-friendly interface and Spark. The open-source community around it is very active, and the JobServer can be extended easily when needed.