
Looking Ahead: The Future of Data Preparation for Generative AI

Sponsored Post

Generative AI has become a significant part of the technology landscape, and its effectiveness is directly linked to the data it is given. Just as a chef needs fresh ingredients to prepare a good meal, generative AI needs well-prepared, clean data to produce reliable outputs. Businesses that want to adapt and succeed need to understand the trends shaping data preparation.

The Principle of “Garbage In, Garbage Out”

The principle of “garbage in, garbage out” (GIGO) remains as relevant as ever. If you input poor-quality data into an AI system, the results will be poor. This principle highlights the need for careful data preparation, ensuring that the input data is accurate, consistent, and relevant.

Emerging Trends in Data Preparation

  1. Automated Data Cleaning

Manual data cleaning is both time-consuming and error-prone. Emerging tools now leverage AI to automate this process, identifying and correcting errors more efficiently. This shift not only saves time but also raises the standard of data quality. Tools like BiG EVAL are leading the data quality field for technical systems in which data is transported and transformed. BiG EVAL uses plausibility checks and validation mechanisms to enable proactive quality assurance and short release cycles, even in agile projects.
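To make the idea concrete, here is a minimal sketch of rule-based cleaning in base R; the data frame, column names, and rules below are purely illustrative and not tied to BiG EVAL or any other specific tool.

```r
# Illustrative rule-based cleaning; the data and thresholds are made up.
raw <- data.frame(
  customer = c(" Anna ", "Ben", "Ben", NA),
  revenue  = c(1200, -50, -50, 900),
  stringsAsFactors = FALSE
)

cleaned <- raw
cleaned$customer <- trimws(cleaned$customer)        # strip stray whitespace
cleaned <- cleaned[!duplicated(cleaned), ]          # drop exact duplicate rows
cleaned <- cleaned[!is.na(cleaned$customer), ]      # drop rows without a key
cleaned$revenue[cleaned$revenue < 0] <- NA          # flag implausible values for review
```

Automated tools apply this kind of codified, repeatable rule at scale; the principle is the same.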

  2. Real-Time Data Processing

Businesses are adopting technologies that can process and analyze data the moment it arrives, driven by the need for real-time insights. Real-time data preparation tools allow companies to react quickly to new information and maintain a competitive edge in fast-paced industries.

  3. Improved Data Integration

Data often comes from various sources, and integrating it smoothly is essential. Advanced data integration tools now facilitate the merging of different data sets into a cohesive, comprehensive dataset for analysis. Managing a vast array of data sources is barely feasible without such data automation tools.
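As a small illustration of the principle (not of any particular integration tool), two hypothetical sources can be combined on a shared key directly in R:

```r
# Two illustrative sources joined on a shared customer key (full outer join).
crm   <- data.frame(customer_id = c(1, 2, 3), segment = c("A", "B", "A"))
sales <- data.frame(customer_id = c(2, 3, 4), revenue = c(500, 300, 700))

combined <- merge(crm, sales, by = "customer_id", all = TRUE)
print(combined)
```

Real integration platforms add connectors, scheduling and conflict resolution on top, but conceptually they produce this kind of unified table.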

  4. Augmented Data Catalogs

Modern data catalogs are becoming more intuitive and intelligent. They not only help in organizing and finding data but also in understanding its lineage and context. This contextual awareness aids in better data preparation and utilization.

Adapting to These Changes

Businesses must be proactive in adopting these emerging trends. Here are a few strategies to consider:

  1. Invest in Advanced Data Tools

Investing in modern data preparation tools can enhance data processing capabilities. Solutions like AnalyticsCreator provide robust platforms for real-time processing and seamless integration.

  2. Foster a Data-Driven Culture

Promote a culture where data quality is a shared responsibility. Encourage teams to prioritize data accuracy and consistency at every stage of data handling.

  3. Continuous Training and Development

The field of data science is constantly evolving. Ensure your team is up-to-date with the latest trends and technologies in data preparation through continuous learning and development programs.

  4. Leverage Expert Guidance

Sometimes, navigating the complex landscape of data preparation requires expert guidance. Partnering with specialists can provide valuable insights and help in implementing best practices tailored to your business needs. (Link to our partner page).

The Role of AnalyticsCreator

AnalyticsCreator helps businesses navigate the future of data preparation. By providing advanced tools and solutions, it ensures that your data is well-prepared, well-integrated, and ready for analysis. The platform is designed to handle the complexities of modern data environments and offers features that align with the latest trends in data preparation.

In conclusion, as generative AI continues to influence industries, the need for high-quality data only grows. By staying informed about emerging trends and leveraging tools like AnalyticsCreator, businesses can ensure they are prepared to harness the full potential of generative AI. Just as a chef’s masterpiece depends on the quality of the ingredients, your AI outcomes will depend on the data you prepare. Investing in your data pays off accordingly.

Cloud Data Platform for Shopfloor Management

How Cloud Data Platforms improve Shopfloor Management

In the era of Industry 4.0, linking data from MES (Manufacturing Execution System) with that from ERP, CRM and PLM systems plays an important role in creating integrated monitoring and control of business processes.

ERP (Enterprise Resource Planning) systems contain information about finance, supplier management, human resources and other operational processes, while CRM (Customer Relationship Management) systems provide data about customer relationships, marketing and sales activities. PLM (Product Lifecycle Management) systems contain information about products, development, design and engineering.

By linking this data with the data from MES, companies gain a more complete picture of their business operations and can monitor and control their business processes much better. Of central importance here are the OEE (Overall Equipment Effectiveness) KPIs that matter so much in production, as well as key figures from financial controlling, such as contribution margins. Bringing the data together in a central platform enables smooth analyses for optimizing processes and increasing business efficiency in the world of Industry 4.0, using methods from business intelligence, process mining and data science. Companies also increase their enterprise value significantly by linking this data, thanks to the data and information transparency gained.


Cloud Data Platform for shopfloor management and data sources such as MES, ERP, PLM and machine data. Copyright by DATANOMIQ.

If the data sources are additionally expanded to include the machines of production and logistics, much more in-depth analyses become possible, both for error detection and prevention and for optimizing the factory in its dynamic environment. The machine sensor data can be monitored directly in real time via dedicated data pipelines (real-time stream analytics) or aggregated into an overall picture of key figures (reporting). The consumers of this data are not only people, but also individual machines or entire production plants that can react to it.

With such a central data architecture in place, dozens of analytical applications can be fed with data:

OEE key figures for Shopfloor reporting (see the sketch after this list)
Process Mining (e.g. material flow analysis) for manufacturing and supply chain
Detection of anomalies on the shopfloor or on individual machines
Predictive maintenance for individual machines or entire production lines
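As a small illustration of the OEE figure mentioned in the first item, here is a minimal sketch in R; the shift data below is invented, and the formula is the standard decomposition into availability, performance and quality.

```r
# OEE from aggregated shift data; all numbers are invented for illustration.
planned_time_min   <- 480   # planned production time per shift
downtime_min       <- 45    # unplanned stops
ideal_cycle_time_s <- 30    # ideal time per part, in seconds
total_count        <- 800   # parts produced
good_count         <- 760   # parts without defects

run_time_min <- planned_time_min - downtime_min
availability <- run_time_min / planned_time_min                         # ~0.91
performance  <- (ideal_cycle_time_s * total_count / 60) / run_time_min  # ~0.92
quality      <- good_count / total_count                                # 0.95

oee <- availability * performance * quality   # roughly 0.79 in this example
```

In a real platform this calculation runs continuously on the aggregated machine data described above rather than on hand-typed numbers.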

This solution scales automatically in terms of both performance and cost. It looks beyond individual problems, since it offers a universal and flexible scope for action. In other words, it gives management a kind of “god mode”, allowing them to drill down from a specific client project to insights into the individual machines involved in each project.

Are you interested in scalable data architectures for your shopfloor management? Or would you like to discuss a specific problem with us? Or maybe you are interested in an individual data strategy? Then get in touch with me! 🙂

Data Warehousing Basics

Data Warehousing is applied Big Data Management and a key success factor in almost every company. Without a data warehouse, hardly any company today can control its processes and make the right decisions at a strategic level, because decision makers would lack data transparency. Bigger companies even run multiple data warehouses for different purposes.

In this series of articles I would like to explain what a data warehouse actually is and how it is set up. Along the way, I will also cover basic topics of data engineering and concepts around databases and data flows.

To do this, we tick off the following points step by step:


How to Successfully Perform a Data Quality Assessment (DQA)

People generate an estimated 2.5 quintillion bytes of data every single day; other widely cited estimates put it at 1.7 megabytes per second for each of the 7.8 billion residents of Earth. A lot of that information is junk that can safely be discarded, but just as much can prove to be vital. How do you tell the difference?

According to industry experts, poor quality information costs the U.S. economy upwards of $3.1 trillion annually. That is why data quality assessments (DQAs) are so important.

A Brief Explanation of Data Quality Assessments

With companies around the globe generating massive amounts of data every second of the day, it’s essential to have tools that help you sort through it all. Data quality assessments are usually carried out by software programmed with a predefined set of rules. They can compare the incoming information to those guidelines and provide reports.

This is a simplified explanation, but the goal of these DQA programs is to separate the wheat from the chaff. They eliminate any unnecessary or redundant data, leaving only the highest quality information.

The biggest challenge is figuring out who determines what counts as quality. Data quality depends on three things: the individual or team that creates the requirements, how well they complete that task, and how flexibly the program can meet those requirements.

How to Perform a DQA

Once you have your DQA program in place, performing an assessment is relatively simple; the challenge lies in establishing the program. The first step is to determine the scope of the data you’re trying to assess. The details of this step depend on your system and the amount of information you have to sort through. You can set up a program to assess a single data point at a time, but if your system generates a lot of information, that approach quickly becomes inefficient.

Define your scope carefully to ensure the program does the job correctly without wasting time sorting through bytes one at a time.

Now that you have a framework to work from, you can move on to monitoring and cleansing data. Analyze your information against the scope and details you’ve established. Validate each point against your existing statistical measures, and determine its quality.
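A minimal sketch of what such rule-based validation can look like in R; the data frame, rules and thresholds below are purely illustrative and would in practice come from the scope you defined above.

```r
# Illustrative data quality checks against a few predefined rules.
customers <- data.frame(
  id    = c(1, 2, 2, 4),
  email = c("a@example.com", NA, "b@example.com", "not-an-email"),
  age   = c(34, 29, 29, 140),
  stringsAsFactors = FALSE
)

dq_report <- list(
  completeness = mean(!is.na(customers$email)),                             # share of non-missing emails
  uniqueness   = !any(duplicated(customers$id)),                            # IDs must be unique
  validity     = mean(grepl(".+@.+\\..+", customers$email), na.rm = TRUE),  # crude email format check
  plausibility = all(customers$age >= 0 & customers$age <= 120)             # age within a sane range
)
print(dq_report)
```

A production DQA tool evaluates rules like these continuously and turns the results into reports, but the logic of each individual check is no more mysterious than this.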

Next, ensure all the data requirements are available and correctly formatted. You may wish to provide training for any new team members entering information to ensure it’s in a format that the DQA system can understand.

Finally, make it a point to verify that your data is consistent with the rules you’ve established, as well as your business goals. DQAs aren’t a one-and-done kind of program. Monitoring needs to be an ongoing process to prevent things from falling through the cracks and to keep bad information from potentially costing you millions of dollars.

Benefits of DQA

A data quality assessment has various benefits, on both the commercial and consumer side of your business. Accuracy is essential, and it’s especially valuable for marketers who purchase demographic data, with 84% stating that it plays a large role in their purchase decisions. Targeted marketing is one of the most popular forms of advertising, and while it’s not always practical, its efficacy drops even further if the demographic data is incorrect.

High-quality data should be accurate, complete, relevant, valid, timely and consistent. Maintaining frequent and comprehensive quality assessments can help you do that and more. The goal of collecting this information is to produce results. The higher quality your data is, the easier and faster your system will work, with better results than you might manage without DQAs.

Data Quality Assessment vs. Data Profiling

When talking about data quality, you’ll often see the terms assessment and profiling used interchangeably. While the concepts are similar, they are not the same. Data profiling is a valuable tool for setting up your quality assessment program, giving you the information you’ll need to build your program in the future. It isn’t a step you can perform independently and expect to get the same results.

If you don’t already have a DQA in place, start with profiling to create the foundation for a comprehensive data quality assessment program.

The Growing Importance of Data Quality

Data quality has always been important. However, as the population generates more information every year, learning how to separate value from junk is more critical than ever.

Web Scraping Using R..!

In this blog, I’ll show you how to web scrape using R.

What is R..?

R is a programming language and environment built for statistical analysis, graphical representation and reporting. R is mostly preferred by statisticians, data miners, and software programmers who want to develop statistical software.

R is also available as Free Software under the terms of the Free Software Foundation’s GNU General Public License in source code form.

Reasons to choose R


Let’s begin our topic of Web Scraping using R.

Step 1- Select the website & the data you want to scrape.

I picked the website “https://www.alexa.com/topsites/countries/IN” and want to scrape the data of the top 50 sites in India.

Data we want to scrape


Step 2- Get to know the HTML tags using SelectorGadget.

In my previous blog, I already discussed how to inspect & find the proper HTML tags. So, now I’ll explain an easier way to get the HTML tags.

Go to the Chrome extensions page (chrome://extensions) and search for SelectorGadget. Add it to your browser; it’s quite a good CSS selector tool.

Step 3- R Code

Loading the Important Libraries or Packages

I’m using the rvest package to scrape the data from the webpage; it is inspired by libraries like Beautiful Soup. If you haven’t installed the package yet, follow the code in the snippet below.
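The original snippet was shared as an image; it boils down to something like this.

```r
# Install rvest once, then load it for the session.
install.packages("rvest")
library(rvest)
```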

Step 4- Set the URL of the website
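Roughly, assuming the URL picked in Step 1:

```r
# Point rvest at the page selected in Step 1 and parse its HTML.
library(rvest)

url     <- "https://www.alexa.com/topsites/countries/IN"
webpage <- read_html(url)
```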

Step 5- Find the HTML tags using SelectorGadget

It’s quite easy to find the proper HTML tags in which your data is present.

First, I click on the data I want to scrape using SelectorGadget, and it automatically selects all elements with similar HTML tags. Before going forward, cross-check the selected values: are they correct, or has some junk data also been selected? If you look at our page, it has only 50 values, but you can see that 156 values are selected.

Selection by SelectorGadget


So I need to deselect the unwanted values: once you click on an unwanted element, it turns red and the other deselected ones turn yellow, while our primary selection stays green. Now only 50 values are selected, as required, but that is not enough; I have to cross-check again that no required values have been exchanged for junk values.

If we are satisfied with our selection, we copy the CSS selector and include it in the code; otherwise we repeat this exercise.

Modified Selection by SelectorGadget

Step 6- Include the tag in our Code

After including the tags, our code is like this.

Code Snippet
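The snippet itself was a screenshot; a rough reconstruction could look like the following, where the CSS selectors are placeholders for whatever SelectorGadget reported (the exact selectors used in the original are not shown here).

```r
# Extract one character vector per column; the selectors below are
# illustrative stand-ins for the ones copied from SelectorGadget.
library(rvest)

url     <- "https://www.alexa.com/topsites/countries/IN"
webpage <- read_html(url)

site_names <- html_text(html_nodes(webpage, ".DescriptionCell a"), trim = TRUE)
daily_time <- html_text(html_nodes(webpage, ".right"), trim = TRUE)

length(site_names)   # should be 50 once the selection is cleaned up
```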

If I run the code, each list object will contain 50 values.

Data Stored in List Objects

Step 7- Creating DataFrame

Now we create a dataframe from our list objects. The one rule of thumb to remember when creating a dataframe is that the number of rows (the length of all the lists) must be equal, otherwise we get an error.

Error appears when number of rows differs
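A sketch of the dataframe step, reusing the vectors from the reconstruction above:

```r
# All vectors must have the same length (50 here), otherwise data.frame()
# stops with "arguments imply differing number of rows".
top_sites <- data.frame(
  Rank             = seq_along(site_names),
  Site             = site_names,
  DailyTimeOnSite  = daily_time,
  stringsAsFactors = FALSE
)
head(top_sites)
```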

Finally, Our DataFrame will look like this:

Our Final Data

Step 8- Writing our DataFrame to CSV file

We need our scraped data to be available locally for further analysis & model building or other purposes.

Our final piece of code, to write it to a CSV file, is:

Writing to CSV file
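In code form (the file name is illustrative):

```r
# Write the scraped dataframe to a local CSV for further analysis.
write.csv(top_sites, "alexa_top50_india.csv", row.names = FALSE)
```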

Step 9- Check the CSV file

Data written in CSV file

Conclusion-

I tried to explain web scraping using R in a simple way. I hope this helps you understand it better.

Find the full code at

https://github.com/vgyaan/Alexa/blob/master/webscrap.R

If you have any questions about the code or web scraping in general, reach out to me on LinkedIn!

Okay, we will meet again with a new topic.

Till then,

Happy Coding..!

Interview – Customer Data Platform, more than CRM 2.0?

Interview with David M. Raab from the CDP Institute

David M. Raab is a consultant specializing in marketing software and service vendor selection, marketing analytics and marketing technology assessment. He is also the founder of the Customer Data Platform Institute, a vendor-neutral educational project that helps marketers build a unified customer view that is available to all of their company systems.

He is also a keynote speaker at the Predictive Analytics World event 2019 in Berlin.

Data Science Blog: Mr. Raab, what exactly is a Customer Data Platform (CDP)? And where is the need for it?

The CDP Institute defines a Customer Data Platform as „packaged software that builds a unified, persistent customer database that is accessible by other systems“.  In plainer language, a CDP assembles customer data from all sources, combines it into customer profiles, and makes the profiles available for any use.  It’s important because customer data is collected in so many different systems today and must be unified to give customers the experience they expect.

Data Science Blog: Is it something like a CRM System 2.0? What Use Cases can be realized by a Customer Data Platform?

CRM systems are used to interact directly with customers, usually by telephone or in the field.  They work almost exclusively with data that is entered during those interactions.  This gives a very limited view of the customer since interactions through other channels such as order processing or Web sites are not included.  In fact, one common use case for CDP is to give CRM users a view of all customer interactions, typically by opening a window into the CDP database without needing to import the data into the CRM.  There are many other use cases for unified data, including customer segmentation, journey analysis, and personalization.  Anything that requires sharing data across different systems is a CDP use case.

Data Science Blog: When does a CDP make sense for a company? It is more relevant for retail and financial companies than for industrial companies, isn’t it?

CDP has been adopted most widely in retail and online media, where each customer has many interactions and there are many products to choose from.  This is a combination that can make good use of predictive modeling, which benefits greatly from having more complete data.  Financial services was slower to adopt, probably because they have fewer products but also because they already had pretty good customer data systems.  B2B has also been slow to adopt because so much of their customer relationship is handled by sales people.  We’ve more recently been seeing growth in additional sectors such as travel, healthcare, and education.  Those involve fewer transactions than retail but also rely on building strong customer relationships based on good data.

Data Science Blog: There are several providers for CDPs. Adobe, Tealium, Emarsys or Dynamic Yield, just to name some of them. Do they differ a lot between each other?

Yes they do.  All CDPs build the customer profiles I mentioned.  But some do more things, such as predictive modeling, message selection, and, increasingly, message delivery.  Of course they also vary in the industries they specialize in, regions they support, size of clients they work with, and many technical details.  This makes it hard to buy a CDP but also means buyers are more likely to find a system that fits their needs.

Data Science Blog: How established is the concept of the CDP in Europe in general? And how in comparison with the United States?

CDP is becoming more familiar in Europe but is not as well understood as in the U.S.  The European market spent a lot of money on Data Management Platforms (DMPs) which promised to do much of what a CDP does but were not able to because they do not store the level of detail that a CDP does.  Many DMPs also don’t work with personally identifiable data because the DMPs primarily support Web advertising, where many customers are anonymous.  The failures of DMPs have harmed CDPs because they have made buyers skeptical that any system can meet their needs, having already failed once.  But we are overcoming this as the market becomes better educated and more success stories are available.  What’s the same in Europe and the U.S. is that marketers face the same needs.  This will push European marketers towards CDPs as the best solution in many cases.

Data Science Blog: What are coming trends? What will be the main topic 2020?

We see many CDPs with broader functions for marketing execution: campaign management, personalization, and message delivery in particular.  This is because marketers would like to buy as few systems as possible, so they want broader scope in each systems.  We’re seeing expansion into new industries such as financial services, travel, telecommunications, healthcare, and education.  Perhaps most interesting will be the entry of Adobe, Salesforce, and Oracle, who have all promised CDP products late this year or early next year.  That will encourage many more people to consider buying CDPs.  We expect that market will expand quite rapidly, so current CDP vendors will be able to grow even as Adobe, Salesforce, and Oracle make new CDP sales.


You want to get in touch with David M. Raab and understand more about the concept of a CDP? Meet him at Predictive Analytics World on 18 and 19 November 2019 in Berlin, Germany. As a keynote speaker, he will introduce the concept of a Customer Data Platform in the light of predictive analytics. Click here to see the agenda of the event.


Consider Anonymization – Process Mining Rule 3 of 4

This is article no. 3 of the four-part article series Privacy, Security and Ethics in Process Mining.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Regel 3 von 4

If you have sensitive information in your data set, instead of removing it you can also consider the use of anonymization. When you anonymize a set of values, then the actual values (for example, the employee names “Mary Jones”, “Fred Smith”, etc.) will be replaced by another value (for example, “Resource 1”, “Resource 2”, etc.).

If the same original value appears multiple times in the data set, then it will be replaced with the same replacement value (“Mary Jones” will always be replaced by “Resource 1”). This way, anonymization allows you to obfuscate the original data but it preserves the patterns in the data set for your analysis. For example, you will still be able to analyze the workload distribution across all employees without seeing the actual names.
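As a small illustration of that consistent mapping (independent of any specific process mining tool), here is a hedged sketch in R; the names are of course made up.

```r
# Replace each unique resource name by a stable pseudonym such as "Resource 1".
resources <- c("Mary Jones", "Fred Smith", "Mary Jones", "Anna Lee")

lookup        <- paste("Resource", seq_along(unique(resources)))
names(lookup) <- unique(resources)

anonymized <- unname(lookup[resources])
# "Resource 1" "Resource 2" "Resource 1" "Resource 3"
```

Because the lookup table is built from the unique original values, the same person always receives the same pseudonym, which is exactly what preserves the workload patterns described above.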

Some process mining tools (Disco and ProM) include anonymization functionality. This means that you can import your data into the process mining tool and select which data fields should be anonymized. For example, you can choose to anonymize just the Case IDs, the resource name, attribute values, or the timestamps. Then you export the anonymized data set and you can distribute it among your team for further analysis.

Do:

  • Determine which data fields are sensitive and need to be anonymized (see also the list of common process mining attributes and how they are impacted if anonymized).
  • Keep in mind that despite the anonymization certain information may still be identifiable. For example, there may be just one patient having a very rare disease, or the birthday information of your customer combined with their place of birth may narrow down the set of possible people so much that the data is not anonymous anymore.

Don’t:

  • Anonymize the data before you have cleaned your data, because after the anonymization the data cleaning may not be possible anymore. For example, imagine that slightly different customer category names are used in different regions but they actually mean the same. You would like to merge these different names in a data cleaning step. However, after you have anonymized the names as “Category 1”, “Category 2”, etc. the data cleaning cannot be done anymore.
  • Anonymize fields that do not need to be anonymized. While anonymization can help to preserve patterns in your data, you can easily lose relevant information. For example, if you anonymize the Case ID in your incident management process, then you cannot look up the ticket number of the incident in the service desk system anymore. By establishing a collaborative culture around your process mining initiative (see guideline No. 4) and by working in a responsible, goal-oriented way, you can often work openly with the original data that you have within your team.