Simplify Vendor Onboarding with Automated Data Integration

Vendor onboarding is a key business process that involves collecting and processing large data volumes from one or multiple vendors. Business users need vendor information in a standardized format to use it for subsequent data processes. However, consolidating and standardizing data for each new vendor requires IT teams to write code for custom integration flows, which can be a time-consuming and challenging task.

In this blog post, we will look at automated vendor onboarding and why it is faster and more efficient than manually updating integration flows.

Problems with Manual Integration for Vendor Onboarding

During the onboarding process, vendor data needs to be extracted, validated, standardized, transformed, and loaded into the target system for further processing. An integration task like this involves coding, updating, and debugging manual ETL pipelines, which can take days or even weeks.

Every time a new vendor comes on board, this process is repeated to load that vendor's information into the unified business system. Worse, because vendor data is often received from disparate sources in a variety of formats (CSV, text, Excel), these ETL pipelines frequently break and require manual fixes.

This effort does not scale, particularly for large businesses that onboard hundreds of vendors each month. Luckily, there is a faster alternative that involves no code-writing.

Automated Data Integration

The manual onboarding process can be automated using purpose-built data integration tools.

To help you better understand the advantages, here is a step-by-step guide on how automated data integration for vendor onboarding works:

  1. Vendor data is retrieved from heterogeneous sources such as databases, FTP servers, and web APIs through built-in connectors available in the solution.
  2. The data from each file is validated against a set of predefined quality rules – this step eliminates records with missing, duplicate, or incorrect data.
  3. Transformations are applied to convert input data into the desired output format or screen vendors based on business criteria. For example, if the vendor data is stored in Excel sheets and the business uses SQL Server for data storage, then the data has to be mapped to the relevant fields in the SQL Server database, which is the destination.
  4. The standardized, validated data is then loaded into a unified enterprise database that you can use as the source of information for business processes. In some cases, this can be a staging database where you can perform further filtering and aggregation to build a consolidated vendor database.
  5. This entire ETL pipeline (Step 1 through Step 4) can then be automated through event-based or time-based triggers in a workflow. For instance, you may want to run the pipeline once every day, or whenever a new file or data point arrives on your FTP server; see the sketch after this list.
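To make the steps concrete, here is a minimal Python sketch of such a pipeline. This is not any particular tool's API: the FTP source, the CSV layout, the field names, and the SQLite destination (standing in for the real target database) are all illustrative assumptions.

```python
import csv
import sqlite3
from ftplib import FTP

# Mapping from the vendor's CSV column names (left) to the unified
# schema's column names (right) -- all names are assumptions.
FIELD_MAP = {
    "Vendor Name": "vendor_name",
    "Email": "contact_email",
    "Credit Score": "credit_score",
}

def extract(ftp_host: str, remote_path: str, local_path: str) -> str:
    """Step 1: pull the latest vendor file from an FTP server."""
    with FTP(ftp_host) as ftp:
        ftp.login()  # anonymous login; a real pipeline would pass credentials
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {remote_path}", f.write)
    return local_path

def is_valid(row: dict) -> bool:
    """Step 2: drop records with missing fields (one of many possible rules)."""
    return all(row.get(src, "").strip() for src in FIELD_MAP)

def transform(row: dict) -> dict:
    """Step 3: map source columns onto the destination schema."""
    return {dst: row[src].strip() for src, dst in FIELD_MAP.items()}

def load(records: list, conn: sqlite3.Connection) -> None:
    """Step 4: append the standardized rows to the unified vendor table."""
    with conn:  # commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO vendors (vendor_name, contact_email, credit_score) "
            "VALUES (:vendor_name, :contact_email, :credit_score)",
            records,
        )

def run_pipeline(ftp_host: str, remote_path: str, conn: sqlite3.Connection) -> None:
    """Step 5: the unit a time- or event-based trigger would invoke."""
    path = extract(ftp_host, remote_path, "vendor.csv")
    with open(path, newline="") as f:
        records = [transform(r) for r in csv.DictReader(f) if is_valid(r)]
    load(records, conn)
```

A scheduler (e.g. cron) or a file-arrival trigger would simply call run_pipeline(); a dedicated integration tool wires up the same steps through configuration instead of code.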

Why Build a Consolidated Database for Vendors?

Once the ETL pipeline runs, you end up with a consolidated database containing complete vendor information. The main benefit of a unified database is that the vendor information in it has already been validated and filtered.

Most businesses have a strict process for screening vendors that follows a set of predefined rules. For example, you may want to automatically reject vendors with a poor credit history. With manual data integration, you would need to write code to perform this filtering. Automated data integration allows you to apply pre-built filters directly within your ETL pipeline to flag or remove vendors with a credit score below a specified threshold.
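In an automated tool, a rule like this is configured rather than coded, but the underlying logic looks roughly like the following sketch; the threshold value and field names are assumptions.

```python
# Hypothetical screening rule: flag vendors whose credit score falls
# below a business-defined threshold.
CREDIT_SCORE_THRESHOLD = 650  # assumed policy value

def screen(vendor: dict) -> dict:
    """Attach an approval flag rather than silently dropping the record."""
    return {**vendor, "approved": vendor.get("credit_score", 0) >= CREDIT_SCORE_THRESHOLD}

vendors = [
    {"vendor_name": "Acme Ltd", "credit_score": 710},
    {"vendor_name": "Globex", "credit_score": 580},
]
print([screen(v) for v in vendors])  # Acme Ltd approved, Globex flagged
```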

This is just one example; you can perform a wide range of tasks at this stage of your ETL pipeline, including vendor scoring (calculated from multiple fields in your data), filtering (based on rules applied to your data), and data aggregation (to add measures to your data), to build a robust vendor database for decision-making and subsequent processes.
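As a sketch of what scoring and aggregation at this stage might look like (the fields, weights, and normalization are all illustrative assumptions):

```python
from collections import defaultdict

# Hypothetical weights for a composite vendor score; real weights would
# come from the business's screening policy.
WEIGHTS = {"credit": 0.5, "on_time": 0.3, "defects": -0.2}

def vendor_score(v: dict) -> float:
    """Combine several fields into a single score on a roughly 0..1 scale."""
    return round(
        WEIGHTS["credit"] * (v["credit_score"] / 850)  # 850 assumed as max score
        + WEIGHTS["on_time"] * v["on_time_rate"]       # already a 0..1 rate
        + WEIGHTS["defects"] * v["defect_rate"],       # penalty term
        3,
    )

def average_score_by_category(vendors: list) -> dict:
    """Aggregation: one measure per vendor category for reporting."""
    buckets = defaultdict(list)
    for v in vendors:
        buckets[v["category"]].append(vendor_score(v))
    return {cat: round(sum(s) / len(s), 3) for cat, s in buckets.items()}
```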


Automated vendor onboarding offers cost and time savings to your organization. Enterprise-grade data integration tools ensure a seamless business-to-vendor data exchange without the need to rework and upgrade your ETL pipelines for every new vendor.

Interview: Does Business Intelligence benefit from Cloud Data Warehousing?

Interview with Ross Perez, Senior Director, Marketing EMEA at Snowflake



Ross Perez is the Senior Director, Marketing EMEA at Snowflake. He leads the Snowflake marketing team in EMEA and is charged with starting the discussion about analytics, data, and cloud data warehousing across EMEA. Before Snowflake, Ross was a product marketer at Tableau Software where he founded the Iron Viz Championship, the world’s largest and longest running data visualization competition.

Data Science Blog: Ross, Business Intelligence (BI) is not really a new trend. In 2019/2020, making data available for the whole company should not be a big thing anymore. Would you agree?

BI is definitely an old trend; reporting has been around for 50 years. People are accustomed to seeing statistics and data for the company at large, and even for their business units. However, using BI to deliver analytics to everyone in the organization and encouraging them to make decisions based on data for their specific area is relatively new. In a lot of the companies Snowflake works with, there is a huge new group of people who have recently received access to self-service BI and visualization tools like Tableau, Looker and Sigma, and they are just starting to find answers to their questions.

Data Science Blog: Up until today, BI was just about delivering dashboards for reporting to the business. The data warehouse (DWH) was something like the backend. Today we have increased demand for data transparency. How should companies deal with this demand?

Because more people in more departments want access to data more frequently, the demand on backend systems like the data warehouse is skyrocketing. In many cases, companies have data warehouses that weren't built to cope with this concurrent demand, which means the experience is slow. End users have to wait a long time for their reports. That is where Snowflake comes in: since we can use the power of the cloud to spin up resources on demand, we can serve any number of concurrent users. Snowflake can also house unlimited amounts of data, in both structured and semi-structured formats.
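As a rough illustration of what spinning up compute on demand looks like in practice, here is a sketch using Snowflake's Python connector; the connection parameters, warehouse name, and sizes are placeholders rather than anything from the interview.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials for illustration only.
conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# A virtual warehouse that suspends itself when idle and resumes on demand,
# so compute is only billed while queries actually run.
cur.execute(
    "CREATE WAREHOUSE IF NOT EXISTS reporting_wh "
    "WITH WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE"
)

# Scale up for a burst of concurrent report users, then back down later.
cur.execute("ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'LARGE'")
```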

Data Science Blog: Would you say the DWH is the key driver for becoming a data-driven organization? What else should be considered here?

Absolutely. Without having all of your data in a single, highly elastic, and flexible data warehouse, it can be a huge challenge to actually deliver insight to people in the organization.

Data Science Blog: So much for the theory, now let’s talk about specific use cases. In general, it matters a lot whether you are storing and analyzing e.g. financial data or machine data. What do we have to consider for both purposes?

Financial data and machine data do look very different, and often come in different formats. For instance, financial data is often in a standard relational format. Data like this needs to be easily queryable with standard SQL, something that many Hadoop and NoSQL tools were unable to provide. Luckily, Snowflake is an ANSI-standard SQL data warehouse, so it can be used with this type of data quite seamlessly.

On the other hand, machine data is often semi-structured or even completely unstructured. This type of data is becoming significantly more common with the rise of IoT, but traditional data warehouses were very bad at dealing with it since they were optimized for relational data. Semi-structured data like JSON, Avro, XML, ORC and Parquet can be loaded into Snowflake for analysis quite seamlessly in its native format. This is important, because you don't want to have to flatten the data to get any use from it.
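For illustration, here is a sketch of querying JSON in place via the same Python connector; the table and field names are invented for the example.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
cur = conn.cursor()

# Semi-structured data lands in a VARIANT column in its native form.
cur.execute("CREATE TABLE IF NOT EXISTS machine_events (payload VARIANT)")
cur.execute(
    "INSERT INTO machine_events "
    "SELECT PARSE_JSON('{\"device\": {\"id\": \"m-42\", \"temp_c\": 71.5}}')"
)

# No flattening needed: path expressions drill into the JSON directly.
cur.execute(
    "SELECT payload:device:id::STRING AS device_id, "
    "payload:device:temp_c::FLOAT AS temp_c "
    "FROM machine_events"
)
print(cur.fetchall())  # e.g. [('m-42', 71.5)]
```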

Both types of data are important, and Snowflake is really the first data warehouse that can work with them both seamlessly.

Data Science Blog: Back to the common business use case: creating sales or purchase reports for the business managers, based on data from ERP systems such as Microsoft or SAP. Which architecture for the DWH could be the right one? How many and which database layers do you see as necessary?

The type of report largely does not matter, because in all cases you want a data warehouse that can support all of your data and serve all of your users. Ideally, you also want to be able to turn it off and on depending on demand. That means that you need a cloud-based architecture… and specifically Snowflake’s innovative architecture that separates storage and compute, making it possible to pay for exactly what you use.

Data Science Blog: Where would you implement the main part of the business logic for the report? In the DWH or in the reporting tool? Does it matter which reporting tool we choose?

The great thing is that you can choose either. Snowflake, as an ANSI-standard SQL data warehouse, can support a high degree of data modeling and business logic. But you can also utilize partners like Looker and Sigma, who specialize in data modeling for BI. We think it's best that the customer chooses what is right for them.

Data Science Blog: Snowflake enables organizations to store and manage their data in the cloud. Does it mean companies lose control over their storage and data management?

Customers have complete control over their data, and in fact Snowflake cannot see, alter or change any aspect of their data. The benefit of a cloud solution is that customers don’t have to manage the infrastructure or the tuning – they decide how they want to store and analyze their data and Snowflake takes care of the rest.

Data Science Blog: How big is the effort for small and medium-sized companies to set up a DWH in the cloud? Does this have to be an expensive long-term project in every case?

The nice thing about Snowflake is that you can get started with a free trial in a few minutes. Now, moving from a traditional data warehouse to Snowflake can take some time, depending on the legacy technology that you are using. But Snowflake itself is quite easy to set up and largely compatible with existing tools, making it relatively easy to move over.

New Sponsor: Snowflake

Dear readers,

We have good news again: we now welcome Snowflake as our new Data Science Blog sponsor! With that, we are fully booked on sponsorships for the moment. Snowflake provides data warehousing for the cloud and has a unique data, access, and feature model. We are looking forward to editorial contributions by Snowflake.

Snowflake is the only data warehouse built for the cloud. Snowflake delivers the performance, concurrency and simplicity needed to store and analyze all data available to an organization in one location. Snowflake’s technology combines the power of data warehousing, the flexibility of big data platforms, the elasticity of the cloud, and live data sharing at a fraction of the cost of traditional solutions. Snowflake: Your data, no limits. Find out more at

Furthermore, Snowflake will also sponsor our Data Leader Days 2018 in November in Berlin!