Written Articles on Big Data Analytics

How to Successfully Perform a Data Quality Assessment (DQA)

People generate 2.5 quintillion bytes of data every single day. That’s 1.7 megabytes generated every second for each of the 7.8 billion residents of Earth. A lot of that information is junk that can easily be discarded, but just as much can prove vital. How do you tell the difference?

According to industry experts, poor quality information costs the U.S. economy upwards of $3.1 trillion annually. That is why data quality assessments (DQAs) are so important.

A Brief Explanation of Data Quality Assessments

With companies around the globe generating massive amounts of data every second of the day, it’s essential to have tools that help you sort through it all. Data quality assessments are usually carried out by software programmed with a predefined set of rules. They can compare the incoming information to those guidelines and provide reports.

This is a simplified explanation, but the goal of these DQA programs is to separate the wheat from the chaff. They eliminate any unnecessary or redundant data, leaving only the highest quality information.

The biggest challenge here is figuring out who will determine what counts as quality. Data quality depends on three things: the individual or team that creates the requirements, how well they complete that task, and how flexibly the program meets those requirements.

How to Perform a DQA

Once you have your DQA program in place, performing an assessment is relatively simple. The challenge lies in establishing the program. The first step is to determine the scope of the data you’re trying to assess. The details of this step will depend on your system and the amount of information you have to sort through. You can set up a program to assess a single data point at a time, but if your system generates a lot of info, this isn’t efficient.

Define your scope carefully to ensure the program does the job correctly without wasting time sorting through bytes one at a time.

Now that you have a framework to work from, you can move on to monitoring and cleansing data. Analyze your information against the scope and details you’ve established. Validate each point against your existing statistical measures, and determine its quality.

Next, ensure all the data requirements are available and correctly formatted. You may wish to provide training for any new team members entering information to ensure it’s in a format that the DQA system can understand.

Finally, make it a point to verify that your data is consistent with the rules you’ve established, as well as your business goals. DQAs aren’t a one-and-done kind of program. Monitoring needs to be an ongoing process to prevent things from falling through the cracks and to keep bad information from potentially costing you millions of dollars.
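
To make the rule-based idea above concrete, here is a minimal, hypothetical sketch of how a few quality rules could be checked with pandas. The column names, rules and file name are illustrative assumptions, not part of any particular DQA product.

```python
import pandas as pd

# Hypothetical rules: required columns, unique IDs, valid dates.
REQUIRED_COLUMNS = ["customer_id", "email", "signup_date"]

def run_dqa(df: pd.DataFrame) -> dict:
    """Return a small report of rule violations for the given DataFrame."""
    report = {}
    # Completeness: required fields must exist and be non-null.
    report["missing_columns"] = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    present = [c for c in REQUIRED_COLUMNS if c in df.columns]
    report["null_counts"] = df[present].isna().sum().to_dict()
    # Uniqueness: no duplicate customer IDs.
    if "customer_id" in df.columns:
        report["duplicate_ids"] = int(df["customer_id"].duplicated().sum())
    # Validity: signup dates must parse and must not lie in the future.
    if "signup_date" in df.columns:
        dates = pd.to_datetime(df["signup_date"], errors="coerce")
        report["unparseable_dates"] = int(dates.isna().sum())
        report["future_dates"] = int((dates > pd.Timestamp.now()).sum())
    return report

if __name__ == "__main__":
    frame = pd.read_csv("customers.csv")  # assumed input file
    print(run_dqa(frame))
```

A real DQA program would run checks like these on a schedule and track how the violation counts change over time.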

Benefits of DQA

A data quality assessment has various benefits, both on the commercial and consumer side of your business. Accuracy is essential. It’s valuable for marketers who purchase demographic data, with 84% stating it plays a large role in their purchase decisions. Targeted marketing is one of the most popular forms of advertisement, and while it’s not always practical, its efficacy drops even further if the demographic data is incorrect.

High-quality data should be accurate, complete, relevant, valid, timely and consistent. Maintaining frequent and comprehensive quality assessments can help you do that and more. The goal of collecting this information is to produce results. The higher quality your data is, the easier and faster your system will work, with better results than you might manage without DQAs.

Data Quality Assessment vs. Data Profiling

When talking about data quality, you’ll often see the terms assessment and profiling used interchangeably. While the concepts are similar, they are not the same. Data profiling is a valuable tool for setting up your quality assessment program, giving you the information you’ll need to build your program in the future. It isn’t a step you can perform independently and expect to get the same results.

If you don’t already have a DQA in place, start with profiling to create the foundation for a comprehensive data quality assessment program.

The Growing Importance of Data Quality

Data quality has always been important. However, as the population generates more information every year, learning how to separate value from junk is more critical than ever.

Coffee Shop Location Predictor

As part of this article, we will explore the main steps involved in predicting the best location for a coffee shop in Vancouver. We will also take into consideration that the coffee shop should be near a transit station and have no Starbucks near it. While we are at it, let us also add an extra requirement: the crime rate in the area should be low.

Introduction

In this article, we will highlight the main steps involved to predict a location for a coffee shop in Vancouver. We also want to make sure that the coffee shop is near a transit station, and has no Starbucks near it. As an added feature, we will make sure that the crime concentration in the area is low, and the entire program should be implemented in Python. So let’s walk through the steps.

Steps Required

  • Get crime history for the last two years
  • Get locations of all transit stations and Starbucks in Vancouver
  • Check all the transit stations that do not have any Starbucks near them
  • Get all the data regarding crimes near the filtered transit stations
  • Create a grid of all possible coordinates around the transit station
  • Check crime around each created coordinate and display the top 5 locations.

Gathering Data

This covers the first two steps required to get data from the internet, both manually and automatically.

Getting all Crime History

We can get crime history for the past 14 years in Vancouver from here. This data is in a raw crime.csv format, so we have to process it and filter out useless data. We then write this processed information to the crime_processed.csv file.

Note: There are 530,653 records of crime in this file

In this program, we will just use the type and coordinates of the crimes. There are many crime types, but we have classified them into three major categories, namely:

Theft (red), Break and Enter (orange) and Mischief (green)

All these crimes can be plotted on a graph, as displayed below.

This may seem very congested and full, so let’s see a close-up image for future reference.
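
As a rough illustration of the processing described above, the sketch below loads the processed crime file, maps the many raw crime types into the three broad categories, and scatter-plots them with the colors mentioned. The column names (TYPE, longitude, latitude) and the type strings are assumptions about the dataset, so they may need adjusting.

```python
import pandas as pd
import matplotlib.pyplot as plt

crimes = pd.read_csv("crime_processed.csv")  # produced in the step above

def categorize(crime_type: str) -> str:
    """Map a raw crime type string to one of the three broad categories."""
    t = crime_type.lower()
    if "theft" in t:
        return "Theft"
    if "break" in t:
        return "Break and Enter"
    return "Mischief"

crimes["category"] = crimes["TYPE"].apply(categorize)  # 'TYPE' column assumed

colors = {"Theft": "red", "Break and Enter": "orange", "Mischief": "green"}
for category, group in crimes.groupby("category"):
    plt.scatter(group["longitude"], group["latitude"],  # coordinate columns assumed
                s=1, c=colors[category], label=category)
plt.legend()
plt.title("Crime in Vancouver by category")
plt.show()
```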

Getting Locations of all Rapid Transit Stations

We can get the coordinates of all transit stations in Vancouver from here. This dataset has the coordinates of the rapid transit stations on the three transit lines in Vancouver. There are a total of 23 of them, which we can then use for further processing.

Getting Locations of all Starbucks

The Starbucks data is available here; we can scrape it easily and get the locations of all the Starbucks in Vancouver. We just need the Starbucks locations that are near transit stations, so we’ll filter out the rest. There are a total of 24 Starbucks in Vancouver, and 10 of them are near transit stations.

Note: Other than the coordinates of Transit Stations and Starbucks, we also need coordinates and type of the crime.

Transit Stations with no Starbucks

As we have all the data required, we can now move to the next step: finding the transit station locations that have no Starbucks near them. For that, we can create an area of a particular radius around each transit station and then check whether each Starbucks location falls within that area or not.

If none of the Starbucks are within a particular transit station’s area, we can append that station to a list. At the end, we have a list of all transit locations with no Starbucks near them. There are a total of 6 transit stations with no Starbucks near them.
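
A minimal sketch of that check, assuming the stations and Starbucks locations are lists of (longitude, latitude) tuples and that "near" means a 500 m radius, might look like this; the radius and the variable names are illustrative only.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance between two (lon, lat) points, in meters."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

RADIUS_M = 500  # assumed "near" radius around a transit station

def stations_without_starbucks(stations, starbucks):
    """Return the stations that have no Starbucks within RADIUS_M."""
    located = []
    for s_lon, s_lat in stations:
        near = any(haversine_m(s_lon, s_lat, b_lon, b_lat) <= RADIUS_M
                   for b_lon, b_lat in starbucks)
        if not near:
            located.append((s_lon, s_lat))
    return located
```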

Crime near Transit Stations

Now let’s filter the crime records and keep just what we are interested in: the crime near transit stations. For that, we will plot an area of a specific radius around each of them to see the crimes. This leaves more than 110,000 crime records.

Crime near located Transit Stations

Now we have all the transit stations that don’t have any Starbucks near them, as well as the crime near all transit stations. Let’s use this information to get the crime near the located transit stations. This leaves about 44,000 crime records.

This may seem correct at first glance, but the points overlap because there are so many of them, so we can create separate lists of crimes based on their types.

Theft

Break and Enter

Mischief

Generating all possible coordinates

Now that we finally have all the prerequisites, let’s get to the main task at hand: predicting the best coordinate for the coffee shop.

There may be many approaches to this problem, but the one used in this program is to create a grid of all possible locations (coordinates) within a 1 km radius around each located transit station.

Initially I generated one coordinate for every meter, which resulted in 1,000,000 coordinates per square kilometer. This is a huge number, and for the 6 located transit stations it becomes 6 million. It may not seem like much at first glance because computers can handle such data in a few seconds.

But for location prediction, we need to compare each coordinate with the crime coordinates. The algorithm has to check ~7,000 thefts, ~19,000 break-ins, and ~17,000 mischiefs around each generated coordinate, so the program would have to run an estimated 432.4 billion comparisons. This sort of execution takes many hours on normal computers (sometimes days).

The solution is to create a coordinate for every 10 m instead, which results in about 10,000 coordinates per square kilometer. For the above-mentioned number of crimes, the estimated number of comparisons is still several billion. That significantly reduces the time, but it is still a lot.

To control this, we can remove the duplicate crime coordinates and those that are within about 1 m of each other. Doing so, we are left with just 816 thefts, 2,654 break-ins, and 8,234 mischiefs to check around each generated coordinate.
The precision will not be affected much, but the time and computational resources required will be reduced a lot.
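
The grid generation and the thinning of crime points could be sketched roughly as follows. One degree of latitude is about 111 km, so a 10 m step is roughly 0.0001 degrees; this conversion and the rounding-based deduplication are deliberate simplifications assumed for illustration.

```python
import numpy as np

STEP_DEG = 0.0001    # ~10 m expressed in degrees (rough approximation)
RADIUS_DEG = 0.009   # ~1 km around each located transit station

def grid_around(lon, lat):
    """Return candidate (lon, lat) coordinates on a ~10 m grid within ~1 km."""
    lons = np.arange(lon - RADIUS_DEG, lon + RADIUS_DEG, STEP_DEG)
    lats = np.arange(lat - RADIUS_DEG, lat + RADIUS_DEG, STEP_DEG)
    return [(x, y) for x in lons for y in lats]

def thin_points(points, decimals=5):
    """Drop duplicates and points within ~1 m of each other by rounding to 5 decimals."""
    return list({(round(lon, decimals), round(lat, decimals)) for lon, lat in points})
```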

 

Checking Crime near Generated coordinates

Now that we have all the locations, we will process them and check each coordinate against some constraints, which are, respectively:

  1. Filter out coordinates with a theft within 1 km
    We get 122,000 coordinates with no thefts (below, merged 1000 to 1)
  2. Filter out coordinates with a break-in within 200 m
    We get 8,000 coordinates with no break-ins (below, merged 1000 to 1)
  3. Filter out coordinates with mischief within 200 m
    We get 6,000 coordinates with no mischief (below, merged 1000 to 1)

Now that we have 6 coordinates of the best locations that have passed through all the constraints, we will order them. To order them, we will check their distance from the nearest transit location. The nearest will be at the top of the list as the best possible location, then the second, and so on. The generated list is:

    1. -123.0419406741792, 49.24824259252004
    2. -123.05887151659479, 49.24327221040713
    3. -123.05287151659476, 49.24327221040713
    4. -123.04994067417924, 49.239242592520064
    5. -123.0419406741792, 49.239242592520064
    6. -123.0409406741792, 49.239242592520064
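
Putting the pieces together, the constraint checks and the final ordering could look roughly like the sketch below. It reuses the hypothetical haversine_m helper from the earlier snippet, and the candidate, crime and station lists are assumed to be lists of (longitude, latitude) tuples.

```python
def passes_constraints(candidate, thefts, break_ins, mischiefs):
    """True if there is no theft within 1 km and no break-in or mischief within 200 m."""
    lon, lat = candidate
    if any(haversine_m(lon, lat, x, y) <= 1000 for x, y in thefts):
        return False
    if any(haversine_m(lon, lat, x, y) <= 200 for x, y in break_ins):
        return False
    if any(haversine_m(lon, lat, x, y) <= 200 for x, y in mischiefs):
        return False
    return True

def rank_candidates(candidates, stations, thefts, break_ins, mischiefs):
    """Keep candidates passing all constraints, ordered by distance to the nearest station."""
    survivors = [c for c in candidates
                 if passes_constraints(c, thefts, break_ins, mischiefs)]
    return sorted(survivors,
                  key=lambda c: min(haversine_m(c[0], c[1], s[0], s[1])
                                    for s in stations))
```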

How can MindTrades help?

MindTrades Consulting Services, a leading marketing agency, provides in-depth analysis and insights for the global IT sector, including leading data integration brands such as Diyotta. From cloud migration, big data, digital transformation, agile delivery, and cyber security to analytics, MindTrades provides published breakthrough ideas and prompt content delivery. For more information, refer to mindtrades.com.

Code

https://github.com/Mindtrades-Consulting/Coffee-Shop-Location-Predictor

 

What Is Data Lake Architecture?

The volume of information produced by everyone in the world is growing exponentially. To put it in perspective, it’s estimated that by 2023 the big data analytics market will reach $103 billion.

Finding viable solutions for storing big data is a challenge. It’s no easy task to hold enormous amounts of information, clean it and transform it into understandable subsets — it’s best to take one step at a time.

Some of the reasons why companies access their big data are to:

  • Improve their consumer experience
  • Draw conclusions and make data-driven decisions
  • Identify potential problems
  • Create innovative products

There are ways to help define big data. Combining its characteristics with storage management methods helps experts make their clients’ information digestible and understandable. Cue data lakes, which are repositories for big data in its native form.

Think of an actual lake with multiple water sources around the perimeter flowing into it. Picture these as three types of data: structured, semi-structured and unstructured. All this information can remain in a data lake and be accessed in its raw form at any time, making it an attractive storage method.

Here’s how data lakes are created, some of their components and how to avoid common pitfalls.

Creating a Data Lake

One benefit of creating and implementing a data lake is that structuring becomes much more manageable.  Pulling necessary information from a lake allows analysts to compare and contrast data and communicate any connections between datasets to their client.

There are four steps to follow when setting up a data lake:

  1. Choosing a software solution: Microsoft, Amazon and Google are cloud vendors that allow developers to create data lakes without using servers.
  2. Identifying where data is sourced: Where is your information coming from? Once sources are identified, determine how your data will be cleaned or transformed.
  3. Defining process and automation: It’s vital to outline how information should be processed once the data lake ingests it. This creates consistency for businesses.
  4. Establishing retrieval governance: Choosing who has access to what types of information is crucial for companies with multiple locations and departments. It helps with overall organization. For this reason, data lakes are primarily accessed by data scientists.

The next step would be to determine the extract, transform and load (ETL) process. ETL takes data from its sources, converts it into a consistent, usable structure and loads it into a destination system, giving the information context for the business. When information from a data lake is sent to a warehouse this way, it can be analyzed.

Components of a Data Lake

Here is what happens to information once a data lake is created:

  • Collection: Data comes in from various sources.
  • Ingestion: Data is processed using management software.
  • Blending: Data is combined from multiple sources.
  • Transformation: Data is analyzed and made sense of.
  • Publication: Data can be used to drive business decisions.

There are other aspects of a data lake to keep in mind. These are the critical components that help provide business solutions:

  • Security: Data lakes require security to protect information — they do not have built-in safety measures.
  • Governance: Determine who can check on the quality of data and perform measurements.
  • Metadata: This provides information about other data to improve understanding.
  • Stewardship: Choose one or more employees to take on the responsibility of managing data.
  • Monitoring: Employ other software to perform the ETL process.

Big data lends itself to incorporating multiple processes to make it usable for companies. The volume of information one company produces is massive — to manage it, experts need to consider these components and steps when building a data lake.

What to Avoid When Using Data Lakes

The last thing people want for their data lake is to see it turn into a swamp. When big data is processed incorrectly, its value decreases, making it useless to the business sourcing it.

The first step in avoiding a common pitfall is to consider the sustainability of the data lake. Planning processes are necessary to ensure it’s secure, and governing and regulating incoming information will allow for long-term use.

A lack of security is another problem that can arise in data lakes, so safety measures must be implemented. Because enterprises build data lakes for different purposes, it’s easy for information to become disorganized and vulnerable to hacking. With security, the likelihood of data breaches decreases, and the quality of data remains high.

The most important thing to remember about data lakes is the planning stage. Without proper preparation, they tend to be overwhelming due to their size and complexity. Taking the time and care to establish the processes ahead of time is vital.

Using Data Lake Architecture for Business

Data lakes store massive amounts of information to be used later on to create subsets, analyze metadata and more. Their advantages allow businesses to be flexible, save money and have access to raw information at all times.

How Microsoft Azure Is Impacting Financial Companies

Microsoft Azure has taken a large chunk of the cloud marketplace, transforming companies with the speed and security of the cloud. Microsoft has over the years used Azure to cushion companies against risk, deal with fraud and differentiate their customer experience. 

With Microsoft Cloud App Security, customers experience 75% automatic threat elimination because of increased visibility and automated threat protection. With all these and more amazing benefits of using Azure, its market share is bound to increase even more over the coming years.

https://www.flickr.com/photos/91869083@N05/8493934839/in/photolist-dWzCUp-efhrzk-29k3oWh-9zALPj-9zALPh-9aXgpG-91z6Eo-6pABZ8-2htjpWP-Wrr2UG-aNxVLK-4z3omV-2kEyM6k-9GvMhf-Rf9aM7-4z7CQJ-aS8oqx-ekXUoo-9aU3wz-9aXjnw-aS8HTZ-LPgq61-2kjSEYf-2hamKDd-2h6JfeX-2h7gxoF-Fx6eAM-pQ6Ken-fbNckF-2iMRZSS-2hTUA6v-b8ayve-b8awer-dZwwJ7-2i3mmqV-e1dGQz-2dZwNg6-b8aoSH-b8arkc-6ztgDn-b8asCZ-efwZLM-b8atnM-b8attr-2kGQugq-2iowpX5-6zbcAC-dAQCVY-b8aoq8-517Jxq

Image Source

Financial companies have not been left behind by the Azure bandwagon. The financial industry is using Microsoft Azure to enhance its core functions: investing money by making informed decisions, and minimizing risk while maximizing returns.

Azure facilitates these core functions by helping with the storage of huge amounts of data (some dating back decades), data retrieval and data security.

It also helps financial companies to keep up with regulatory compliance.

Microsoft Azure is not the only cloud services provider. But here’s why it is the most outstanding when it comes to helping financial companies achieve their business goals.

Azure Offers Hybrid and Multi-Cloud Computing for Financial Companies

The financial services industry is extremely dynamic. Organizations offering financial services have to constantly test the market and come up with new and innovative products and services. 

They are also often under pressure to extend their services across borders. Remember they have to do all of this while at the same time managing their existing customers, containing their risk, and dealing with fraud.

Financial regulations also keep changing. As financial companies increasingly embrace new technology for their services, including intelligent cloud computing, they have to comply with industry regulations. They cannot afford to leave loopholes as they take on their journey with the cloud.

The financial services industry is highly competitive and keeps up with modern technology. These companies have had to adopt dynamic hybrid, multi-cloud, and public cloud strategies to keep up with the trend.

This is how a hybrid cloud model works: it enables existing on-premises applications to be extended through a connection to the public cloud.

This allows financial companies to enjoy the speed, elasticity, and scale of the public cloud without necessarily having to remodel their entire applications. These organizations are afforded the flexibility of deciding which parts of their applications remain in an existing data center and which reside in the cloud.

Cloud computing with Azure allows financial organizations to operate more efficiently by providing end-to-end protection to information, allowing the digitization of financial services, and providing data security. 

Data security is particularly important to financial firms because they are often targeted by fraudsters and cyber threats. They, therefore, need to protect crucial information which they achieve by authenticating their data centers using Azure.

Here’s why financial companies cannot think of doing without Azure’s hybrid cloud computing even for just a day.

https://unsplash.com/collections/28744506/work?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

Photo by Windows on Unsplash

  • The ability to expand their geographic reach

Azure enables financial companies to establish data centers in new locations to meet globally growing demand. This allows them to open and explore new markets. They can then use Azure DevOps pipelines to maintain their data factories and keep everything consistent.

  • Consistent Infrastructure management

The hybrid cloud model promotes a consistent approach to infrastructure management across all locations, whether it is on-premises, public cloud, or the edge.

  • Increased Elasticity

Financial firms and banks utilizing Azure services can respond with great agility to transactional changes or changes in demand by provisioning or de-provisioning as the situation at hand demands. 

In cases where the organization requires high computation such as complex risk modeling, a hybrid strategy allows it to expand its capacity beyond its data center without overwhelming its servers.

  • Flexibility

A hybrid strategy allows financial organizations to choose cloud services that fall within their budget, match their needs, and offer the features they require.

  • Data security and enhanced regulatory compliance

Hybrid and multi-cloud strategies are a superb alternative for strictly on-premises strategies when one considers resiliency, data portability, and data security.

  • Reduces CapEx Expenses

Managing on-premises infrastructure is expensive. Financial companies utilizing Azure do not need to spend large amounts of money setting them up and managing them. 

With the increased elasticity of the hybrid system, financial organizations only pay for the resources they actually use, at a relatively lower cost.

Financial Organizations Have Access to an Analytics Platform

As we mentioned earlier, financial companies have the core function of making financial decisions in order to invest money and gain maximum returns at the least possible risk. 

Having been entrusted with their customers’ assets, the best way to ensure success in making profits is by using an analytics system.

Getting the form of analytics that helps with solving this investment problem is the kind of headache that does not go away by taking a tablet of ibuprofen and a glass of water: integrating data is not an easy task. Besides, building a custom analytics solution from scratch is quite expensive.

Luckily for financial companies, Azure has a dedicated analytics platform for the financial services industry. It is custom-made just for these types of organizations. 

Their system is quite intuitive and easy to use. Companies not only get to save the resources they would have otherwise used to build a custom solution, but they get to learn about their investment risks and get instant results at cloud speed. 

They can mitigate against negatively impactful market occurrences and gain profits even when operating in adverse market conditions.

https://unsplash.com/collections/28744506/work?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

Image by Headway on Unsplash

Financial Companies Get Advanced Data Management

Good analytics goes hand-in-hand with a great data management system. Financial companies need to have good data, create an organized data warehouse, and have a secure data storage system.

In addition to storing your data, Microsoft Azure ensures your storage can be optimized to support advanced applications, for example, machine learning and forecasting. 

Azure even allows you to compress and store documents for long periods of time when you write the data to Microsoft Azure Blob Storage. These documents can be retrieved anytime when the need arises for auditors’, regulators’, and lawyers’ perusal. 

Conclusion

Microsoft has over time managed to gain the trust of many industries, the financial services industry inclusive. Using its cloud computing giant, Azure, it has empowered these companies to carry out their functions efficiently and at the lowest cost and risk possible.

Azure’s hybrid cloud computing strategy has made financial operations flexible, opened doors for financial companies to establish their services in multiple locations, and provided them with consistent infrastructure management, among many other benefits.

With their futuristic model and commitment to growth, it’s only prudent to assume that Microsoft Azure will continue carrying the mantle as the best cloud services provider in the financial services industry.

Rethinking linear algebra part two: ellipsoids in data science

1 Our expedition of eigenvectors still continues

This article is still going to be about eigenvectors and PCA, and it still will not cover LDA (linear discriminant analysis). Here I would like to help you form more organic links between data science ideas and eigenvectors.

In the second article, we have covered the following points:

  • You can visualize linear transformations with matrices by calculating displacement vectors, and they usually look like vectors swirling.
  • Diagonalization is finding a direction in which the displacement vectors do not swirl, and that is equal to finding a new axis/basis where you can describe the linear transformation more straightforwardly. But we have to consider the diagonalizability of the matrices.
  • In linear dimension reduction such as PCA or LDA, we mainly use types of matrices called positive definite or positive semidefinite matrices.

In the last article we have seen the following points:

  • PCA is an algorithm of calculating orthogonal axes along which data “swell” the most.
  • PCA is equivalent to calculating a new orthonormal basis for the data where the covariance between components is zero.
  • You can reduce the dimension of the data in the new coordinate system by ignoring the axes corresponding to small eigenvalues.
  • Covariance matrices enable linear transformations that combine rotation with expansion and contraction of vectors.

I emphasized that the axes are more important than the surface of the high dimensional ellipsoids, but in this article let’s focus more on the surface of ellipsoids, or I would rather say general quadratic curves. After also seeing how to draw ellipsoids on data, you would see the following points about PCA or eigenvectors.

  • Covariance matrices are real symmetric matrices, and they are also positive semidefinite. That means you can always diagonalize covariance matrices, and their eigenvalues are all equal to or greater than 0.
  • PCA is equivalent to finding the axes of quadratic curves along which the gradients are biggest. The values of the quadratic curves increase the most in those directions, and that means those directions describe a great deal of information about the data distribution.
  • Intuitively dimension reduction by PCA is equal to fitting a high dimensional ellipsoid on data and cutting off the axes corresponding to small eigenvalues.

Even if you already understand PCA to some extent, I hope this article provides you with deeper insight into PCA, and at least after reading this article, I think you will be more or less able to visually control eigenvectors and ellipsoids with the Numpy and Matplotlib libraries.

*Let me first introduce some mathematical facts and how I denote them throughout this article in advance. If you are allergic to mathematics, take it easy or please go back to my former articles.

  • Any quadratic curve can be denoted as \boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0, where \boldsymbol{x}\in \mathbb{R}^D, A \in \mathbb{R}^{D\times D}, \boldsymbol{b}\in \mathbb{R}^D, s\in \mathbb{R}.
  • When I want to clarify dimensions of variables of quadratic curves, I denote parameters as A_D, b_D.
  • If a matrix A is a real symmetric matrix, there exists a rotation matrix U such that U^T A U = \Lambda, where \Lambda = diag(\lambda_1, \dots, \lambda_D) and U = (\boldsymbol{u}_1, \dots , \boldsymbol{u}_D). \boldsymbol{u}_1, \dots , \boldsymbol{u}_D are eigenvectors corresponding to \lambda_1, \dots, \lambda_D respectively.
  • PCA corresponds to a case of diagonalizing A where A is a covariance matrix of certain data. When I want to clarify that A is a covariance matrix, I denote it as A=\Sigma.
  • Importantly, covariance matrices \Sigma are positive semidefinite and real symmetric, which means you can always diagonalize \Sigma, and none of their eigenvalues can be lower than 0.

*In the last article, I denoted the covariance of data as S, based on Pattern Recognition and Machine Learning by C. M. Bishop.

*Sooner or later you are going to see that I am explaining basically the same ideas from different points of view, using the topic of PCA. However I believe they are all important when you learn linear algebra for data science or machine learning. Even if you have not learned linear algebra, or if you have to teach it, I recommend you first review the idea of diagonalization, as in the second article. And you should be conscious that, in the context of machine learning or data science, only a very limited type of matrices is important, which I have been explaining throughout this article.

2 Rotation or projection?

In this section I am going to talk about basic stuff found in most textbooks on linear algebra. In the last article, I mentioned that if A is a real symmetric matrix, you can diagonalize A with a rotation matrix U = (\boldsymbol{u}_1 \: \cdots \: \boldsymbol{u}_D), such that U^{-1}AU = U^{T}AU =\Lambda, where \Lambda = diag(\lambda_{1}, \dots , \lambda_{D}). I also explained that PCA is a case where A=\Sigma, that is, A is the covariance matrix of certain data. \Sigma is known to be positive semidefinite and real symmetric. Thus you can always diagonalize \Sigma, and none of its eigenvalues can be lower than 0.

I think we first need to clarify the difference between rotation and projection. In order to visualize the ideas, let’s consider a case of D=3. Assume that you have got an orthonormal rotation matrix U = (\boldsymbol{u}_1 \: \boldsymbol{u}_2 \: \boldsymbol{u}_3) which diagonalizes A. In the last article I said diagonalization is equivalent to finding new orthogonal axes formed by eigenvectors, and in the case of this section you got a new orthonormal basis (\boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3), which is shown in red in the figure below. Projecting a point \boldsymbol{x} = (x, y, z) on the new orthonormal basis is simple: you just have to multiply \boldsymbol{x} with U^T. Let U^T \boldsymbol{x} be (x', y', z')^T, and then \left( \begin{array}{c} x' \\ y' \\ z' \end{array} \right) = U^T\boldsymbol{x} = \left( \begin{array}{c} \boldsymbol{u}_1^{T}\boldsymbol{x} \\ \boldsymbol{u}_2^{T}\boldsymbol{x} \\ \boldsymbol{u}_3^{T}\boldsymbol{x} \end{array} \right). You can see x', y', z' are \boldsymbol{x} projected on \boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3 respectively, and the left side of the figure below shows the idea. When you replace the original orthonormal basis (\boldsymbol{e}_1, \boldsymbol{e}_2, \boldsymbol{e}_3) with (\boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3) as in the right side of the figure below, you can comprehend the projection as a rotation from (x, y, z) to (x', y', z') by a rotation matrix U^T.

Next, let’s see what rotation is. In case of rotation, you should imagine that you rotate the point \boldsymbol{x} in the same coordinate system, rather than projecting to other coordinate system. You can rotate \boldsymbol{x} by multiplying it with U. This rotation looks like the figure below.

In the initial position, the edges of the cube are aligned with the three orthogonal black axes (\boldsymbol{e}_1,  \boldsymbol{e}_2 , \boldsymbol{e}_3), with one corner of the cube located at the origin point of those axes. The purple dot denotes the corner of the cube directly opposite the origin corner. The cube is rotated in three dimensions, with the origin corner staying fixed in place. After the rotation with a pivot at the origin, the edges of the cube are now aligned with a new set of orthogonal axes (\boldsymbol{u}_1,  \boldsymbol{u}_2 , \boldsymbol{u}_3), shown in red. You might understand that more clearly with an equation: U\boldsymbol{x} = (\boldsymbol{u}_1 \: \boldsymbol{u}_2 \: \boldsymbol{u}_3) \left( \begin{array}{c} x \\ y \\ z \end{array} \right) = x\boldsymbol{u}_1 + y\boldsymbol{u}_2 + z\boldsymbol{u}_3. In short this rotation means you keep relative position of \boldsymbol{x}, I mean its coordinates (x, y, z), in the new orthonormal basis. In this article, let me call this a “cube rotation.”

The discussion above can be generalized to spaces with dimensions higher than 3. When U \in \mathbb{R}^{D \times D} is an orthonormal matrix and \boldsymbol{x} \in \mathbb{R}^D is a vector, you can project \boldsymbol{x} to \boldsymbol{x}' = U^T \boldsymbol{x} or rotate it to \boldsymbol{x}'' = U \boldsymbol{x}, where \boldsymbol{x}' = (x_{1}', \dots, x_{D}')^T and \boldsymbol{x}'' = (x_{1}'', \dots, x_{D}'')^T. In other words \boldsymbol{x} = U \boldsymbol{x}', which means you can rotate \boldsymbol{x}' back to the original point \boldsymbol{x} with the rotation matrix U.

I think you at least saw that rotation and projection are basically the same, and it is only a matter of how you look at the coordinate systems. But I would say the idea of projection is more important throughout this article.

Let’s consider a function f(\boldsymbol{x}; A) = \boldsymbol{x}^T A \boldsymbol{x} = (\boldsymbol{x}, A \boldsymbol{x}), where A\in \mathbb{R}^{D\times D} is a real symmetric matrix. The distribution of f(\boldsymbol{x}; A) is quadratic curves whose center point covers the origin, and it is known that you can express this distribution in a much simpler way using eigenvectors. When you project this function on eigenvectors of A, that is when you substitute U \boldsymbol{x}' for \boldsymbol{x}, you get f = (\boldsymbol{x}, A \boldsymbol{x}) =(U \boldsymbol{x}', AU \boldsymbol{x}') = (\boldsymbol{x}')^T U^TAU \boldsymbol{x}' = (\boldsymbol{x}')^T \Lambda \boldsymbol{x}' = \lambda_1 ({x'}_1)^2 + \cdots + \lambda_D ({x'}_D)^2. You can always diagonalize real symmetric matrices, so the formula implies that the shapes of quadratic curves largely depend on eigenvectors. We are going to see this in detail in the next section.
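
The following small Numpy sketch illustrates that identity numerically: for a random real symmetric matrix A, the value of \boldsymbol{x}^T A \boldsymbol{x} equals the eigenvalue-weighted sum of squares of the coordinates of \boldsymbol{x} projected on the eigenvectors. The matrix and vector here are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
A = (B + B.T) / 2                      # an arbitrary real symmetric matrix

eigvals, U = np.linalg.eigh(A)         # columns of U are orthonormal eigenvectors
x = rng.normal(size=3)

x_prime = U.T @ x                      # x projected on the eigenvectors
lhs = x @ A @ x                        # x^T A x
rhs = np.sum(eigvals * x_prime ** 2)   # lambda_1 x'_1^2 + ... + lambda_3 x'_3^2

print(np.allclose(lhs, rhs))           # True
```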

*(\boldsymbol{x}, \boldsymbol{y}) denotes an inner product of \boldsymbol{x} and \boldsymbol{y}.

*We are going to see details of the shapes of quadratic “curves” or “functions” in the next section.

To be exact, you cannot naively multiply U or U^T for rotation. Let’s take a part of the data I showed in the last article as an example. In the figure below, I projected the data on the basis (\boldsymbol{u}_1,  \boldsymbol{u}_2 , \boldsymbol{u}_3).

You might have noticed that you cannot do a “cube rotation” in this case. If you make the coordinate system (\boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3) with your left hand, like you might have done in science classes in school to learn Fleming’s rule, you would soon realize that the coordinate systems in the figure above do not match. You need to flip the direction of one axis to match them.

Mathematically, you have to consider the determinant of the rotation matrix U. You can do a “cube rotation” when det(U)=1, and in the case above det(U) was -1, and you needed to flip one axis to make the determinant 1. In the example in the figure below, you can match the basis. This also can be generalized to higher dimensions, but that is also beyond the scope of this article series. If you are really interested, you should prepare some coffee and snacks and textbooks on linear algebra, and some weekends.

When you want to make general ellipsoids in a 3d space on Matplotlib, you can take advantage of rotation matrices. You first make a simple ellipsoid symmetric about the xyz axes using polar coordinates, and then you can rotate the whole ellipsoid with rotation matrices. I made some simple modules for drawing ellipsoids. If you put in a rotation matrix which diagonalizes the covariance matrix of the data and a list of three radii \sqrt{\lambda_1}, \sqrt{\lambda_2}, \sqrt{\lambda_3}, you can rotate the original ellipsoid so that it fits the data well.
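
My original modules are linked at the end of this article; the snippet below is only a condensed sketch of the same idea under simplifying assumptions (an arbitrary example covariance matrix, a fixed mesh resolution): build a unit sphere with polar coordinates, stretch its axes by \sqrt{\lambda_i}, and rotate it with the eigenvectors of the covariance matrix.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older Matplotlib versions)

def plot_cov_ellipsoid(Sigma, center=np.zeros(3), n=40):
    """Plot the ellipsoid x^T Sigma^{-1} x = 1 for a 3x3 covariance matrix Sigma."""
    eigvals, U = np.linalg.eigh(Sigma)   # eigenvalues and orthonormal eigenvectors
    radii = np.sqrt(eigvals)             # the three radii sqrt(lambda_i)

    theta = np.linspace(0, np.pi, n)     # polar coordinates of a unit sphere
    phi = np.linspace(0, 2 * np.pi, n)
    theta, phi = np.meshgrid(theta, phi)
    sphere = np.stack([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])   # shape (3, n, n)

    # Scale along the principal axes, then rotate into the original coordinates.
    points = np.einsum("ij,jkl->ikl", U * radii, sphere) + center[:, None, None]

    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.plot_surface(points[0], points[1], points[2], alpha=0.3)
    plt.show()

plot_cov_ellipsoid(np.array([[3.0, 1.0, 0.5],
                             [1.0, 2.0, 0.3],
                             [0.5, 0.3, 1.0]]))
```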

3 Types of quadratic curves.

*This article might look like a mathematical writing, but I would say this is more about computer science. Please tolerate some inaccuracy in terms of mathematics. I gave priority to visualizing necessary mathematical ideas in my article series. If you are not sure about details, please let me know.

In linear dimension reduction, or at least in this article series, you mainly have to consider ellipsoids. However ellipsoids are just one type of quadratic curves. In the last article, I mentioned that when the center of a D dimensional ellipsoid is the origin point of a normal coordinate system, the formula of the surface of the ellipsoid is as follows: (\boldsymbol{x}, A\boldsymbol{x})=1, where A satisfies certain conditions. To be concrete, when (\boldsymbol{x}, A\boldsymbol{x})=1 is the surface of an ellipsoid, A has to be diagonalizable and positive definite.

*Real symmetric matrices are diagonalizable, and positive definite matrices have only positive eigenvalues. Covariance matrices \Sigma, whose displacement vectors I visualized in the last two articles, are known to be real symmetric and positive semi-definite. However, the surface of an ellipsoid which fits the data is \boldsymbol{x}^T \Sigma ^{-1} \boldsymbol{x} = const., not \boldsymbol{x}^T \Sigma \boldsymbol{x} = const..

*You have to keep it in mind that \boldsymbol{x} are all deviations.

*You do not have to think too much about what the “semi” of the term “positive semi-definite” means for now.

As you could imagine, this is just one simple case of a richer variety of graphs. Let’s consider a 3-dimensional space. Any quadratic curve in this space can be denoted as ax^2 + by^2 + cz^2 + dxy + eyz + fxz + px + qy + rz + s = 0, where at least one of a, b, c, d, e, f, p, q, r, s is not 0. Let \boldsymbol{x} be (x, y, z)^T, then the quadratic curves can be simply denoted with a 3\times 3 matrix A and a 3-dimensional vector \boldsymbol{b} as follows: \boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0, where A = \left( \begin{array}{ccc} a & \frac{d}{2} & \frac{f}{2} \\ \frac{d}{2} & b & \frac{e}{2} \\ \frac{f}{2} & \frac{e}{2} & c \end{array} \right), \boldsymbol{b} = \left( \begin{array}{c} \frac{p}{2} \\ \frac{q}{2} \\ \frac{r}{2} \end{array} \right). General quadratic curves are roughly classified into the 9 types below.

You can shift these quadratic curves so that their center points come to the origin, without rotation, and the resulting curves are as follows. The curves can all be denoted in terms of \boldsymbol{x}^T A\boldsymbol{x}.

As you can see, A is a real symmetric matrix. As I have mentioned repeatedly, when all the elements of a D \times D symmetric matrix A are real values and its eigenvalues are \lambda_{i} (i=1, \dots , D), there exist orthogonal/orthonormal matrices U such that U^{-1}AU = \Lambda, where \Lambda = diag(\lambda_{1}, \dots , \lambda_{D}). Hence, you can diagonalize A = \left( \begin{array}{ccc} a & \frac{d}{2} & \frac{f}{2} \\ \frac{d}{2} & b & \frac{e}{2} \\ \frac{f}{2} & \frac{e}{2} & c \end{array} \right) with an orthogonal matrix U. Let U be an orthogonal matrix such that U^T A U = \left( \begin{array}{ccc} \alpha  & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{array} \right) =\left( \begin{array}{ccc} \lambda_1  & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{array} \right). After you apply the rotation by U to the curves (a)’ ~ (i)’, those curves are symmetrically placed about the xyz axes, and their center points still coincide with the origin. The resulting curves look like below. Or rather I should say you projected (a)’ ~ (i)’ on their eigenvectors.

In this article mainly (a)”, (g)”, (h)”, and (i)” are important. The general equations for these curves are as follows

  • (a)”: \frac{x^2}{l^2} + \frac{y^2}{m^2} + \frac{z^2}{n^2} = 1
  • (g)”: z = \frac{x^2}{l^2} + \frac{y^2}{m^2}
  • (h)”: z = \frac{x^2}{l^2} - \frac{y^2}{m^2}
  • (i)”: z = \frac{x^2}{l^2}

, where l, m, n \in \mathbb{R}^+.

Even if this section has been puzzling to you, you just have to keep one point in your mind: we have been discussing general quadratic curves, but in PCA, you only need to consider a case where A is a covariance matrix, that is A=\Sigma. PCA corresponds to the case where you shift and rotate the curve (a) into (a)”. Subtracting the mean of data from each point of data corresponds to shifting quadratic curve (a) to (a)’. Calculating eigenvectors of A corresponds to calculating a rotation matrix U such that the curve (a)’ comes to (a)” after applying the rotation, or projecting curves on eigenvectors of \Sigma. Importantly we are only discussing the covariance of certain data, not the distribution of the data itself.

*Just in case you are interested in a little more mathematical sides: it is known that if you rotate all the points \boldsymbol{x} on the curve \boldsymbol{x}^T A\boldsymbol{x} + 2\boldsymbol{b}^T\boldsymbol{x} + s = 0 with the rotation matrix P, those points \boldsymbol{x} are mapped into a new quadratic curve \alpha x^2 + \beta y^2 + \gamma z^2 + \lambda x + \mu y + \nu z + \rho = 0. That means the rotation of the original quadratic curve with P (or rather rotating axes) enables getting rid of the terms xy, yz, zx. Also it is known that when \alpha ' \neq 0, with proper translations and rotations, the quadratic curve \alpha x^2 + \beta y^2 + \gamma z^2 + \lambda x + \mu y + \nu z + \rho = 0 can be mapped into one of the types of quadratic curves in the figure below, depending on coefficients of the original quadratic curve. And the discussion so far can be generalized to higher dimensional spaces, but that is beyond the scope of this article series. Please consult decent textbooks on linear algebra around you for further details.

4 Eigenvectors are gradients and sometimes variances.

In the second section I explained that you can express quadratic functions f(\boldsymbol{x}; A) = \boldsymbol{x}^T A \boldsymbol{x} in a very simple way by projecting \boldsymbol{x} on eigenvectors of A.

You can comprehend what I have explained in another way: eigenvectors, to be exact eigenvectors of real symmetric matrices A, are gradients. And in the case of PCA, I mean when A=\Sigma, eigenvalues are also variances. Before explaining what that means, let me explain a few basic mathematical facts. If you have variables \boldsymbol{x}\in \mathbb{R}^D, I think you can comprehend functions f(\boldsymbol{x}) in two ways. One is a normal “function” f(\boldsymbol{x}), and the other is a “curve” f(\boldsymbol{x}) = const.. “Functions” take an input \boldsymbol{x} and give out an output f(\boldsymbol{x}), just like the normal functions you would imagine. “Curves” are rather sets of \boldsymbol{x} \in \mathbb{R}^D such that f(\boldsymbol{x}) = const..

*Please assume that the terms “functions” and “curves” are my original words. I use them just in case I fail to use functions and curves properly.

The quadratic curves in the figure above are all “curves” in my terms, which can be denoted as f(\boldsymbol{x}; A_3, \boldsymbol{b}_3)=const or f(\boldsymbol{x}; A_3)=const. However if you replace z of (g)”, (h)”, and (i)” with f, you can interpret the “curves” as “functions” which are denoted as f(\boldsymbol{x}; A_2). This might sound too obvious to you, and my point is that you can visualize how the values of “functions” change only when the inputs are 2-dimensional.

When a symmetric 2\times 2 real matrix A_2 has two eigenvalues \lambda_1, \lambda_2, the distribution of the quadratic curves can be roughly classified into the following three types.

  • (g): Both \lambda_1 and \lambda_2 are positive or negative.
  • (h): Either of \lambda_1 or \lambda_2 is positive and the other is negative.
  • (i): Either of \lambda_1 or \lambda_2 is 0 and the other is not.

The equations of (g)”, (h)”, and (i)” correspond to each type of f(\boldsymbol{x}; A_2), and their curves look like the three graphs below.

And in fact, when you start from the origin and go in the direction of an eigenvector \boldsymbol{u}_i, \lambda_i is the gradient in that direction. You can see that more clearly when you restrict the distribution of f(\boldsymbol{x}; A_2) to a unit circle. Like in the figure below, in the case \lambda_1 = 7, \lambda_2 = 3, which is classified as (g), the distribution looks like the left side, and if you restrict the distribution to the unit circle, it looks like a bowl, as in the middle and the right side. When you move in the direction of \boldsymbol{u}_1, you can climb the bowl as high as \lambda_1, and in \boldsymbol{u}_2 as high as \lambda_2.

Also in case of (h), the same facts hold. But in this case, you can also descend the curve.

*You might have seen the curve above in the context of optimization with stochastic gradient descent. The origin of the curve above is a notorious saddle point, where gradients are all 0 in any directions but not a local maximum or minimum. Points can be stuck in this point during optimization.

Especially in the case of PCA, A is a covariance matrix, thus A=\Sigma. Eigenvalues of \Sigma are all equal to or greater than 0. And it is known that in this case \lambda_i is the variance of the data projected on its corresponding eigenvector \boldsymbol{u}_i (i=1, \dots , D). Hence, if you project f(\boldsymbol{x}; \Sigma), the quadratic curves formed by a covariance matrix \Sigma, on the eigenvectors of \Sigma, you get f(\boldsymbol{x}; \Sigma) = ({x'}_1 \: \dots \: {x'}_D) (\lambda_1 {x'}_1 \: \dots \: \lambda_D {x'}_D)^T =\lambda_1 ({x'}_1)^2 + \cdots + \lambda_D ({x'}_D)^2. This shows that you can re-weight ({x'}_1 \: \dots \: {x'}_D), the coordinates of the data projected on the eigenvectors of A, with \lambda_1, \dots, \lambda_D, which are the variances of ({x'}_1 \: \dots \: {x'}_D). As I mentioned in the example of exam score data in the last article, the bigger a variance \lambda_i is, the more the feature described by \boldsymbol{u}_i varies from sample to sample. In other words, you can ignore eigenvectors corresponding to small eigenvalues.

That is a great hint as to why principal components corresponding to large eigenvalues contain much of the information of the data distribution. And you can also interpret PCA as “climbing” a bowl of f(\boldsymbol{x}; A_D), as I have visualized in the case of the (g) type curve in the figure above.
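
A short Numpy sketch of this relationship, with arbitrary synthetic data: the variance of the data projected on each eigenvector of the covariance matrix equals the corresponding eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # some correlated toy data
X = X - X.mean(axis=0)                                   # work with deviations

Sigma = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, U = np.linalg.eigh(Sigma)         # eigenvalues in ascending order

projected = X @ U                          # coordinates of the data on the eigenvectors
print(projected.var(axis=0, ddof=1))       # variances along each eigenvector
print(eigvals)                             # the same values
```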

*But as I have repeatedly mentioned, the ellipsoid which fits the data well is f(\boldsymbol{x}; \Sigma ^{-1}) =(\boldsymbol{x}')^T diag(\frac{1}{\lambda_1}, \dots, \frac{1}{\lambda_D})\boldsymbol{x}' = \frac{({x'}_{1})^2}{\lambda_1} + \cdots + \frac{({x'}_{D})^2}{\lambda_D} = const..

*You have to be careful: even if you slice a type (h) curve f(\boldsymbol{x}; A_D) with a plane z=const., the resulting cross section does not fit the original data well because the equation of the cross section is \lambda_1 ({x'}_1)^2 + \cdots + \lambda_D ({x'}_D)^2 = const. The figure below is an example of slicing the same f(\boldsymbol{x}; A_2) as the one above with z=1, and the resulting cross section.

As we have seen, \lambda_i, the eigenvalues of the covariance matrix of the data, are the variances of the data when projected on its eigenvectors. At the same time, when you fit an ellipsoid on the data, \sqrt{\lambda_i} is the radius of the ellipsoid corresponding to \boldsymbol{u}_i. Thus ignoring data projected on eigenvectors corresponding to small eigenvalues is equivalent to cutting off the axes of the ellipsoid with small radii.

I have explained PCA in three different ways over three articles.

  • The second article: I focused on what kind of linear transformations covariance matrices \Sigma enable, by visualizing displacement vectors. And those vectors look like they are swirling and extending in the directions of the eigenvectors of \Sigma.
  • The third article: We directly found the directions along which a certain data distribution “swells” the most, and found that the data swell the most in the directions of eigenvectors.
  • In this article, we have seen that PCA corresponds to only one case of quadratic functions, where the matrix A is a covariance matrix. When you go in the directions of eigenvectors corresponding to big eigenvalues, the quadratic function increases the most. That also means data samples have bigger variances when projected on those eigenvectors. Thus you can cut off eigenvectors corresponding to small eigenvalues because they retain little information about the data, and that is equivalent to fitting an ellipsoid on the data and cutting off the axes with small radii.

*Let A be a covariance matrix, and you can diagonalize it with an orthogonal matrix U as follows: U^{T}AU = \Lambda, where \Lambda = diag(\lambda_1, \dots, \lambda_D). Thus A = U \Lambda U^{T}. Reading from the right, U^T projects \boldsymbol{x} onto the eigenvectors, multiplying with \Lambda scales each of those components by the corresponding eigenvalue, and at the end U rotates the result back to the original coordinates.

If you get data like the left side of the figure below, most explanations of PCA would just fit an oval on this data distribution. However, after reading this article series so far, you should have learned to see PCA from different viewpoints, like on the right side of the figure below.

 

5 Ellipsoids in Gaussian distributions.

I have explained that if the covariance of a data distribution is \boldsymbol{\Sigma}, the ellipsoid which fits the distribution the best is \bigl((\boldsymbol{x} - \boldsymbol{\mu}), \boldsymbol{\Sigma}^{-1}(\boldsymbol{x} - \boldsymbol{\mu})\bigr) = 1. You might have seen the part \bigl((\boldsymbol{x} - \boldsymbol{\mu}), \boldsymbol{\Sigma}^{-1}(\boldsymbol{x} - \boldsymbol{\mu})\bigr) = (\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1}(\boldsymbol{x} - \boldsymbol{\mu}) somewhere else. It is the exponent of the general Gaussian distribution: \mathcal{N}(\boldsymbol{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} exp\{ -\frac{1}{2}(\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1}(\boldsymbol{x} - \boldsymbol{\mu}) \}. It is known that the eigenvalues of \Sigma ^{-1} are \frac{1}{\lambda_1}, \dots, \frac{1}{\lambda_D}, and the eigenvectors corresponding to each eigenvalue are again \boldsymbol{u}_1, \dots, \boldsymbol{u}_D respectively. Hence, just as we have seen, if we project (\boldsymbol{x} - \boldsymbol{\mu}) on each eigenvector of \Sigma ^{-1}, we can convert the exponent of the Gaussian distribution.

Let (\boldsymbol{x} - \boldsymbol{\mu}) be \boldsymbol{y} and U ^{-1} \boldsymbol{y}= U^{T} \boldsymbol{y} be \boldsymbol{y}', where U=(\boldsymbol{u}_1 \: \dots \: \boldsymbol{u}_D). Just as we have seen, (\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1}(\boldsymbol{x} - \boldsymbol{\mu}) =\boldsymbol{y}^T\Sigma^{-1} \boldsymbol{y} =(U\boldsymbol{y}')^T \Sigma^{-1} U\boldsymbol{y}' =(\boldsymbol{y}')^T U^T \Sigma^{-1} U\boldsymbol{y}' = (\boldsymbol{y}')^T diag(\frac{1}{\lambda_1}, \dots, \frac{1}{\lambda_D}) \boldsymbol{y}' = \frac{({y'}_{1})^2}{\lambda_1} + \cdots + \frac{({y'}_{D})^2}{\lambda_D}. Hence, using |\boldsymbol{\Sigma}| = \lambda_1 \cdots \lambda_D, \mathcal{N}(\boldsymbol{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} exp\{ -\frac{1}{2}\boldsymbol{y}^T \boldsymbol{\Sigma}^{-1}\boldsymbol{y} \} =  \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} exp\{ -\frac{1}{2}(\frac{({y'}_{1})^2}{\lambda_1} + \cdots + \frac{({y'}_{D})^2}{\lambda_D} ) \} =\frac{1}{(2\pi \lambda_1)^{1/2}} exp\biggl( -\frac{1}{2} \frac{({y'}_{1})^2}{\lambda_1} \biggr) \cdots \frac{1}{(2\pi \lambda_D)^{1/2}} exp\biggl( -\frac{1}{2}\frac{({y'}_{D})^2}{\lambda_D} \biggr).

*To be mathematically exact about changing variables of normal distributions, you have to consider, for example, Jacobian matrices.

The results above demonstrate that, by projecting data on the eigenvectors of its covariance matrix, you can factorize the original multi-dimensional Gaussian distribution into a product of Gaussian distributions which are independent of each other. However, at the same time, that is the potential limit of approximating data with PCA. This idea is going to be more important when you think about more probabilistic ways to handle PCA, which are more robust to a lack of data.

I have explained PCA over 3 articles from various viewpoints. If you have been patient enough to read my article series, I think you have gained some deeper insight into not only PCA, but also linear algebra, and that should be helpful when you learn or teach data science. I hope my codes also help you. In fact these are not the only topics about PCA. There are a lot of important PCA-like algorithms.

In fact our expedition of ellipsoids, or PCA still continues, just as Star Wars series still continues. Especially if I have to explain an algorithm named probabilistic PCA, I need to explain the “Bayesian world” of machine learning. Most machine learning algorithms covered by major introductory textbooks tend to be too deterministic and dependent on the size of data. Many of those algorithms have another “parallel world,” where you can handle inaccuracy in better ways. I hope I can also write about them, and I might prepare another trilogy for such PCA. But I will not disappoint you, like “The Phantom Menace.”

Appendix: making a model of a bunch of grapes with ellipsoid berries.

If you can control quadratic curves, reshaping and rotating them, you can make a model of a bunch of grapes or olives on Matplotlib. I made a program for making a model of a bunch of berries on Matplotlib using the module to draw ellipsoids which I introduced earlier. You can check the codes on this page.

*I have no idea how many people on this earth are in need of making such models.

I made some modules so that you can see the grape bunch from several angles. This might look very simple to you, but the locations of berries are organized carefully so that it looks like they are placed around a stem and that the berries are not too close to each other.

 

The programming code I created for this article is completely available here.

[References]

[1]C. M. Bishop, “Pattern Recognition and Machine Learning,” (2006), Springer, pp. 78-83, 559-577

[2]「理工系新課程 線形代数 基礎から応用まで」, 培風館、(2017)

[3]「これなら分かる 最適化数学 基礎原理から計算手法まで」, 金谷健一著、共立出版, (2019), pp. 17-49

[4]「これなら分かる 応用数学教室 最小二乗法からウェーブレットまで」, 金谷健一著、共立出版, (2019), pp.165-208

[5] 「サボテンパイソン 」
https://sabopy.com/

 

How to make a toy English-German translator with multi-head attention heat maps: the overall architecture of Transformer

If you have been patient enough to read the former articles of this article series Instructions on Transformer for people outside NLP field, but with examples of NLP, you should have already learned a great deal about the Transformer model, and I hope you have gained a solid foundation in the theoretical side of this algorithm.

This article is going to focus more on practical implementation of a transformer model. We use codes in the Tensorflow official tutorial. They are maintained well by Google, and I think it is the best practice to use widely known codes.

The figure below shows what I have explained in the articles so far. Depending on your level of understanding, you can go back to my former articles. If you are familiar with NLP with deep learning, you can start with the third article.

1 The datasets

I think this article series appears to be on NLP, and I do believe that learning Transformer through NLP examples is very effective. But I cannot delve into effective techniques for processing corpora in each language. Thus we are going to use a library named BPEmb. This library enables you to encode sentences in various languages into lists of integers, and conversely you can decode lists of integers back to the language. Thanks to this library, we do not have to do simplifications of alphabets, such as getting rid of umlauts.

*Actually, I am studying in the computer vision field, so my code may look elementary to people working in NLP.

The official TensorFlow tutorial builds a Portuguese-English translator, but in this article we are going to make an English-German translator. Basically, only the code below is my own. As I said, this is not an article on NLP, so all you have to know is that at every iteration you get a (64, 41) sized tensor as a batch of source sentences and a (64, 42) sized tensor as the batch of corresponding target sentences. Here 41 and 42 are the maximum lengths of the input and target sentences respectively, and when sentences are shorter than that, the remaining positions are zero padded, as you can see in the code below.
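To make the shape of such a batch concrete, here is a minimal sketch of the zero padding (the sentences and ids are made up; the real batches come from the tf.data pipeline):

    import tensorflow as tf

    # Two hypothetical English sentences, already encoded as BPE ids
    # (with made-up <start>=2 and <end>=3 tokens for the example).
    encoded_en = [[2, 571, 83, 45, 3],
                  [2, 912, 14, 3]]

    # Zero-pad every sentence to the fixed maximum length of 41 tokens.
    padded = tf.keras.preprocessing.sequence.pad_sequences(
        encoded_en, maxlen=41, padding="post", value=0)

    print(padded.shape)  # (2, 41); with a batch size of 64 this becomes (64, 41)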

*If you just replace datasets and modules for encoding, you can make translators of other pairs of languages.

We are going to train a seq2seq-like Transformer model that converts those lists of integers, i.e. a mapping from one vector to another. But each word, or integer, is encoded as an embedding vector, so the Transformer model effectively learns a mapping from one sequence to another. Let’s formulate this in a slightly more mathematical way: we get a pair of sequences \boldsymbol{X} = (\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau _x)}) and \boldsymbol{Y} = (\boldsymbol{y}^{(1)}, \dots, \boldsymbol{y}^{(\tau _y)}), where \boldsymbol{x}^{(t)} \in \mathbb{R}^{|\mathcal{V}_{\mathcal{X}}|} and \boldsymbol{y}^{(t)} \in \mathbb{R}^{|\mathcal{V}_{\mathcal{Y}}|}, respectively from the English and the German corpus, and we learn a mapping f: \boldsymbol{X} \to \boldsymbol{Y}.

*In this implementation the vocabulary sizes are both 10002. Thus |\mathcal{V}_{\mathcal{X}}|=|\mathcal{V}_{\mathcal{Y}}|=10002

2 The whole architecture

This article series has covered most of the components of the Transformer model, but you might not yet see how seq2seq-like models can be constructed from them. Reading or writing the actual code is a very effective way to understand how a Transformer is constructed, and in this article we are finally going to build the whole architecture of a Transformer translator, following the official TensorFlow tutorial. By the end of this article, you will be able to make a toy English-German translator.

The implementation is mainly composed of four classes: EncoderLayer(), Encoder(), DecoderLayer(), and Decoder(). The inclusion relations of the classes are displayed in the figure below.

To be more exact, in a seq2seq-like model with Transformer, the encoder and the decoder are connected as in the figure below. The encoder part keeps converting input sentences in the source language through N layers. The decoder part likewise keeps converting the inputs in the target language through N layers, but it also receives the output of the final encoder layer at every layer.

You can see how the Encoder() class and the Decoder() class are combined into the Transformer in the code below. If you have used TensorFlow or PyTorch to some extent, it should not be that hard to read.
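Condensed, the combination looks roughly like the tutorial’s Transformer class below (a sketch, assuming the Encoder() and Decoder() classes discussed in the following sections; the hyperparameter names follow the tutorial):

    import tensorflow as tf

    class Transformer(tf.keras.Model):
        def __init__(self, num_layers, d_model, num_heads, dff,
                     input_vocab_size, target_vocab_size, pe_input, pe_target, rate=0.1):
            super().__init__()
            # N stacked encoder layers for the source language (English)
            self.encoder = Encoder(num_layers, d_model, num_heads, dff,
                                   input_vocab_size, pe_input, rate)
            # N stacked decoder layers for the target language (German)
            self.decoder = Decoder(num_layers, d_model, num_heads, dff,
                                   target_vocab_size, pe_target, rate)
            # one dense layer, shared over all positions, mapping to the target vocabulary
            self.final_layer = tf.keras.layers.Dense(target_vocab_size)

        def call(self, inp, tar, training, enc_padding_mask, look_ahead_mask, dec_padding_mask):
            enc_output = self.encoder(inp, training, enc_padding_mask)         # (batch, tau_x, d_model)
            dec_output, attention_weights = self.decoder(
                tar, enc_output, training, look_ahead_mask, dec_padding_mask)   # (batch, tau_y, d_model)
            final_output = self.final_layer(dec_output)                         # (batch, tau_y, vocab_size)
            return final_output, attention_weights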

3 The encoder

*From now on, “sentences” do not mean only the input tokens in natural language, but also the reweighted and concatenated “values,” which I repeatedly explained in the former articles. By the end of this section, you will see that the Transformer converts sentences layer by layer while retaining the shape of the original sentence.

I explained the multi-head attention mechanism precisely in the third article, and positional encoding and masked multi-head attention in the last one. Thus if you have read them and have ever written some code in TensorFlow or PyTorch, I think the Transformer code in the official TensorFlow tutorial is not so hard to read. What is more, this implementation uses no CNNs or RNNs; basically all you need is linear transformations. First of all, let’s see how the EncoderLayer() and the Encoder() classes are implemented in the code below.
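As a rough picture of what one such layer does, here is a condensed sketch based on the tutorial (it assumes the tutorial’s MultiHeadAttention class and the point_wise_feed_forward_network() function discussed below; the Encoder() class then mainly stacks N of these layers on top of the embedding and positional encoding):

    import tensorflow as tf

    class EncoderLayer(tf.keras.layers.Layer):
        def __init__(self, d_model, num_heads, dff, rate=0.1):
            super().__init__()
            self.mha = MultiHeadAttention(d_model, num_heads)          # multi-head self-attention
            self.ffn = point_wise_feed_forward_network(d_model, dff)   # position-wise feed-forward network
            self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
            self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
            self.dropout1 = tf.keras.layers.Dropout(rate)
            self.dropout2 = tf.keras.layers.Dropout(rate)

        def call(self, x, training, mask):
            # self-attention: queries, keys, and values all come from the same "sentence" x
            attn_output, _ = self.mha(x, x, x, mask)
            attn_output = self.dropout1(attn_output, training=training)
            out1 = self.layernorm1(x + attn_output)                    # residual connection + layer norm
            ffn_output = self.ffn(out1)
            ffn_output = self.dropout2(ffn_output, training=training)
            out2 = self.layernorm2(out1 + ffn_output)                  # shape stays (batch, seq_len, d_model)
            return out2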

You might be confused about what “Feed Forward” means in this article or in the original paper on Transformer. The original paper says this layer is calculated as FFN(x) = max(0, xW_1 + b_1)W_2 + b_2. In short, you stack two fully connected layers, with a ReLU activation after the first one. Let’s see how the point_wise_feed_forward_network() function works in the implementation, with some simple code. As you can see from the number of parameters in each layer of the position-wise feed-forward network, the network does not depend on the length of the sentences.
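If you want to check this yourself, here is a small sketch along the lines of the tutorial’s point_wise_feed_forward_network(), with example values d_model=128 and dff=512:

    import tensorflow as tf

    def point_wise_feed_forward_network(d_model, dff):
        # FFN(x) = max(0, xW_1 + b_1)W_2 + b_2, applied independently at every position
        return tf.keras.Sequential([
            tf.keras.layers.Dense(dff, activation="relu"),  # (batch, seq_len, dff)
            tf.keras.layers.Dense(d_model)                  # (batch, seq_len, d_model)
        ])

    ffn = point_wise_feed_forward_network(d_model=128, dff=512)
    short = ffn(tf.random.uniform((64, 41, 128)))    # a batch of length-41 "sentences"
    longer = ffn(tf.random.uniform((64, 100, 128)))  # a longer batch goes through the same weights
    print(short.shape, longer.shape)                 # (64, 41, 128) (64, 100, 128)
    print(ffn.count_params())                        # independent of the sentence length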

From the number of parameters of the position-wise feed-forward networks, you can see that the same parameters are shared across all positions of a sentence. That means that in the figure above, you use the same densely connected layers at every position within a single layer. But you also have to keep in mind that the parameters of the position-wise feed-forward networks differ from layer to layer. That is also true of the other “Layer” parts of the Transformer model, including the output part of the decoder: there are no learnable parameters that span different token positions. These facts lead to one very important feature of Transformer: the number of parameters does not depend on the length of the input or target sentences. The influence of sentence length is handled by the multi-head attention mechanisms instead. Also, in the decoder part you keep the shape of the sentences, or reweighted values, layer by layer, which is expected to enhance the computational efficiency of Transformer models.

4 The decoder

The structures of the DecoderLayer() and the Decoder() classes are quite similar to those of the EncoderLayer() and the Encoder() classes, so if you understood the last section, you will not find it hard to understand the code below. What you additionally have to care about in this section is the inter-language multi-head attention mechanism. In the third article I repeatedly explained the multi-head self-attention mechanism, taking the input sentence “Anthony Hopkins admired Michael Bay as a great director.” as an example. However, as I explained in the second article, in attention mechanisms you usually compare sentences with the same meaning in two languages. Thus the decoder part of the Transformer model has not only the masked multi-head self-attention mechanism over the target sentence, but also an inter-language multi-head attention mechanism. That means, in the case of translating from English to German, you compare the sentence “Anthony Hopkins hat Michael Bay als einen großartigen Regisseur bewundert.” with itself in the masked multi-head attention mechanism (just as I repeatedly explained in the third article). On the other hand, you compare “Anthony Hopkins hat Michael Bay als einen großartigen Regisseur bewundert.” with “Anthony Hopkins admired Michael Bay as a great director.” in the inter-language multi-head attention mechanism (just as you can see in the figure above).

*“Inter-language multi-head attention mechanism” is my own name for it.

I briefly mentioned how to calculate the inter-language multi-head attention mechanism at the end of the third article, with some simple code, but let’s see it again with more straightforward figures. If you understood my explanation of the multi-head attention mechanism in the third article, the inter-language multi-head attention mechanism is not difficult to understand. In the multi-head attention mechanism of the encoder layers, the “queries,” “keys,” and “values” all come from the same sentence in English, but in the inter-language case only the “keys” and “values” come from the original sentence, while the “queries” come from the target sentence. You compare the “queries” in German with the “keys” of the original English sentence, and you re-weight the English sentence. You use the re-weighted English sentence in the decoder part, and you do not need a look-ahead mask in this inter-language multi-head attention mechanism.

Just as with multi-head self-attention, you can calculate the inter-language multi-head attention map as softmax(\frac{\boldsymbol{Q} \boldsymbol{K}^T}{\sqrt{d_k}}). In the example above, the resulting multi-head attention map is a 10 \times 9 matrix, as in the figure below.
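As a tiny numerical sketch of these shapes (with made-up random queries and keys, and d_k = 64 only as an example value):

    import numpy as np

    d_k = 64                # dimension of each query/key vector (example value)
    tau_y, tau_x = 10, 9    # 10 German (target) tokens as queries, 9 English (source) tokens as keys

    Q = np.random.randn(tau_y, d_k)   # queries from the decoder side
    K = np.random.randn(tau_x, d_k)   # keys from the encoder output

    logits = Q @ K.T / np.sqrt(d_k)   # (10, 9) scaled dot products
    attention_map = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax over source tokens

    print(attention_map.shape)        # (10, 9): one row of weights over the English tokens per German token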

Once you keep the points above in mind, the implementation of the decoder part should not be that hard.

5 Masking tokens in practice

I explained the masked multi-head attention mechanism in the last article, and the idea itself is not so difficult. However, in practice it is implemented in a slightly tricky way. You might have realized that the size of the input matrices is fixed so that it fits the longest sentence. That means that when the maximum length of the input sentences is 41, even if the sentences in a batch have fewer than 41 tokens, you sample a (64, 41) sized tensor as a batch every time (64 is the batch size). Let “Anthony Hopkins admired Michael Bay as a great director.”, which has 9 tokens in total, be an input. We have been considering (9, 9) or (10, 9) sized attention maps, but in practice you use (41, 41) or (42, 41) sized ones. When calculating self-attention in the encoder part, you zero-pad the self-attention maps with encoder padding masks, as in the figure below. The black dots denote the zero-valued elements.
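The encoder padding mask itself just marks the zero-padded positions; a sketch along the lines of the tutorial’s helper:

    import tensorflow as tf

    def create_padding_mask(seq):
        # 1.0 at zero-padded positions, 0.0 at real tokens; the extra axes make the mask
        # broadcastable to attention logits of shape (batch, num_heads, seq_len_q, seq_len_k)
        seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
        return seq[:, tf.newaxis, tf.newaxis, :]

    # a toy 9-slot batch entry with 4 zero-padded positions at the end
    batch = tf.constant([[2, 571, 83, 45, 3, 0, 0, 0, 0]])
    print(create_padding_mask(batch))  # 1.0 exactly where the zeros are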

As you can see in the code below, encoder padding masks are quite simple. You just multiply the padding masks by -1e9, add them to the attention logits, and apply a softmax function. Thereby the columns at the positions where you added -1e9 end up with weights of (practically) zero.
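This trick lives inside the scaled dot-product attention; a sketch along the lines of the tutorial’s function:

    import tensorflow as tf

    def scaled_dot_product_attention(q, k, v, mask):
        matmul_qk = tf.matmul(q, k, transpose_b=True)              # (..., seq_len_q, seq_len_k)
        dk = tf.cast(tf.shape(k)[-1], tf.float32)
        scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
        if mask is not None:
            scaled_attention_logits += (mask * -1e9)               # masked positions become ~ -infinity ...
        attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # ... and get ~0 weight here
        output = tf.matmul(attention_weights, v)                   # re-weighted "values"
        return output, attention_weights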

I explained the look-ahead mask in the last article, and in practice you combine normal padding masks and look-ahead masks as in the figure below. You can see that each token can be compared only with itself and its previous tokens. For example, you can compare “als” only with “Anthony”, “Hopkins”, “hat”, “Michael”, “Bay”, and “als” itself, not with “einen”, “großartigen”, “Regisseur”, or “bewundert.”
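The look-ahead mask can be built in a couple of lines, along the lines of the tutorial’s helper:

    import tensorflow as tf

    def create_look_ahead_mask(size):
        # 1s above the diagonal mark the "future" positions that must not be attended to
        return 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)

    print(create_look_ahead_mask(5))
    # [[0. 1. 1. 1. 1.]
    #  [0. 0. 1. 1. 1.]
    #  [0. 0. 0. 1. 1.]
    #  [0. 0. 0. 0. 1.]
    #  [0. 0. 0. 0. 0.]]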

Decoder padding masks are almost the same as the encoder ones. You have to keep in mind that you zero-pad the positions which exceed the length of the source input sentence.

6 Decoding process

In the last section we saw that we can zero-pad the columns, but the rows are still redundant. However, that is not a big problem, because you decode the final output along the rows of the attention maps. Once you decode the <end> token, you stop decoding, so the redundant rows do not affect the decoding anymore.

This decoding process is similar to that of seq2seq models with RNNs, and that is why you need to hide future tokens in the multi-head self-attention mechanism of the decoder. You share the same densely connected layer, followed by a softmax function, at all the time steps of decoding. The Transformer has to learn to decode based only on the words which have appeared so far.

According to the original paper, “We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.” After these explanations, I think you can understand this part more clearly.

The code below is for the decoding part. You can see that you start decoding the output with a sentence composed of only the <start> token, and then decide which word to decode next, step by step.
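Greatly condensed, the greedy decoding loop looks roughly like this (a sketch, assuming a trained transformer, the create_masks() helper from the tutorial, and the <start>/<end> ids from section 1; the function name translate_greedy is just illustrative):

    import tensorflow as tf

    def translate_greedy(transformer, encoded_en, start_id, end_id, max_length=42):
        encoder_input = tf.expand_dims(encoded_en, 0)   # (1, tau_x) English BPE ids
        output = tf.expand_dims([start_id], 0)          # the decoder input starts as just <start>

        for _ in range(max_length):
            enc_padding_mask, look_ahead_mask, dec_padding_mask = create_masks(encoder_input, output)
            predictions, attention_weights = transformer(
                encoder_input, output, False,
                enc_padding_mask, look_ahead_mask, dec_padding_mask)
            predictions = predictions[:, -1:, :]                          # only the newest position matters
            predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
            if predicted_id == end_id:                                    # stop once <end> is decoded
                break
            output = tf.concat([output, predicted_id], axis=-1)           # feed the grown sentence back in

        return tf.squeeze(output, axis=0), attention_weights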

*It is easy to imagine that this greedy decoding procedure is not optimal. In practice you should consider several decoding candidates, and you can do that with beam search decoding.

After training this English-German translator for 30 epochs, you can translate relatively simple English sentences into German. I display some results below, with heat maps of the multi-head attention. Each colored attention map corresponds to one head of the multi-head attention. The examples below are all from the fourth (last) layer, but you can visualize the maps of any layer. In the masked (look-ahead) attention, naturally only the lower triangular part of the maps is activated.

This article series has not covered some important topics in machine translation, for example how to measure translation errors. There are many other fascinating topics related to machine translation, such as beam search decoding, which considers several decoding possibilities, or how to handle proper nouns such as “Anthony” or “Hopkins.” But this article series is not about NLP. I hope you have been able to learn the architecture of the Transformer model effectively through the language examples so far. I have also not explained some details of training the network, because I think that depends on the task. The next article is going to be the last one of this series, and I hope you will see how Transformer is applied in computer vision, in a more “linguistic” manner.

But anyway, we have finally made it. In this article series we have seen that one of the earliest computers was invented to break Enigma, and today we can quickly build a more or less accurate translator on our desk. With Transformer models, you can even translate deadly funny jokes into German.

*You can train a translator with this code.

*After training a translator, you can translate English sentences into German with this code.

[References]

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, “Attention Is All You Need” (2017)

[2] “Transformer model for language understanding,” Tensorflow Core
https://www.tensorflow.org/overview

[3] Jay Alammar, “The Illustrated Transformer,”
http://jalammar.github.io/illustrated-transformer/

[4] “Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 14 – Transformers and Self-Attention,” stanfordonline, (2019)
https://www.youtube.com/watch?v=5vcj8kSwBCY

[5] Tsuboi Yuuta, Unno Yuuya, Suzuki Jun, “Machine Learning Professional Series: Natural Language Processing with Deep Learning,” (2017), pp. 91-94
坪井祐太、海野裕也、鈴木潤 著, 「機械学習プロフェッショナルシリーズ 深層学習による自然言語処理」, (2017), pp. 191-193

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward and precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

Leveraging Data Science for Vaccine Access and Administration

As people across the world become eligible for receiving their doses, governments and pharmaceutical companies must act efficiently to provide those vaccines. Alongside distribution, people need more information about these newer vaccines. As a solution for both of these obstacles, data science is a useful tool for vaccine development, distribution, and access throughout the COVID-19 pandemic.

From the initial stages of social distancing and vaccine development to broadening access to the vaccines, data science uses information from countless resources to provide evidence-based, actionable recommendations. Governments and health care providers then act upon this data to help the public and move towards the global goal of eliminating the virus.

Vaccine Development

Since the pandemic began, vaccines have been a sign of hope and a return to a new normalcy. However, getting to effective vaccine distribution first required using data science to develop the doses themselves.

The COVID-19 vaccines have been some of the fastest-developed inoculations in history, which is partly because of the efforts of data scientists. Using machine learning, researchers were able to analyze the sequences of strains of the virus and establish which parts a vaccine would best respond to. Specifically, the sequences had to be ones that would be less likely to mutate in the future and less likely to cause an adverse reaction in humans when injected.

Machine learning helped scientists predict and theorize about which proteins would be the best to work with in the SARS-CoV-2 strains. From there, they proceeded with creating vaccines that are now in use all over the world, like Pfizer’s or Moderna’s.

Then, as vaccines become more available, governments again rely on data science to dictate eligibility. Data analytics systems take into account exposure risk, demographics, jobs, and health conditions, which have helped countries break up eligibility into phases.

Supply and Demand

Vaccine supply chains had rocky beginnings throughout the world. In Germany, residents faced shortages of doses, where demand far outweighed the available vials. This type of shortage is especially dangerous, as it can lead to an increase in cases or a full-on spike.

To avoid these uneven dynamics, data science can provide more accurate projections of how many vaccines regions will need on a weekly basis. Data science systems that use machine learning can account for the population that’s eligible and historical COVID-19 vaccination numbers. Then, as eligibility opens up, these systems predict how many vaccines a facility or county will need in the future.

Vaccine administration organizations can then communicate better with the government and request only the doses they can properly handle and use each week.

In the United States, West Virginia is working with data science dashboards to identify who is most at risk of contracting the virus. Then, they can request the right amount of vials each week and give them to the people who need them the most.

Information Access

As new vaccines and government mandates come into play, residents all over the world need more information to feel safe and to know what they should do. Vaccine scams, for example, have increased with distribution. These scams will ask for personal information like a Social Security number or a form of payment.

To avoid these scams and learn about the vaccine options available, the public needs more access to information. Data science is again helpful to distribute this information.

Google has become a leader in information access with its Intelligent Vaccine Impact initiative. With this program, Google uses machine learning and artificial intelligence (AI) to process data regarding government policy changes, vaccine availability, eligibility, and demographics. That way, people know when they can receive a vaccine, the research and information that supports the vaccines, and what scams to avoid along the way.

Then, based on the vaccine information Google gathers, data scientists can provide a clearer trajectory of the pandemic globally and locally as vaccines help cases go down.

A Data-Driven Path Forward

Data science provides solutions for vaccine access, distribution, and administration. With these powerful dynamics in place, it’s clear that data will lead the world towards a healthier future. Based on evidence from the pandemic, data helps governments and health care providers offer the best solutions for eliminating the virus and protecting people everywhere.

The Role Data Plays in Customer Relationship Management

Image Source: Pixabay

The longevity of your business is dependent on your customer relationships. The depth of your customer relationships is connected to how well they’re managed. How well you work your customer relationships relies on the data you collect about them, like buying behaviors, values, needs, and lifestyle choices.

In addition to collecting quality customer data, organizing and effectively analyzing this data is vital to creating content, products, services, and anything else that resonates with your current and potential customers. Building an accurate customer journey and successfully using data in customer relationship management results in retaining customers and acquiring new ones to help drive more sales.

How is data used to manage customer relationships and navigate your customer’s journey with your business? Let’s first examine the difference between business analytics and data science and their respective roles in the effective use of data. We’ll then touch on the role data plays in creating your customer journey map and the best ways to use data to educate your CRM team.

Business Analytics vs. Data Science: What’s the Difference?

Business analytics and data science both play integral roles in the collection and usage of data. Knowing the difference between business analytics and data science makes building and maintaining customer relationships more productive. When you see the difference between data science and business analytics, you can make informed decisions about marketing techniques, sales funnels, software use, and other operational choices using your CRM team’s data.

What is business analytics?

Business analytics is the process of collecting data on overall processes, software, and other products and services, understanding it, and using it to make better business decisions. Data mining strategies and predictive analytics are among the tools and techniques used to execute the above actions.

Business analysts are adept in:

  • Assessing and organizing raw data.
  • Analyzing data to determine how a business can improve.
  • Forecasting and identifying patterns and sequences.
  • Data visualization and conceptualizing business “big pictures.”
  • Overall information optimization.
  • Working with stakeholders to help them understand data.

Business analytics differs from data science in that it’s primarily used to establish how customer, software, system, tool, and technique data influence business growth. It transforms information into actionable steps that incite progress within a company and connection with current and potential customers.

What is data science?

Data science analyzes the collection process, the most streamlined and efficient ways to process data, and the most effective ways to communicate complex information to an analyst or company leader responsible for making critical business decisions. Data science takes advantage of emerging technologies like artificial intelligence, algorithms, and machine learning systems.

Data scientists are adept in:

  • Evaluating specific questions and customer inquiries to develop data-driven strategies.
  • Managing and analyzing critical information with the help of advanced technology.
  • Detecting useful statistical patterns.
  • Cleaning or “scrubbing” data.
  • Making data easier to understand through efficient organization.
  • Understanding how to leverage algorithms.

Data science differs from business analytics in that its primary focus is on collecting useful data and understanding the best ways to process and implement the collected data. Data science isn’t necessarily concerned with choosing the best ways to implement the lessons learned from data collection. Instead, it focuses on clearly communicating what the data means, listing implementation strategies, and leaving the decision up to business analysts and company leaders.

The Role Data Plays in Creating a Strong Customer Journey

According to Lucidspark, “A customer journey map is a diagram that illustrates how your customers interact with your company and engage with your products, website, and/or services.” It gives you a holistic view of a customer’s life-cycle with your business, from how they’re introduced to your brand, retained long enough to make a purchase, and nurtured into a long-term customer relationship.

The collection of customer data allows you to honestly know your audience and target them with personalized sales and marketing campaigns. It’s essential to understand how to collect data and interpret and implement it to create personal relationships with customers that result in loyalty and retention.

Data’s role in creating a robust customer journey map includes:

  • Tracking why individual visitors don’t become customers.
  • Identifying gaps in user experience.
  • Understanding why people connect with your brand the way they do.
  • Identifying similarities your long-term customers have.
  • Identifying similarities new visitors have.
  • Highlighting what messaging resonates most with your customers.
  • Identifying marketing channels responsible for the most conversions.
  • Pinpointing the best ways to communicate with your customers.
  • Showing which digital platforms they make the most use of.
  • Highlighting specific pain points, challenges, and problems common among your customers.

Your customer journey map gives specific details about your customers that you can use to construct an accurate representation of how they’d experience your brand. The more detailed the data, the more precise the customer journey. You can accurately map out:

  • What problems or pain points they have.
  • Precisely what would bring them to research a solution.
  • How they’d choose your product as the solution.
  • How they’d choose your brand to purchase that product.
  • How they make purchases in general.
  • How they’d potentially purchase your product.
  • How they’d receive your product and experience it.
  • How they’d ultimately become a loyal customer.

The Best Ways to Use Data to Inform Your CRM Team

How can you use data to inform your customer relationship management team?

Using data effectively to inform your CRM team starts with functional CRM software. CRM software can help seamlessly establish a relationship with customers by ensuring all the right information is collected, stored, and grouped correctly. The organizational structure CRM software offers makes it easier for your CRM team to leverage collected data and shift strategies to accommodate customer needs and values.

CRM software is only as efficient as your team members are. Each team member plays an integral role in analytics and in understanding the human behavior that drives data collection. Practical data analysis can inform your CRM team and help them with things like:

  • Tailoring your social media content.
  • Tracking spending habits and purchasing patterns.
  • Catering website structure towards increased customer satisfaction.
  • Boosting organic traffic.
  • Identifying what data to collect and how to manage it.
  • Establishing goals for your marketing efforts.
  • Understanding the most efficient communication channels for your customers.
  • Identifying what lifestyle behaviors could influence them to complete a purchase with your business.
  • Exploring paid advertising avenues.
  • Segmenting your customer base.
  • Constructing ads for digital platforms.
  • Furthering your business’ online presence.
  • Personalizing customer experiences with your brand through specific written content and visual media.
  • Converting visitors to paying customers through sales techniques.

The ultimate goal of data collection and CRM software is customer acquisition and retention. Your CRM team can also use data to engage with customers consistently, streamline marketing funnels, and improve profitability with better products and services. They’re responsible for finding out what specific data is essential to collect, how it relates to moving your business forward, and how you can turn the data into actionable strategies.

Other ways data collection informs your CRM team and helps shape overall business strategy include:

  • Bettering customer resolution strategies and implementing simple systems for addressing customer issues.
  • Choosing the most effective marketing media, brand colors, and overall business aesthetic.
  • Generating personalized purchase recommendations and suggestions and automating how they’re sent to customers.
  • Identifying opportunities for customer retention and gaps in acquisition techniques.
  • Achieving a deeper understanding of the lifestyle your ideal customer lives.
  • Improving your consumer database with a clean, simple, concise organizational structure.

You Can Maneuver Your Customer Journey Through Effective Use of Data in CRM

Data shows your CRM team how your customers are interacting with the brand and experiencing each stage of the buyer’s journey. Data can show you exactly who and what to focus on in building brand awareness. Tailor your messaging, aesthetics, events, engagement techniques, and marketing to your ideal customer’s behaviors.

By intentionally tracking essential customer data, you can implement the following potential solutions to maintain a healthy connection with each customer:

  • Changing your brand colors to those that evoke emotions specific to how you want visitors to experience your product or service.
  • Experimenting with different Call-to-action or CTA button colors and messaging.
  • Shifting media and image choices to those most popular with loyal customers.
  • Focusing solely on social media platforms most used by your target audience.
  • Using marketing messaging to speak directly to their current experiences.
  • Using the platforms your target audience gets their information from.
  • Establishing a preferred communication method.
  • Offering a variety of payment options that take advantage of advanced technology.

Your CRM team can use data to make sales and marketing decisions that best align with company goals. Collecting and using customer relationship management data is the best practice for any business looking to define their customer’s journey with their brand clearly.

5 AI Tricks to Grow Your Online Sales

The way people shop is currently changing. This means that online stores need optimization to stay competitive and meet the needs of customers. In this post, we’ll bring up five ways in which you can use artificial intelligence technology in an online store to grow your revenue. Let’s begin!

1. Personalization with AI

Opening the list of AI trends that are certainly worth covering is a step up in personalization. Did you know that, according to a survey held by Accenture, more than 90% of shoppers are more likely to buy from stores and brands that offer suitable product recommendations?

This is exactly where artificial intelligence can give you a big hand. Such progressive technology analyzes the behavior of your consumers individually, keeping in mind their browsing and purchasing history. After collecting all the data, AI draws the necessary conclusions and offers those product recommendations that the user might like.

Look at the example below, where the block contains a carousel of neat product options. Obviously, this “move” can give a big boost to average cart sizes.

Screenshot taken on the official Reebok website

2. Smarter Search Options

With the rising popularity of AI voice assistants and the leap in technology in general, the way people look for things on the web has changed. Everything is moving towards saving time and getting better results faster.

One such trend is embracing speech-to-text and image search technology. Have you noticed how many search bars now have “microphone icons” for speaking your request?

On a similar note, numerous sites have made a big jump forward after incorporating search by picture. In this case, uploaded photos are analyzed by artificial intelligence technology. The system studies what is depicted in the image and cross-checks it with the products sold in the store. Within seconds the user is provided with a selection of similar products.

Without any doubt, this greatly helps users find what they are looking for faster. As you might have guessed, it is a time-saving feature. In essence, it removes the need to open dozens of product pages on multiple sites when seeking out a liked item that the user has taken a screenshot or photo of.

Check out how such a feature works on the official Amazon website by taking a look at the screenshots of StyleSnap provided below.

Screenshot taken on the official Amazon StyleSnap website

3. Assisting Clients via Chatbots

The next point on the list is devoted to AI chatbots. This feature can be a real magic wand for client support, which in turn benefits online sales.

Real customer support specialists usually aren’t available 24/7. And since most requests concern repetitive topics, having a chatbot instantly handle many of the questions is a neat way to take work off humans.

Such chatbots use machine learning to get better at understanding and processing client queries. How do they work? They’re “taught” via scripts and scenario schemes. Therefore, the more data you supply them with, the more matters they’ll be able to cover.

Case in point, such a chat is available on the official Victoria’s Secret website. If the user launches the Digital Assistant, the messenger bot starts the conversation. Based on the topic the user selects from the offered options, the bot defines what will be discussed.

Screenshot taken on the official Victoria’s Secret website

4. Determining Top-Selling Product Combos

In an AI use case for boosting online revenue similar to the one mentioned in the first point, it becomes much easier to cross-sell products when artificial intelligence “cracks” the actual top matches. According to findings by Sumo, you can boost your revenue by 10 to 30% if you upsell wisely!

The product database of online stores grows by the month, making it harder to know for sure which items go well together and complement each other. With AI on your analytics team, you don’t have to scratch your head guessing which products people are likely to buy along with the item they’re browsing at the moment. This work of singling out the data can be done for you.

As seen in the screenshot from the official MAC Cosmetics website, the upselling section on the product page presents supplementary items in a carousel. Thus, the chance of these products being added to the shopping cart increases (compared to the situation where clients would have to search the site and find these products by themselves).

Screenshot taken on the official MAC Cosmetics website

5. “Try It On” with a Camera

The fifth AI technology on this list is virtual try-on, which brings the power of augmented reality technology into the world of sales.

Especially in fields like cosmetics or accessories, it is important to find ways to help clients make up their minds and encourage them to buy an item without testing it physically. If you want, you can play around with such real-time functionality and put on makeup using your camera on the official Maybelline New York site.

Consumers ultimately become happier because this solution removes frustration and unneeded doubt. With everything evident and clear, people don’t have to take a shot in the dark about what will be a good match; they can see it.

Screenshot taken on the official Maybelline New York website

In Closing

To conclude everything stated in this article, artificial intelligence is a big turning point. Incorporating various AI-powered features into an online retail store can be a neat advancement leading to visible growth in conversions.

What is Data Warehousing and Data Mining – Know the difference between Data Warehousing and Data Mining

Getting started

Before we start off with Data Warehousing and Data Mining, let us first set the stage. This will help in understanding why we need them in the first place. By the end of the post, you will feel much more acquainted with the two topics at hand. So here goes!

With the exponential increase in the generation and consumption of data, organizations have to deal with a humongous amount of data on their end. We have all heard the talk about data being the new oil, which is turning out to be a reality. Data is considered an extremely valuable asset for every organization, and they attempt to put it to good use. Data assists organizations in making business decisions that generate significant revenue. It helps in understanding the current requirements of the market, which is vital for some organizations to stay in business. Thus, it is essential for organizations to store the data somewhere so it can later be utilized for analytical purposes.

 

Introduction to Data Warehousing

Data Warehousing is the process of collecting and managing data from a variety of sources. The data can also come from different departments of an organization, such as Finance, Marketing, etc. The idea behind constructing a data warehouse is to be able to use the data for analytical purposes and to make decisions based on that analysis. Data warehouses are a pivotal component of any analytics and business intelligence operation in an organization. As organizations generate data at various sources, we might need different tools to bring the data together in one place. The process of data warehousing generally involves ETL (Extract – Transform – Load) tools, which help to extract the data from different sources, transform it into a suitable format, and load it into a single destination. There are tools like Google BigQuery, Amazon Redshift, etc. that allow you to connect to a vast number of sources and store the data in one place. Data warehouses can be implemented on-premise as well as in the cloud. On-premise data warehouses run on the local networks of the organization, while cloud data warehouses are provided over the internet. There is always a trade-off in deciding which one to choose, because multiple factors have to be considered, such as scalability, initial investment, recurring costs, security, speed, etc.
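As a tiny illustration of the extract–transform–load idea (a hypothetical sketch with made-up file, column, and table names; real warehouses typically use dedicated ETL tools and a proper warehouse engine rather than a local SQLite file):

    import pandas as pd
    import sqlite3

    # Extract: pull raw data from two departmental sources
    finance = pd.read_csv("finance_invoices.csv")
    marketing = pd.read_csv("marketing_leads.csv")

    # Transform: harmonize formats so both sources fit one schema
    finance["invoice_date"] = pd.to_datetime(finance["invoice_date"])
    marketing["created_at"] = pd.to_datetime(marketing["created_at"])
    combined = finance.merge(marketing, on="customer_id", how="left")

    # Load: write the consolidated table into the warehouse (here just a local SQLite database)
    with sqlite3.connect("warehouse.db") as conn:
        combined.to_sql("customer_activity", conn, if_exists="replace", index=False)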

General steps to implement a data warehouse

  1. Determining the business objectives.

Every organization can have different business objectives that define success in its terms. Some organizations are involved in a constantly changing market which will require a large number of sources while others might just need to use the data for better administration purposes. Hence the key step to initiate the creation of a data warehouse is to include the stakeholders and determine their business objectives.

  2. Analyzing and obtaining the information regarding the objectives.

Once the business objectives are decided, the information regarding those objectives needs to be obtained. The information can come from periodic reports, a CRM application, etc., depending on the organization. Extensive interaction with the supervisors responsible for that information can be crucial for this process. Interacting with the people who deal with this routine daily can yield a lot of information; these people tend to know the bits and pieces of the entire task, and any information obtained from them can lead to a better implementation of the data warehouse. This step also helps in identifying the key performance indicators for the desired objectives.

  3. Identifying the concerned departments in the organization.

The key performance indicators can differ from organization to organization. For example, in an organization that manufactures different products, if the objective were to increase revenue, then the number of units sold would be one of the key performance indicators. These indicators also determine which departments need to be involved.

  4. Creating a layout of the data warehouse.

With all the key performance indicators discovered, create a layout of the data warehouse. It will help in providing an overview of the entire data warehouse and the data that will be stored in it. It will determine what key indicators are being stored in the data warehouse and whether all the indicators required for our objectives exist or not. As the data will be pulled in periodically, think about all of the investment costs including the hardware costs and recurring costs.

  5. Locating the sources of data and its transformation.

After finalizing the layout, we need to locate the sources of the data and figure out how the data can be extracted from them. The data may sit in a CRM application or in a database; we either export the data or make use of an ETL tool that can connect to the data source. As the data comes from different sources, all of it needs to be consolidated. There is also a high chance that the data is not clean and needs some transformation. In case some data cannot be extracted, we need to reconsider the layout of the data warehouse. Often, these two steps are performed in parallel.

  6. Implementation of the data warehouse layout

Once the objectives are set, the concerned stakeholders are looped into the plan, the information is collected and analyzed, the layout of the data warehouse is planned, and the data sources are located and transformed, it is time to put everything to work. Once the data in the warehouse can be accessed, the data needs to be pulled from the sources periodically. We need to monitor the data warehouse continuously and check for any irregularities.

 

Introduction to Data Mining

Data Mining is the process of extracting insightful information from a large amount of raw data. The intent of this process is to find trends and patterns which help organizations make data-driven decisions. This process is one of the steps of KDD, or Knowledge Discovery in Databases. KDD also includes sub-processes like data cleaning, data transformation, data pre-processing, etc. There are a number of tools that we can use for Data Mining, Tableau and Power BI being two of them. We can also make use of certain packages in the Python and R languages to extract information. Data Mining helps in analyzing a huge amount of data in a short amount of time. It is an essential step in any data science project, because it provides exploratory insights which might tell us which features are very important for prediction or which features provide very little information. Data Mining helps organizations in various ways, such as analyzing their market expenditure, resource management, fraud detection, etc. There are many data mining techniques, including Association rules, Classification, Clustering, Regression, Outlier Detection, etc. It is a very cost-effective solution for the organization, and it can regularly provide new information and analysis depending on the available skill sets. The process of data mining also begins with an understanding of the business and the data. Developing thorough knowledge of the business and its related data is pivotal for analysts to be able to perform operations on it.

Examples of Data Mining

  1. Have you ever gotten recommendations on an e-commerce store while buying a product? For example, when you buy a smartphone, the website shows you phone cases or accessories. This is known as basket analysis. In this analysis, customers’ buying patterns, and what they tend to buy along with other products, are analyzed. It helps not only on e-commerce websites but can also be implemented at any supermarket or grocery store. It can be done with association-rule learning, as in the sketch after this list.
  2. Fraud detection is one of the most vital use cases of Data Mining. Banks have a lot at stake with fraudulent transactions because they have to bear the losses for them. Data Mining can help in analyzing the data and catching these fraudulent activities. Although it is a tedious task, organizations can try to extract patterns that will help them get hold of these fraudsters.
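To make the basket-analysis example from point 1 concrete, here is a minimal sketch of association-rule mining (the transactions are made up, and the mlxtend package is only one possible choice):

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # Toy transaction data: each row is one basket, True = the item was bought in that basket
    baskets = pd.DataFrame({
        "smartphone": [True, True, False, True],
        "phone_case": [True, True, False, False],
        "charger":    [False, True, True, True],
        "headphones": [False, False, True, True],
    })

    frequent_itemsets = apriori(baskets, min_support=0.5, use_colnames=True)
    rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.6)

    # Rules such as {smartphone} -> {phone_case} suggest what to recommend at checkout
    print(rules[["antecedents", "consequents", "support", "confidence"]])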

 

The link between Data Warehousing and Data Mining

Although we will mention the differences between these two terms, let us first see how data warehousing and data mining are linked to each other. In fact, data warehousing and data mining work in conjunction. As mentioned earlier in the article, data mining involves extracting useful information from tons of data. But where does the analyst obtain that data? The search ends at the data warehouse itself, as it is the single point of contact for all their data needs. The data mining process obtains all the information required for the analysis from the data warehouse. In many instances, the analytical team is able to extract useful information from data merged from two completely different departments of the organization, or even from different offices of the same department.

Distinguishing between Data Warehousing and Data Mining

  • Process: Data warehousing is a process of storing data from multiple sources; data mining is a process of using different methods to analyze the raw data.
  • Ideology: Data warehousing centralizes all the sources of data in one location for ease of use in analytical processes; data mining uses the data to find trends and patterns that help the organization make good decisions.
  • Requirements: To implement a data warehouse, we need to locate the different data sources and determine how the data can be extracted from them; to perform data mining, a data warehouse needs to be in place so that all the sources can be viewed and the raw data analyzed.
  • Maintenance: For data warehousing, the data pipelines need to be maintained and monitored to prevent any loss of data; for data mining, the methods used for extracting information need to be maintained and monitored to check whether they still provide useful information.
  • Periodicity: The data is extracted and stored in the data warehouse periodically; the data is analyzed periodically to continuously extract useful information.
  • Tools: Tools used for data warehousing include Google BigQuery, Amazon Redshift, etc.; tools used for data mining include Tableau, Power BI, etc.
  • Benefits: Data warehousing gives easy access to the historic data of the organization; data mining helps in detecting fraudulent operations, in financial and market analysis, etc.

End Notes

Any organization that plans to use its data for analytical purposes needs to implement a data warehouse and different data mining techniques. This requires a good amount of skill and resources to put to good use. Another element that is vital in the entire data analysis process is the interpretation of the analysis. One should be able to correctly interpret what the data is saying, because all decisions are based on these interpretations. Bad decisions can cost organizations a fortune, but spot-on decisions can earn them a fortune as well. This explains the increasing demand for positions such as Data Engineers, Data Scientists, and Data Analysts.

This article was centered on giving its readers an overview of data warehousing and data mining. It mentioned the steps that are generally involved in the implementation of a data warehouse, illustrated a couple of examples of Data Mining, and explained how data warehousing and data mining are linked to each other. Lastly, it provided some parameters for distinguishing between data warehousing and data mining.