Posts

Role of Data Science in Education

Ad / Sponsored Post

Data science is a young discipline that emerged for several reasons. First, we now have enough capacity to gather data and work with it at scale. Second, society accumulates enormous amounts of information every minute, and devices can save it and send it to data centers without any human involvement. But storing data is only one step. The main task is to analyze it, present the results, and then draw conclusions and make forecasts. A team of online essay experts from the professional academic writing company SmartWritingService.com says that requests for academic papers on data science research topics have increased sharply. This is because data analysts' knowledge is useful in many fields, and the demand for such specialists is very high. Data science is an important part of sociology, political forecasting, game theory, statistics, and other disciplines. Students need to study it and use it in future research, which is why universities have added big data courses to stay current and meet demand. Let's look at the role of data science in education so you can draw your own conclusion about its importance.

Why is data science necessary, and how can you become good at it?

Improving the learning process.

All students are different. They rely on different skills and perceive information differently depending on many nuances. For some, the best way to absorb a lecture is to listen without interruption. Others prefer discussion during lectures. Some like to take notes as they go; others listen carefully and take notes later from an audio recording of the lecture. Every group is different, but the goal is the same: to absorb as much information as possible. Data science is the best assistant here, because it can show the teacher the most effective ways of communicating with students.

Using big data for personal needs. 

Have you ever thought that each of us is already a data scientist? We all collect data, analyze it, and act on our conclusions. Take shopping, for example. Every time you go to the grocery store, you notice how many people are there and how long the line is. When you plan your next trip, you use that data to estimate how much time you will need and decide whether it is optimal to go right now or better to visit later, when the store is almost empty. The same applies to studying: based on your observations and experience (both of which are data), you conclude how much time you need to spend on each task.

Learning data science as an additional course. 

Knowing data science as a secondary skill is very helpful nowadays. For an employer, it is a bonus that can be a deciding factor. The ability to analyze data is essential in every profession: it helps you understand the current market and is necessary for sales. The hardest part of a data scientist's job is asking the right questions when collecting data, and if you are good at that, your salary will increase substantially.

How do you become a data science specialist?

Big data and artificial intelligence (AI), and why their development matters. We believe AI is the future of education. First of all, a machine can present information and repeat it as many times as a student needs, so it effectively studies along with the group, and the quality of communication grows rapidly. The foundation of machine learning in AI is data science: analyzing data gives the AI a set of possible reactions and actions, which can be changed or improved as new data is processed. At its core, AI is a huge set of data. It draws on reactions to real-life situations, the speed and timbre of the interlocutor's voice, the country, the hour of the day, and many other signals that together determine how the AI responds to a situation. The development of AI is only a question of time, and it will help us move faster in all spheres of life.

Can we ignore data analytics and opt out of it?

The only way to ignore data science would be to throw away all your gadgets and move to a cabin in the woods. Even that would not help: anyone interested in counting the population of people who live in the woods would happily add you to their list and analyze your +1 using data science. Every time you buy bread, or decide not to buy something online, you are counted. Later, someone will analyze why you did or did not become a customer and record your age, sex, country of request, and every other parameter they can capture. We can only accept this reality and try to use it for our own needs.

Don't be afraid of becoming a subject of data science. It does not affect your privacy much, because there are billions of us, and each of us is only one point in the statistics. Thanks to such analytics, analysts can tailor offers and content to you. On the one hand, that is a kind of manipulation; on the other, it saves time and resources. Use it to make your life more comfortable, but remember that there are no coincidences when it comes to data.


CRISP-DM methodology in technical view

This paper discusses the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology and its steps, including the techniques that help carry out a data mining process successfully. Before turning to CRISP-DM, it is worth understanding what data mining is. So I first introduce data mining and then discuss CRISP-DM and the steps that any beginner data scientist needs to know.

1 Data Mining

Data mining is an exploratory analysis in which there is no prior idea of what the interesting outcome will be (Kantardzic, 2003). It is a process of exploring and analysing a large set of data to discover meaningful information that helps a business make proper decisions. For better business decisions, data mining is a way to select features, correlations, and interesting patterns from a large dataset (Fu, 1997; SPSS White Paper, 1999).

Data mining is a step-by-step process for discovering knowledge from data. Pre-processing is a vital part of it: noisy data is removed, multiple data sources are combined, relevant features are retrieved, and the data is transformed for analysis. After pre-processing, a mining algorithm is applied to extract data patterns. Data mining is therefore a step-by-step process in which algorithms are applied to find meaningful patterns; in fact, it is more than conventional analysis (Read, 1999).

Data mining and statistics are closely related. The main goal of both is to find structure in data, and data mining can be seen as a part of statistics (Hand, 1999). However, data mining also uses tools, techniques, databases, and machine learning that are not part of statistics, while relying on statistical algorithms to find patterns and uncover hidden decisions.

The objective of data mining can be prediction or description. In prediction, several features of the dataset are used to predict an unknown future value; description, on the other hand, involves identifying patterns in the data so that they can be interpreted (Kantardzic, 2003).

Figure 1.1 shows that data mining is only one part of extracting unknown information from data, but it is the central step of the whole process. Before data mining, several other steps need to be completed: data is collected from several sources, integrated, and kept in data storage. The stored, unprocessed data is then evaluated and selected through pre-processing to give it a standard format, after which the data mining algorithm analyses it for hidden patterns.

Figure 1.1: Data Mining Process

2 CRISP-DM Methodologies

The Cross-Industry Standard Process for Data Mining (CRISP-DM) is the most popular and most widely used data mining methodology. CRISP-DM breaks the life cycle of a data mining project down into six phases, and each phase consists of several second-level generic tasks designed to cover all possible data mining applications. CRISP-DM extends KDD (Knowledge Discovery and Data Mining) into six steps that form the sequence of a data mining application (Martínez-Plumed et al., 2019).

Both data science and data mining projects extract meaningful information from data. Data science is an art in which a lot of time has to be spent understanding the business value and the data before applying any algorithm, and only then is a project evaluated and deployed. CRISP-DM helps any data science or data mining project from start to finish by giving it a step-by-step process.

In today's world, billions of data points are generated every day, and organisations struggle with this overwhelming volume while trying to reach their business goals. As a comprehensive data mining methodology, CRISP-DM helps a business achieve its desired goal by analysing data.

CRISP-DM (Cross-Industry Standard Process for Data Mining) is a well-documented, freely available data mining methodology. It was developed by more than 200 data mining users and many mining tool and service providers, funded by the European Union. CRISP-DM encourages organisations to follow best practice and provides a structure for data mining that leads to better, faster results.

CRISP-DM is a step-by-step methodology. Figure 2.1 shows its phases and the data mining process. A one-sided arrow indicates a dependency between phases, and a double-sided arrow represents a repeatable process. The six phases of CRISP-DM are Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation and Deployment.

Figure 2.1: CRISP-DM

2.1 Business Understanding

Business understanding, or domain understanding, is the first step of the CRISP-DM methodology. At this stage the area of the business that is going to be transformed into meaningful information through analysis, processing, and the application of algorithms is identified. Business understanding identifies the available resources (human and hardware) and the problems, and sets a goal. The business objectives should be agreed with the project sponsors and with the other business units that will be affected. This step also details the business success criteria, requirements, constraints, risks, project plan, and timeline.

2.2 Data Understanding

Data understanding is the second phase and is closely related to business understanding. This phase focuses mainly on data collection, on getting familiar with the data, and on detecting interesting subsets of it. Data understanding has four sub-steps:

2.2.1 Initial data collection

This sub-step considers the data collection sources, which fall into two main categories: external (outsourced) data and internal data. External data may be costly, time-consuming to obtain, and of lower quality; internal data is easier and cheaper to collect but may contain irrelevant records. If internal data does not satisfy the goals of the analysis, it is necessary to turn to external sources. Data collection also establishes whether the data is quantitative (continuous, count) or qualitative (categorical), and whether the dataset is balanced or imbalanced. During collection, random errors, systematic errors, exclusion errors, and selection errors should be avoided.

2.2.2 Data Description

Data description performs an initial analysis of the data. At this stage the source of the data is determined (RDBMS, SQL, NoSQL, big data platforms, etc.), and the data is described in terms of its size (a larger dataset gives more accurate results but takes longer to process), the number of records, tables, databases, and variables, and the data types (numeric, categorical, or Boolean). This phase also examines the accessibility and availability of the attributes.

2.2.3 Exploratory data analysis (EDA)

Exploratory data analysis covers inferential statistics, descriptive statistics, and graphical representation of the data. Inferential statistics generalise from the sample to the entire population through sampling and hypothesis testing. Parametric hypothesis tests (null or alternative: ANOVA, t-test, chi-square test) are performed when the population distribution and its parameters (mean, variance, standard deviation, proportion) are known; non-parametric tests are used when the distribution is unknown or the sample size is small. For sampling, simple random sampling can be used when the dataset is balanced; for an imbalanced dataset, random resampling (under- and over-sampling), k-fold cross-validation, SMOTE (synthetic minority over-sampling technique), cluster-based sampling, or ensemble techniques (bagging and boosting, e.g. AdaBoost, Gradient Tree Boosting, XGBoost) can be used to form a balanced dataset; a minimal resampling sketch is given below.
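As a hedged illustration of the resampling idea above, here is a minimal sketch that balances a toy dataset by randomly over-sampling the minority class with scikit-learn; the DataFrame and its `target` column are invented for the example, and SMOTE or the other listed techniques would require separate tools (e.g. the imbalanced-learn package).

```python
# Minimal sketch: random over-sampling of the minority class with scikit-learn.
# The DataFrame and its "target" column are hypothetical placeholders.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({
    "feature": range(12),
    "target":  [0] * 10 + [1] * 2,   # imbalanced: 10 vs 2
})

majority = df[df["target"] == 0]
minority = df[df["target"] == 1]

# Randomly duplicate minority rows until both classes are the same size.
minority_upsampled = resample(
    minority, replace=True, n_samples=len(majority), random_state=42
)
balanced = pd.concat([majority, minority_upsampled])
print(balanced["target"].value_counts())
```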

Descriptive statistics cover the mean, median, and mode as measures of central tendency (the first-moment business decision). The second-moment business decision describes the measures of dispersion: the variance, standard deviation, and range of the data. The third- and fourth-moment business decisions describe, respectively, skewness (positive skewness: heavier tail to the right; negative skewness: heavier tail to the left; zero skewness: symmetric distribution) and kurtosis (leptokurtic: heavy tails; platykurtic: light tails; mesokurtic: normal distribution).
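A minimal sketch of these first- to fourth-moment summaries, computed with NumPy and SciPy on made-up data:

```python
# Minimal sketch of the four "moment" summaries described above, on made-up data.
from statistics import mode
import numpy as np
from scipy import stats

data = np.array([2, 3, 3, 4, 5, 5, 5, 6, 7, 30])   # 30 drags the right tail out

print("mean / median / mode :", data.mean(), np.median(data), mode(data))
print("variance / std / range:", data.var(ddof=1), data.std(ddof=1), data.max() - data.min())
print("skewness:", stats.skew(data))       # > 0 here: heavier tail to the right
print("kurtosis:", stats.kurtosis(data))   # excess kurtosis relative to a normal distribution
```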

Graphical representation is divided into univariate, bivariate, and multivariate analysis. In univariate analysis, box-and-whisker plots and histograms identify outliers and the shape of the distribution, and a Q-Q (quantile-quantile) plot shows whether the data is normally distributed. On a whisker plot, any point above Q3 + 1.5 * IQR or below Q1 - 1.5 * IQR is an outlier. For bivariate analysis, correlation is examined with a scatter plot, which shows positive, negative, or no correlation, indicates whether the relationship is linear or non-linear, and also reveals clusters and outliers. Multivariate relationships have no single standard graphical analysis; regression analysis, ANOVA, and hypothesis testing are used instead.
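Below is a small sketch of the 1.5 * IQR outlier rule described above; the numbers are invented for illustration.

```python
# Minimal sketch of the 1.5 * IQR outlier rule; `values` is made-up data.
import numpy as np

values = np.array([12, 14, 15, 15, 16, 17, 18, 19, 21, 60])  # 60 is a likely outlier

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = values[(values < lower) | (values > upper)]
print(f"IQR bounds: [{lower:.1f}, {upper:.1f}], outliers: {outliers}")
```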

2.2.4 Data Quality analysis

This phase identifies and describes potential data quality problems such as outliers, missing data, the level of granularity, validity, reliability, bad metadata, and inconsistency. Here, attribute agreement analysis (AAA) is used to check discrete data for errors, while continuous data is analysed with gauge repeatability and reproducibility (Gage R&R) studies that follow standard operating procedures (SOPs). Gage R&R quantifies how much of the variation in the measurement data is due to the measurement system itself.

2.3 Data Preparation

Data preparation is the most time-consuming stage of any data science project; typically 60% to 70% of project time is spent on it. The data preparation steps are described below.

2.3.1 Data integration

Data integration involves integrating or merging multiple datasets. Datasets that share the same attributes (columns) are appended together, while datasets with different attributes are merged (joined).

2.3.2 Data Wrangling

In this sub-step the data is cleaned, curated, and prepared for the next stage. Outliers are analysed and treated with the 3R technique (rectify, remove, retain); in special cases where there are many outliers, they may need to be treated separately (for example, upper outliers in one subset and lower outliers in another), and an alpha (significance-level) trimming technique can be used to separate the outliers from the original dataset. If the dataset has missing data, an imputation technique such as mean, median, mode, regression, or KNN imputation is needed.

If the dataset is not normally distributed, or has a collinearity or autocorrelation problem, transformation techniques such as log, exponential, square root, reciprocal, or Box-Cox transforms are applied. Standardisation ((x - mean) / standard deviation) or normalisation (min-max scaling) is used to make the data unit-free and comparable in scale. If the data needs to be converted into categories, discretisation (binning or grouping) is used, and for factor variables (those with a limited set of values) dummy variables are created, for example with one-hot encoding. Clustering can also be used here to transform heterogeneous data into more homogeneous groups, and data inconsistencies are resolved so that all values end up on a single, consistent scale. A minimal sketch of some of these steps follows.
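The following is a minimal, hedged sketch of a few of these wrangling steps (median imputation, min-max scaling, and one-hot encoding) using pandas and scikit-learn; the toy DataFrame and its column names are assumptions made for the example.

```python
# Minimal sketch of median imputation, min-max scaling and one-hot encoding.
# The toy DataFrame and column names are hypothetical.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "income": [42000, 55000, None, 61000],            # numeric column with a missing value
    "city":   ["Dhaka", "Delhi", "Dhaka", "Mumbai"],  # categorical (factor) column
})

# Impute the missing numeric value with the column median.
df[["income"]] = SimpleImputer(strategy="median").fit_transform(df[["income"]])

# Min-max scaling brings the numeric column onto a 0-1, unit-free scale.
df[["income"]] = MinMaxScaler().fit_transform(df[["income"]])

# One-hot encode the categorical column (dummy variable creation).
df = pd.get_dummies(df, columns=["city"])
print(df)
```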

2.3.3 Feature engineering and selection/reduction

Feature engineering is also called attribute generation or feature extraction. Feature extraction creates new features from the original ones, often reducing their number, to make the model simpler. Feature engineering can also normalise features or produce new calculated features. It is therefore a pre-processing technique that improves data quality through cleaning, integration, reduction, transformation, and scaling.

Feature selection reduces multicollinearity (highly correlated features) and keeps the model simple. The two main types of feature selection are supervised and unsupervised. Principal Component Analysis (PCA) is an unsupervised feature reduction technique, while Linear Discriminant Analysis (LDA) is a supervised technique mainly used for classification problems; LDA works by comparing the means of the variables across classes. Supervised feature selection comes in three flavours: filter, wrapper, and embedded (ensemble) methods. Filter methods are easy to implement, wrapper methods are computationally costly, and embedded methods perform selection inside the model itself.
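As a small illustration of unsupervised feature reduction, here is a hedged PCA sketch with scikit-learn; the synthetic feature matrix (with two deliberately correlated columns) is an assumption made for the example.

```python
# Minimal sketch of unsupervised feature reduction with PCA.
# X is a hypothetical feature matrix with correlated columns.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([
    x1,
    2 * x1 + rng.normal(scale=0.1, size=200),   # nearly collinear with x1
    rng.normal(size=200),
])

pca = PCA(n_components=2)              # keep two principal components
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)   # variance retained by each component
```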

2.4 Model

2.4.1 Model Selection Technique

Model selection is influenced by both accuracy and performance requirements: a recommendation engine, for example, needs fast performance, whereas banking fraud detection needs high accuracy. Models fall into two main categories: supervised learning, which predicts an output variable from given input variables, and unsupervised learning, which has no output variable.

In supervised learning, if the output variable is categorical, the task is a classification problem (two-class or multiclass). If the output variable is continuous (numerical), the task is a prediction (regression) problem. If the task is to recommend items based on relevant information, it is a recommendation problem; if it is to retrieve data according to relevance, it is a retrieval problem.

In unsupervised learning there is no target or output variable; all variables are treated as inputs. Unsupervised learning is often framed as a clustering problem, where the dataset is clustered to support future decisions.

In reinforcement learning, an agent solves a problem by receiving rewards for success and penalties for failure. Semi-supervised learning combines supervised and unsupervised methods: the problem is first clustered with an unsupervised technique, and then a different supervised algorithm (e.g. a linear model, a neural network, or k-nearest neighbours) is applied to each cluster.

When the output variable is known, supervised learning is used. Regression is the first choice when interpretation of the parameters is important: linear regression for a continuous response; logistic regression for a discrete response with two categories; multinomial or ordinal regression for a discrete response with more than two categories; Poisson regression for a count response whose mean equals its variance; negative binomial regression for a count response whose variance is greater than its mean; and zero-inflated Poisson (ZIP) or zero-inflated negative binomial (ZINB) models when the response contains an excessive number of zeros.

Apart from regression, the other supervised techniques can be used for both continuous and categorical response variables: KNN (k-nearest neighbours), naive Bayes, black-box techniques (neural networks, support vector machines), and ensemble techniques (stacking; bagging, such as random forests; and boosting of decision trees, such as gradient boosting, XGBoost, and AdaBoost).

When the response variable is unknown, unsupervised learning is used. For row reduction there are clustering methods such as k-means and hierarchical clustering; for column (dimension) reduction there are PCA (principal component analysis), LDA (linear discriminant analysis), SVD (singular value decomposition), and others. In market basket analysis (association rules), the measures are support and confidence, and the lift ratio determines which rules are important. Recommendation systems, text analysis, and NLP (natural language processing) also make heavy use of unsupervised techniques.

For time series data, a forecasting technique has to be selected, and forecasting may be model-based or data-based. Model-based approaches capture trend with linear, exponential, or quadratic terms and seasonality with additive or multiplicative components. Data-based approaches include autoregressive models, moving averages, the last-sample (naive) method, and exponential smoothing (e.g. SES, simple exponential smoothing; double exponential smoothing; and Winters' method). A small SES sketch follows.
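Here is a minimal sketch of simple exponential smoothing (SES), one of the data-based approaches listed above, implemented directly in Python; the sales series and the smoothing constant alpha are made up for illustration.

```python
# Minimal sketch of simple exponential smoothing (SES).
# The series and alpha value are invented for illustration.
def simple_exponential_smoothing(series, alpha=0.3):
    """Return the smoothed series; the last value serves as a one-step forecast."""
    smoothed = [series[0]]                      # initialise with the first observation
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(simple_exponential_smoothing(sales)[-1])  # naive forecast for the next period
```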

2.4.2 Model building

After a model type is selected according to the model criteria, the model has to be built. For model building the data is split into training, validation, and testing sets. Sometimes the data is split into only training and testing sets, in which case information may leak from the test data into the training process and cause overfitting. The training dataset should therefore be divided into training and validation subsets: the model is trained on the training subset, tested against the validation data, and tuned according to the feedback from validation. If the accuracy is acceptable and the error is reasonable, the training and validation data are combined, the model is rebuilt, and it is tested on the unseen test dataset. If both the training error and the testing error are small, the model is a good fit; if the training error is low and the testing error is high, the model is overfitted (high variance); if both errors are high, the model is underfitted (high bias). When a model is overfitted, a regularisation technique is applied (e.g. lasso or ridge for linear models; pre-pruning or post-pruning for decision trees; the choice of K for KNN; Laplace smoothing for naive Bayes; dropout, drop-connect, or batch normalisation for neural networks; the kernel trick for SVMs).

When the data is balanced, it is split into training, validation, and testing sets, with the training set being the largest. If the dataset is imbalanced, random resampling (over- and under-sampling) can artificially enlarge the training data: the data is randomly partitioned, the model is built on each partition, and the accuracies are averaged. With k-fold cross-validation, k cross-validation datasets are created, a model is built and validated on each, and the accuracies of all models are averaged. Further techniques for imbalanced datasets include SMOTE (synthetic minority over-sampling technique), cluster-based sampling, and ensemble techniques such as bagging and boosting (AdaBoost, XGBoost). A split and cross-validation sketch follows.
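The following hedged sketch shows a train/validation/test split and k-fold cross-validation with scikit-learn; the synthetic dataset and the logistic regression model are arbitrary choices for illustration.

```python
# Minimal sketch of a train/validation/test split and 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out a test set, then carve a validation set out of the remaining data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

# 5-fold cross-validation on the training portion averages accuracy across folds.
scores = cross_val_score(LogisticRegression(max_iter=1000), X_train, y_train,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean())
```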

2.4.3 Model evaluation and Tuning

At this stage the model is evaluated in terms of error and accuracy, and tuned until both are acceptable. For a continuous outcome variable there are several ways to measure error: mean error, mean absolute deviation, mean squared error, root mean squared error, mean percentage error, and mean absolute percentage error, with the mean absolute percentage error being the most widely accepted. For continuous data, if the error is known, the accuracy follows directly, since error and accuracy together sum to one. The error function is also called the cost function or loss function.
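A minimal sketch of these error measures for a continuous outcome, computed with NumPy on invented numbers:

```python
# Minimal sketch of common error measures for a continuous outcome.
# The actual and predicted values are made-up numbers.
import numpy as np

actual    = np.array([100.0, 150.0, 200.0, 250.0])
predicted = np.array([110.0, 140.0, 210.0, 230.0])

errors = actual - predicted
mae  = np.mean(np.abs(errors))                 # mean absolute deviation
rmse = np.sqrt(np.mean(errors ** 2))           # root mean squared error
mape = np.mean(np.abs(errors / actual)) * 100  # mean absolute percentage error
print(mae, rmse, mape)
```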

For a model with a discrete output variable, evaluation and tuning rely on a confusion matrix (cross table). From the confusion matrix, the accuracy, error, precision, sensitivity, specificity, and F1 score are measured to judge model fitness. The ROC curve (receiver operating characteristic) and the AUC (area under the ROC curve) also evaluate a discrete output: the ROC curve plots sensitivity (true positive rate) against 1 - specificity (false positive rate). Sensitivity is the positive recall, i.e. out of all positive samples, how many the classifier is able to identify; specificity is the negative recall, i.e. out of all negative samples, how many the classifier is able to identify. A larger area under the ROC curve represents better accuracy, and a bend (step) in the ROC curve indicates a candidate cut-off value.
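Below is a hedged sketch of evaluating a binary classifier with a confusion matrix, precision/recall/F1, and AUC using scikit-learn; the labels and scores are invented for illustration.

```python
# Minimal sketch of evaluating a binary output with scikit-learn.
# The labels and predicted scores are invented for illustration.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])  # model probabilities
y_pred  = (y_score >= 0.5).astype(int)                          # 0.5 cut-off

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred))   # precision, recall (sensitivity), F1
print(roc_auc_score(y_true, y_score))          # area under the ROC curve
```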

2.4.4 Model Assessment

There are several ways to assess a model. First, the model's performance and success must be verified against the desired achievement. The results of the implemented model need to be checked for accuracy, and that accuracy must be repeatable and reproducible. It is also necessary to confirm that the model is scalable, maintainable, robust, and easy to deploy. Assessment should confirm that the evaluation results are satisfactory (precision, recall, and sensitivity are balanced) and that the model meets the business requirements.

2.5 Evaluation

In the evaluation step, all models built on the same dataset are ranked to find the best one, by assessing the quality of their results, the simplicity of the algorithm, and the cost of deployment. The evaluation also contains a data sufficiency report based on the model results, together with suggestions, feedback, and recommendations from the solutions team and SMEs (subject matter experts), all of which are recorded in the OPA (organisational process assets).

2.6 Deployment

The deployment process needs to be monitored for PEST (political, economic, social, technological) changes within and outside the organisation. PEST is similar to SWOT (strengths, weaknesses, opportunities, threats), where S and W represent internal changes and O and T represent external ones.

In the deployment step, the model should move seamlessly from development to production (same environment, same results, etc.). The deployment plan contains the details of the human resources and the hardware and software requirements. It also contains a maintenance and monitoring plan: the model's results and validity are checked regularly, and if required, a retire, replace, or update plan is implemented.

3 Summary

Implementing CRISP-DM is costly and time-consuming, but the methodology is an umbrella for the whole data mining process. CRISP-DM has six phases: Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation and Deployment, and every phase has its own criteria, standards, and processes. CRISP-DM is a guideline for the data mining process, so when it is implemented in a project it is necessary to follow every guideline and maintain the standards and criteria in order to get the required result.

4 References

  1. Fu, Y., (1997), “Data Mining: Tasks, Techniques and Applications”, Potentials, IEEE, 16: 4, 18–20.
  2. Hand, D. J., (1999), “Statistics and Data Mining: Intersecting Disciplines”, ACM SIGKDD Explorations Newsletter, 1: 1, 16–19.
  3. Kantardzic, M., (2003), “Data Mining: Concepts, Models, Methods, and Algorithms”, John Wiley and Sons, Inc., Hoboken, New Jersey.
  4. Martínez-Plumed, F., Contreras-Ochando, L., Ferri, C., Orallo, J. H., Kull, M., Lachiche, N., Quintana, M. J. R. and Flach, P. A., (2019), “CRISP-DM Twenty Years Later: From Data Mining Processes to Data Science Trajectories”, IEEE Transactions on Knowledge and Data Engineering.
  5. Read, B. J., (1999), “Data Mining and Science? Knowledge discovery in science as opposed to business”, 12th ERCIM Workshop on Database Research.

Top 10 Python Libraries Of All Time

Python is a very popular and renowned language that has displaced several other programming languages in the market. Its amazing collection of libraries makes it a convenient programming language for developers.

Python is an ocean of libraries serving an ample number of purposes, and as a developer you should have sound knowledge of the most important ones. You need to familiarize yourself with these libraries before working on different kinds of projects, and for data scientists they have become indispensable.

Here is a curated list of 10 Python libraries, along with their significant features, when to use them, and their benefits.

10 Best Python Libraries of All Times

  1. Pandas: Pandas is an open-source library that offers high-performance, easy-to-use data structures and data analysis tools. When can you use it? It can be used for data munging and wrangling. If you are looking for quick data aggregation, manipulation, reading, and visuals, this library is suitable. You can impute missing values, plot the data, and edit data columns; for renaming and merging, this tool can do wonders. It is a foundation library, and a data scientist should have in-depth knowledge of Pandas before any other library.
  2. TensorFlow: TensorFlow was developed by Google in collaboration with the Brain Team. Using this tool, you can instantly visualize any part of the computational graph. It is modular and offers high flexibility in its operations. This library is ideal for running and operating large-scale systems, and because it is open source, anyone can use it. What is the beauty of this library? It comes with an almost unending list of applications.
  3. NumPy: NumPy is one of the most popular Python libraries among developers, and many other libraries build on it for fast array operations. What is the beauty of NumPy? The array interface is its highlighted feature. NumPy is interactive and very simple to use, and it can solve complicated mathematical problems without daunting amounts of code. The array interface is widely used for expressing raw streams, sound waves, and images. If you want to work in machine learning, you should possess in-depth knowledge of NumPy.
  4. Keras: Are you looking for a cool Python library? Keras is one of the coolest machine learning libraries for Python. It runs smoothly on both CPU and GPU. Do you want to know where Keras is used? It is used in popular applications like Uber, Swiggy, Netflix, Square, and Yelp. Keras easily supports fully connected, pooling, convolutional, and recurrent neural networks. It works well for innovative research because it is expressive and flexible, and its modular design enables easy debugging and exploration. Various large scientific organizations use Keras for their research.
  5. Scikit-learn: If your project deals with complex data, it has to be scikit-learn. This Python machine learning library builds on NumPy and SciPy. One notable feature, cross-validation, allows models to be evaluated with more than one metric. It is used for extracting features from text and images and offers a wide range of machine learning algorithms. What are its functions? It is used for model selection, classification, clustering, and regression, and common training methods like nearest neighbors and logistic regression need only minimal configuration.
  6. PyTorch: PyTorch is a large library for tensor computation and GPU acceleration that also helps solve complicated application problems related to neural networks. It is based on Torch, a free and open-source machine learning framework. PyTorch is relatively new but gaining huge popularity and is very much a favorite among developers. Why such popularity? It comes with a hybrid front end which ensures easy usage and flexibility, and it is widely used for natural language processing applications. Do you know what the best part is? It has been catching up with TensorFlow in popularity in recent times.
  7. MoviePy: MoviePy is a tool that offers extensive functionality related to movies and video. It is used for importing, modifying, and exporting video files. Do you want to add a title to your video or rotate it 90 degrees? MoviePy helps you do all such video-related tasks. It is not a tool for manipulating image data like Pillow, but for any common task involving video in Python code, you can rely on MoviePy: it is designed to handle the standard operations and get them done quickly.
  8. Matplotlib: Matplotlib is a quintessential Python library whose presence can never be overlooked. You can visualize data and create innovative, interesting stories with it. When can you use it? You can use Matplotlib to embed different plots into applications, as it provides an object-oriented API. Any sort of visualization, be it a bar graph, histogram, pie chart, or line graph, Matplotlib can easily depict it. Do you want to know what visualizations you can create? Histograms, bar graphs, pie charts, area plots, stem plots, and line plots, together with legends, grids, and labels.
  9. Tkinter: Tkinter is a library that helps you create Python applications with a graphical user interface. It is the most common and easiest-to-use Python library for developing GUI apps: it binds Python to the Tk GUI toolkit, which runs on any modern operating system. To create a Python GUI, Tkinter is the simplest way to start instantly.
  10. Plotly: Plotly is an essential graph-plotting Python library for developers. Users can import, copy, paste, and export the data that needs to be analyzed and visualized. When can you use it? You can use Plotly to create and display figures and visual images. What is interesting is that it has excellent features for sending data to cloud servers.

What visual charts can you prepare with Plotly? You can create line, pie, bubble, dot, and scatter charts, and you can also construct financial charts, contours, maps, subplots, carpet plots, radar charts, and log plots. Do you have anything in mind that needs to be represented visually? Use Plotly!
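As a small, hedged illustration of two libraries from the list working together, here is a sketch that summarizes a made-up table with pandas and plots it with Matplotlib; the data and column names are invented.

```python
# Minimal sketch: pandas for tabular data, Matplotlib for plotting. Data is made up.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "sales": [120, 135, 160, 150],
})

print(df.describe())  # quick aggregation with pandas

df.plot(x="month", y="sales", kind="bar", legend=False)  # bar chart with Matplotlib
plt.ylabel("Sales")
plt.tight_layout()
plt.show()
```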

Finishing Up

In a nutshell, these are some of the best Python libraries of recent times, and they contribute hugely to development. If your favorite Python library didn't make this top 10 list, do not take offense.

Python comes with unending library packages, and these 10 are some of its popular and best-used ones. If you are a python developer, these are the best libraries you must have in-depth knowledge of.

New Era of Data Science in Today’s World

In today’s digital world, most organizations are flooded with data, both structured and unstructured. Data is a commodity now, and organizations should know how to monetize that data and derive a profit from the deluge. And valuing data is one of the best ways enterprises can become successful in distinguishing themselves in the marketplace.

Data is the new oil

Indeed, data itself has become a commodity, and the mere possession of abundant amounts of data is not enough. But the ability to monetize data effectively (and not merely hoard it) can undoubtedly be a source of competitive advantage in the digital economy. However, we need to refine this data. And refinement of this “new oil” will take a reasonable amount of time. In my opinion, we are still not there. As a result, “data refinement” remains a key factor for successful advanced analytics.

If we talk about the level of activity in data and analytics space in the last two years, most advanced analytics evolved around three categories:

  • Descriptive, or what has happened
  • Predictive, or what could happen, and
  • Prescriptive, or what we should do.

Descriptive analytics has been the core analytics for many years. In the past, we could only describe what had happened, using historical data (such as that found in a data warehouse) with dashboard reporting and traditional analytics. But with the advent of advanced analytics, machine learning (ML), and deep learning and artificial intelligence (AI), our focus has changed to real-time analytics. In the last two years, much work has been done in predictive analytics, and as we move forward in our analytics journey, data-centric organizations will now focus on prescriptive analytics. The use of prescriptive analytics, along with predictive analytics, is very important for any organization to be successful in the future.

Current and recent trends in data and analytics

The analytics trends revolve around AI and ML. The Analytics-as-a-Service model is an essential model for any smart, data-driven organization. We can make an impact on society and try to make the world a better place to live through the use of advanced analytics. At NTT DATA, we strive to solve these problems to improve the quality, safety and advancement of humanity. From a business perspective, we use data analytics and predictive modeling to help companies increase their sales and revenue.

Let me give you some examples. We have been involved with several technology partners in a project for the Smart City. This project involved the use of predictive analytics for the validation of critical alerts to help reduce the time and amount of data required to be processed. It used Internet of Things (IoT) devices, high-definition video cameras, and sound sensors, as well as video and sound data captured from specific locations. Eventually, the solution also integrated with available data from data sources such as crime, weather and social media. The overall objective of the Smart City project was to use and apply advanced analytics with cognitive computing to facilitate safety decision-making, and for a responder to respond earlier based on real-time data.

Another example is the Smart ICU System developed by NTT DATA for predictive detection of threats for seriously ill patients in an ICU, based on the data. This data was consolidated from various medical devices in the ICU into one platform. From that data, we developed a model that predicts the risk of complications that might occur within the next couple of hours or so of a medical event. We have also used advanced analytics provided by weather data forecasting and used predictive models to predict natural disasters.

Data and analytics strategy

A strategy is an essential aspect of any data-driven organization. It should cover data strategy for AI, ML, statistical modeling and other data science disciplines, such as predictive and prescriptive analytics. In general, advanced analytics is more predictive and actionable than retrospective. Smart organizations see positive results when they place a strategy for data and analytics in the hands of employees who are well-positioned to make decisions, such as those who interact with customers, oversee product development, or run production processes. With data-based insight and clear decision rules, employees can deliver more meaningful services, better assess and address customer demands, and optimize production.

Smart organizations must take time to clean and update their underlying modern data architecture — along with their data governance process, for a cleaner data and analytics strategy. A modern data architecture, combined with a good governance process, can leverage AI and ML to help organizations stay ahead of their competitors.

Data analytics innovation

Machine and deep learning, along with AI, are all very popular, but I would like to reiterate that advanced technologies like AI and machine learning will continue to transform data analytics. The next innovation could be the use of automated analytics, in which machine learning tools identify hidden patterns in data, for example customer retention issues, customer defaults on loans, or customers who are prone to auto accidents. Also, predictive analytics and prescriptive analytics are going to be key to any future innovations in AI and ML.

We must make targeted investments in traditional business innovation tools, along with emerging data analytics tools to derive benefits from data-driven business initiatives. We need to invest in cloud and underlying IT infrastructure to support these analytics and business initiatives. Most importantly, we also need to invest in people — cross training skilled resources and empowering the people who work closely with clients to make the right decisions for analytics.

Bias and Variance in Machine Learning

Machine learning continues to be an ever more vital component of our lives and ecosystem, whether we’re applying the techniques to answer research or business problems or in some cases even predicting the future. Machine learning models need to give accurate predictions in order to create real value for a given industry or domain.

While training a model is one of the key steps in the Data Science Project Life Cycle, how the model generalizes on unseen data is an equally important aspect that should be considered in every Data Science Project Life Cycle. We need to know whether it works and, consequently, if we can trust its predictions. Could the model be merely memorizing the data it is fed with, and therefore unable to make good predictions on future samples, or samples that it hasn’t seen before?

Let's understand the importance of evaluation with a simple example. Two students, Ramesh and Suresh, are preparing for the CAT exam to get into the top IIMs (Indian Institutes of Management). They are quite good friends, shared a room during preparation, and put in an equal amount of hard work while solving numerical problems.

They both prepared for almost the same number of hours for the entire year and appeared in the final CAT exam. Surprisingly, Ramesh cleared it, but Suresh did not. When asked, we got to know there was one difference in their preparation strategies: Ramesh had joined a test series course where he tested his knowledge and understanding by taking mock exams, then evaluated which areas he was lagging in and made the necessary adjustments to his preparation cycle in order to do well in those areas. Suresh, however, was confident and just kept training himself without ever testing his preparation.

Like the situation above, we can train a machine learning algorithm extensively with many parameters and new techniques, but if we skip the evaluation step, we cannot trust the model to perform well on unseen data. In this article, we explain the importance of bias, variance, and the trade-off between them, in order to understand how well a machine learning model generalizes to new, previously unseen data.

Training of Supervised Machine Learning

Bias

Bias is the difference between the predicted value and the expected value, i.e. how far the predicted values are from the actual values. During the training process the model makes certain assumptions about the training data provided. When it is later introduced to testing/validation data or other unseen data, these assumptions may not hold.

If we use a large number of nearest neighbors in the K-Nearest Neighbors algorithm, the model may effectively decide that some parameters do not matter at all. For example, it may behave as if only two predictor variables are enough to classify a data point, even though we have more than 10 variables.

Such a model makes very strong assumptions that the other parameters do not affect the outcome at all. You can think of it as a model that captures only a simple relationship when the data points clearly indicate a more complex one.

When the model has high bias error, it results in a very simplistic model that does not capture the complexity of the data well, leading to underfitting.

Variance

Variance occurs when the model performs well on the training dataset but does not do well on unseen data; it happens when the model treats the fluctuations, i.e. the noise, in the data as something to learn from. Because it learns too much from the noise inside the training data, it fails to perform as expected on unseen data.

Continuing the example from the bias section, if the model learns that all ten predictor variables are important for classifying a data point, it tends to have high variance. You can think of it as the model trying to capture every minute detail, making it overly complex and causing it to fail on unseen data.

When a model has High Bias error, it underfits the data and makes very simplistic assumptions on it. When a model has High Variance error, it overfits the data and learns too much from it. When a model has balanced Bias and Variance errors, it performs well on the unseen data.

Bias-Variance Trade-off

Based on the definitions of bias and variance, there is a clear trade-off between them when it comes to model performance. A model will have high error if it has very high bias and low variance, and it will also have high error if it has high variance and low bias.

A model that strikes a balance between the bias and variance can minimize the error better than those that live on extreme ends.

We can tell that a model has high bias from the following signs:

  1. We tend to get high training errors.
  2. The validation error or test error will be similar to the training error.

We can tell that a model has high variance from the following signs:

  1. We tend to get low training error
  2. The validation error or test error will be very high.

We can fix high bias using the following steps:

  1. We need to gather more input features, or even try to create a few using feature engineering techniques.
  2. We can add polynomial features in order to increase the model's complexity.
  3. If we are using any regularization terms in our model, we can try to reduce them.

We can fix high variance using the following steps:

  1. We can gather more training data so that the model learns the patterns rather than the noise.
  2. We can try to reduce the number of input features or apply feature selection.
  3. If we are using any regularization terms in our model, we can try to increase them.
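To make the trade-off concrete, here is a minimal, hedged sketch that fits polynomials of increasing degree to noisy synthetic data and compares training and test error: a low degree underfits (high bias), a very high degree overfits (high variance). The data and the chosen degrees are arbitrary choices for illustration.

```python
# Minimal sketch of under- and overfitting with polynomial regression.
# The synthetic data and degrees are assumptions made for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 60).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.3, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for degree in (1, 4, 15):   # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
    print(degree,
          mean_squared_error(y_tr, model.predict(X_tr)),   # training error
          mean_squared_error(y_te, model.predict(X_te)))   # test error
```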

Conclusion

In this article, we got to know the importance of the evaluation step in the Data Science Project Life Cycle, definitions of Bias and Variance, the trade-off between them and the steps we can take to fix the Underfitting and Overfitting of a Machine Learning Model.

Connections Between Data Science & Finance

Image Source: pixabay.com

The world of finance is changing at an unprecedented rate. Data science has completely altered the face of traditional finance management. Though data has long been a critical component of finance, the introduction of big data and artificial intelligence has created new tools that are strengthening the predictive ability of many financial institutions.

These changes have led to a rapid increase in the need for financial professionals with data science skills. Nearly every sector of finance, from the stock market and retirement accounts to credit score calculation, is making greater use of data science and data management, and understanding the interplay between data and finance has become a key skill gap.

Likewise, they have opened many doors for those that are interested in analyzing their personal finances. More and more people are taking their finances into their own hands and using the data tools available to make the best decisions for them. In today’s world, the sky’s the limit for financial analysis and management!

The Rise of the Financial Analyst

Financial analysts are the professionals who are responsible for the general management of money and investments both in an industrial and personal finance realm. Typically a financial analyst will spend time reviewing and understanding the overall stock portfolio and financial standing of a client including:

  • Stocks
  • Bonds
  • Retirement accounts
  • Financial history
  • Current financial statements and reports
  • Overarching business and industry trends

From there, the analyst will provide a recommendation with data-backed findings to the client on how they should manage their finances going into the future.

As you can imagine, with all of this data to analyze, the need for financial analysts to have a background or understanding of data science has never been higher! Finance jobs requiring skills such as artificial intelligence and big data increased by over 60% in the last year. Though these new jobs are typically rooted in computer science and data analytics, most professionals still need a background in financial management as well.

The unique skills required for a position like this means there is a huge (and growing) skills gap in the financial sector. Those professionals that are qualified and able to rise to fill the need are seeing substantial pay increases and hundreds of job opportunities across the nation and the globe.

A Credit Score Example

But where does all of this data science and professional financial account management come back to impact the everyday person making financial decisions? Surprisingly, pretty much in every facet of their lives. From things like retirement accounts to faster response times in financial analysis to credit scores — data science in the financial industry is like a cloaked hand pulling the strings in the background.

Take, for example, your credit score. It is one of the single most important numbers in your life, for better or worse. A high credit score can open all sorts of financial doors and get you better interest rates on the things you need loans for. A bad score can limit the amount lenders are willing to qualify you for and increase the interest rate substantially, meaning you will end up paying far more money in the end.

Your credit score is calculated from several factors. Though we understand the basic outline of what goes into the formula, the finer points are somewhat of a mystery. We know the big factors are:

  • Personal financial history
  • Debt-to-credit ratio (credit utilization)
  • Length of credit history
  • Number of new credit hits or applications

All of this data and number crunching can have a real impact on your life, just one example of how data in the financial world is relevant.

Using Data Science in Personal Finance

Given all this information, you might be thinking to yourself that what you really need is a certificate in data science. Certainly, that will open a number of career doors for you in a multitude of realms, not just the finance industry. Data science is quickly becoming a cornerstone of how most major industries do business.

However, that isn’t necessarily required to get ahead on managing your personal finances. Just a little information about programs such as Excel can get you a long way. Some may even argue that Excel is the original online data management tool as it can be used to do things like:

  • Create schedules
  • Manage budgets
  • Visualize data in charts and graphs
  • Track revenues and expenses
  • Conditionally format information
  • Manage inventory
  • Identify trends in large data sets

There are even several tools and guides out there that will help you to get started!
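For readers who prefer code to spreadsheets, the same kind of budget summary can be sketched in Python with pandas; the categories and amounts below are made up purely for illustration.

```python
# Minimal sketch of budget tracking with pandas; all numbers are invented.
import pandas as pd

expenses = pd.DataFrame({
    "month":    ["Jan", "Jan", "Feb", "Feb"],
    "category": ["Rent", "Groceries", "Rent", "Groceries"],
    "amount":   [1200, 350, 1200, 410],
})

# Monthly totals per category, similar to a pivot table in Excel.
summary = expenses.pivot_table(index="month", columns="category",
                               values="amount", aggfunc="sum")
print(summary)
print(expenses.groupby("month")["amount"].sum())  # total spend per month
```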

***

Data analysis and management is here to stay, especially when it comes to the financial industry. The tools are likely to continue to become more important and skills in their use will increase in value. Though there are a lot of professional skills using big data to manage finances, there are still a lot of tools out there that are making it easier than ever to glean insights into your personal finances and make informed financial decisions.

K Nearest Neighbour For Supervised Learning

The K-Nearest Neighbour (KNN) algorithm is an easy-to-implement yet powerful supervised machine learning algorithm used for both classification and regression problems. However, you will see a wide range of its applications in classification problems across various industries.

If you've been shopping a lot on e-commerce sites like Amazon, Flipkart, or Myntra, or love watching web series on Netflix and Amazon Prime, one common thing you've always noticed is recommendations.

Are you wondering how they recommend items that match your taste? They use supervised learning methods such as KNN to find out what you may need next while you're buying and recommend a few more products.

Imagine you're looking for an iPhone to purchase. When you scroll down a little, you see iPhone cases and tempered glass protectors with a note saying, "People who purchased an iPhone have also purchased these items." The same applies to Netflix and Amazon Prime: when you finish a show or a series, they give you recommendations of the same genre. Such systems can use KNN-style supervised learning to classify items and deliver the best user experience.

Advantages Of KNN

  • Quick to set up, since there is no explicit training step
  • Simple Algorithms
  • High Accuracy
  • Versatile – best use for Regression and Classification.
  • Doesn’t make any assumptions about data.

Where KNN Are Mostly Used

  • Simple Recommendation Models
  • Image Recognition Technology
  • Decision-Making Models
  • Calculating Credit Rating

Choosing The Right Value For K

To choose the right value of K, you have to run the KNN algorithm several times with different values of K and select the value that minimizes the number of errors you encounter while remaining stable; a small sketch of this selection loop is given after the walkthrough below.

Your Step-By-Step Guide For Choosing The Value Of K

  • With K = 1, the prediction depends on a single nearest neighbour. In the example, the query point sits among many elements of class A (-) and class B (+), but its single nearest neighbour happens to be blue, so KNN incorrectly predicts blue even though red is the more reasonable class.
  • With K = 2, the two nearest neighbours are one (-) and one (+), one from class A and one from class B, so the vote is tied and the prediction remains ambiguous.
  • With K = 3, the three nearest neighbours are two red points and one blue point. The majority vote is red, so KNN correctly predicts red, and the answer is more stable than with the previous values of K.
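Here is the promised minimal sketch of that K-selection loop using scikit-learn; the iris dataset and the candidate K values are arbitrary choices for illustration.

```python
# Minimal sketch of choosing K by comparing validation accuracy across values.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for k in (1, 3, 5, 7, 9, 11):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(k, knn.score(X_val, y_val))   # pick the K with stable, high accuracy
```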

Conclusion

KNN works by computing the distance between a query point and all the elements in the database and selecting the K closest ones. For classification, KNN then predicts the most frequent label among those neighbours; for regression, it predicts the average of their values.

Spiky cubes, Pac-Man walking, empty M&M’s chocolate: curse of dimensionality

This is the first article of the article series Illustrative introductions on dimension reduction.

“Curse of dimensionality” refers to the difficulties that arise in machine learning when the dimension of the data is high. In short, if the data has too many features, such as “weight,” “height,” “width,” “strength,” “temperature,” and so on, that can undermine the performance of machine learning. This may run contrary to the image you get from the terms “big” data or “deep” learning: you might assume that the more hints you have, the better machine learning performs. There are several reasons for the curse of dimensionality, and in this article I am going to introduce the two major ones below.

  1. High dimensional data usually has rich expressiveness, but the available training data is usually too sparse to exploit it.
  2. The behaviors of data points in high dimensional space are totally different from our common sense.

Through these topics, you will see that you always have to think about which features to use considering the number of data points.

*From now on I am going to talk only about Euclidean distance. If you are not sure what Euclidean distance means, just keep in mind that it is the type of distance most people would have learnt in normal compulsory education.


1. Number of samples and degree of dimension

The most straightforward drawback of adding many features, or increasing the dimension of the data, is the growth of computational cost. More importantly, however, you always have to think about the number of dimensions in relation to the number of data points you have. Let me take a simple example from the book “Pattern Recognition and Machine Learning” by C. M. Bishop (PRML): measurements of a pipeline. The figure below shows a comparison plot of 3 classes (red, green and blue), with parameter x_7 plotted against parameter x_6, out of 12 parameters.

* The meaning of data is not important in this article. If you are interested please refer to the appendix in PRML.

Assume that we are interested in classifying the black cross into one of the three classes. One of the most naive approaches to this classification is to divide the graph into grids and label each grid depending on the number of samples of each class inside it (the labels are shown as colors on the right side of the figure). You can then classify the test sample, the black cross, into the class of the grid it falls in. Thereby the cross is classified as red.

Source: C.M. Bishop, “Pattern Recognition and Machine Learning,” (2006), Springer, pp. 34-35

As mentioned for the figure above, we used only two features out of 12 in total. When the total number of data points is fixed and you add the remaining ten axes/features one after another, what would happen? Let’s see what “adding axes/features” means. For 1, 2, or 3 dimensional grids you can still visualize them, and as you can see from the figure below, if you use 10 bins per axis in 1, 2, and 3 dimensional spaces, the numbers of small regions in the grids are respectively 10^1 = 10, 10^2 = 100, and 10^3 = 1000. Even though you cannot visualize it anymore, you can make grids for more than 3 dimensional data. If you keep increasing the degree of dimension, the number of grid cells grows exponentially and can soon surpass the number of training data points. That means there would be a lot of empty cells in such high dimensional grids. The classification method above, coloring each grid and classifying unknown samples according to the colors of the grids, then no longer works, because most grids would be empty.
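
A quick way to see this explosion is to count the grid cells directly. The sketch below is a minimal illustration, assuming 10 bins per axis and a hypothetical fixed budget of 1000 training samples.

```python
# A quick sketch of the point above: with 10 bins per axis, the number of
# grid cells is 10**D, which quickly dwarfs any fixed number of samples.
n_samples = 1000   # hypothetical fixed training set size
bins = 10

for D in range(1, 7):
    cells = bins ** D
    print(f"D={D}: {cells} cells, on average {n_samples / cells:.4f} samples per cell")
```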

* If you are still puzzled by the idea of “more than 3 dimensional grids,” do not think too much about it for now. It is enough if you get some understanding of high dimensional data after reading this whole article.

Source: Goodfellow and Yoshua Bengio and Aaron Courville, Deep Learning, (2016), MIT Press, p. 153

I said the method above is the most naive way, but other classical classification methods, for example the k-nearest neighbors algorithm, are more or less based on a similar idea. Many classical machine learning algorithms rely on a smoothness prior, or local constancy prior: in short, you do not expect the data to change much within a small region, so you can expect unknown samples to be similar to the data in their vicinity. But that soon becomes problematic when the dimension of the data is higher, because the training data become sparse as the volume of the multidimensional space grows exponentially, as mentioned above, and sometimes you cannot find any training data around a test point. Moreover, in high dimensional data you cannot treat distance in the same way as in lower dimensional space: the ideas of “close,” “nearby,” or “vicinity” become more obscure. That point is related to the next topic: the intuition we have cultivated in everyday life is not applicable to higher dimensional data.

2. Bizarre characteristics of high dimensional data

We form our sense of space in 3-dimensional ways in everyday life. Even though we can visualize only 1, 2, or 3 dimensional data, we can generalize ideas from 1, 2, or 3 dimensions to higher dimensions, for example 4 dimensional cubes, 100 dimensional spheres, or orthogonality in 255 dimensional space. Again, you cannot exactly visualize those ideas, and for many people such high dimensional phenomena are just imaginary matters on blackboards. Those high dimensional ideas are designed to retain the same defining conditions as in 1, 2, or 3 dimensional space. Let’s take the example of spheres in spaces of several dimensions. A general sphere in any D-dimensional space can be defined as the set of all \boldsymbol{x} such that |\boldsymbol{x} - \boldsymbol{c}| = r, where \boldsymbol{c} is the center point and r is the radius. When \boldsymbol{x} is 2-dimensional, the spheres are called “circles.” When \boldsymbol{x} is 3-dimensional, the spheres are called “spheres” in everyday life, unless the conversation happens in a college cafeteria among students of the mathematics department. And when \boldsymbol{x} is D-dimensional, they are called D-balls, and again, this is just an imaginary object on a blackboard.

* Vectors and points are almost the same here because all the vectors are denoted as “arrows” from the origin to the sample data points. The only difference is that when you use vectors, you have to consider their directions.

* A “D-ball” is usually called an “n-ball,” i.e. a ball in n-dimensional space. But please let me use the term “D-ball” in this article.

Not only spheres but also many other ideas have been generalized to D-dimensional space, and many of them are indispensable for data science. But there is one severe problem: the behaviors of data in high dimensional space are quite different from those in two or three dimensional space. To be concrete, in high dimensional space cubes are spiky, you have to move like Pac-Man, and M & M’s chocolate looks empty inside but tastes normal.

2.1: spiky cubes
Let’s take a look at an elementary-school-level example of geometry first. Assume that you have several unit squares or unit cubes like below. In each of them a circle or sphere with diameter 1 is inscribed. The length of a diagonal line in each square is \sqrt{2}, and that in each cube is \sqrt{3}.

If you stack the squares or cubes as below, what is the diameter of the blue circle or sphere nestled in the central gap, touching all 4 orange circles or all 8 orange spheres?

The answers are, the diameter of the blue circle is \sqrt{2} - 1, and the diameter of the blue sphere is \sqrt{3} - 1.

Next let’s think about the same situation in higher dimensional space. Assume that unit D-dimensional hypercubes are stacked, and in each of them a D-ball with diameter 1 is inscribed, touching all the faces from inside. Then what is the diameter of the D-ball at the center that touches all of these unit D-balls? Given the results above, it can be predicted that its diameter is \sqrt{D} - 1. If that is true, there is one strange point: \sqrt{D} - 1 soon surpasses 2, which means that in the chart above the blue sphere would stick out of the stacked cubes. That sounds like a paradox, but with one hypothesis the phenomenon makes sense: cubes become more spiky as the degree of dimension grows. This hypothesis is a natural deduction because the diagonal lines of hypercubes get longer, while the center of each face of a hypercube still touches the unit D-ball with diameter 1 inscribed inside it.
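
* For the record, here is a short sanity check of the \sqrt{D} - 1 claim, under the assumption that the 2^D unit hypercubes are stacked symmetrically around the origin, as in the 2- and 3-dimensional figures. The center of each inscribed D-ball is then \boldsymbol{c} = (\pm\frac{1}{2}, \dots, \pm\frac{1}{2}) with radius \frac{1}{2}, so its distance from the origin is |\boldsymbol{c}| = \sqrt{D \cdot \frac{1}{4}} = \frac{\sqrt{D}}{2}. The central blue D-ball touching all of them therefore has radius \frac{\sqrt{D}}{2} - \frac{1}{2}, i.e. diameter \sqrt{D} - 1, which exceeds 2 once D is 10 or more.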

If you stack 4 hypercubes, the blue sphere touching the inscribed balls will not stick out of the stacked hypercubes anymore, as in the figure below.

*Of course you cannot visualize what is going on in D-dimensional space, so the figure below is just a pseudo-simulation of D-dimensional space in our 3-dimensional sense. I guess you would have to stack more than four hypercubes in higher dimensions, but you cannot easily imagine what goes on in such a space anymore.


*You can confirm that the hypercube gets more spiky as the degree of dimension grows by comparing the volume of the hypercube with the volume of the D-ball inscribed inside it. Thereby you can show that the volume of the hypercube concentrates in its corners. Plus, as I mentioned, the longest diagonal of the hypercube gets longer as the degree of dimension increases. That is why the hypercube is said to be spiky. For a mathematical proof, please check Exercise 1.19 of PRML.
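
If you want to check this numerically rather than by hand, the following sketch compares the two volumes, using the D-ball volume formula that appears later in this article; the function name ball_volume is just an illustrative helper.

```python
# A small sketch of the comparison suggested above: volume of the D-ball with
# diameter 1 (radius 1/2) inscribed in the unit hypercube, versus the
# hypercube's volume of 1. The ratio collapses toward zero as D grows,
# i.e. almost all of the hypercube's volume sits in its corners.
import math

def ball_volume(D, r):
    return math.pi ** (D / 2) / math.gamma(D / 2 + 1) * r ** D

for D in [1, 2, 3, 5, 10, 20, 50]:
    print(f"D={D}: inscribed ball fills {ball_volume(D, 0.5):.6f} of the unit hypercube")
```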

2.2: Pac-Man walking

The next intriguing phenomenon in high dimensional space is that most pairs of vectors are orthogonal. In other words, if you select two random vectors in a high dimensional space, the angle between them is most likely close to 90^\circ. Let’s see the general meaning of the angle between two vectors in a space of any dimension. Assume that the angle between two vectors \boldsymbol{u} and \boldsymbol{v} is \theta; then \cos\theta is calculated as \cos\theta = \frac{<\boldsymbol{u}, \boldsymbol{v}>}{|\boldsymbol{u}||\boldsymbol{v}|}. In 1, 2, or 3 dimensional space you can actually see the angle, but the same definition gives an angle in higher dimensions, which you cannot visualize anymore. Angles are also sometimes used as a similarity measure between two vectors.

* <\boldsymbol{u}, \boldsymbol{v}> is the inner product of \boldsymbol{u}, and \boldsymbol{v}.

Assume that you generate a pair of points inside a D-dimensional unit sphere and make two vectors \boldsymbol{u} and \boldsymbol{v} by connecting the origin with those two points respectively. When D is 2, that is, when the spheres are circles, every \theta is generated with equal probability, as in the chart below. That probably matches your intuition. How about in 3-dimensional space? In fact the distribution of \theta is no longer uniform: \theta = 90^\circ is the most likely value. As I explain in the figure below, you can see why by comparing the area of a cross section of a hemisphere with the area of a cone whose vertex is the center of the sphere.

I generated 10000 random pairs of points inside a D-dimensional unit sphere and calculated the angle between them. More precisely, I repeatedly generated two D-dimensional vectors \boldsymbol{u} and \boldsymbol{v}, whose elements are random values between -1 and 1, and calculated the angle between them, repeating this process 10000 times. The charts below are histograms of the angles between the generated pairs of vectors in 2, 3, 50, and 100 dimensional space respectively.

As I explained above, in 2-dimensional space the distribution of \theta is almost uniform. However, the distribution concentrates a little around 90^\circ in 3-dimensional space. You can see that the higher the degree of dimension, the more the angles between generated vectors concentrate around 90^\circ. That means most pairs of vectors in high dimensional space are close to orthogonal. Movements are also sequences of vectors, so when most pairs of movement vectors are orthogonal, you can only move like Pac-Man in such a space.
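
For readers who want to reproduce the histograms, here is a minimal sketch of the experiment described above; the random seed and the choice of NumPy are incidental, and the printed summary statistics merely stand in for the plotted histograms.

```python
# A sketch reproducing the experiment above: draw random D-dimensional
# vectors with components in [-1, 1] and look at the angle between pairs.
import numpy as np

rng = np.random.default_rng(0)

def random_angles(D, n_pairs=10000):
    u = rng.uniform(-1, 1, size=(n_pairs, D))
    v = rng.uniform(-1, 1, size=(n_pairs, D))
    cos = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

for D in [2, 3, 50, 100]:
    angles = random_angles(D)
    print(f"D={D}: mean angle {angles.mean():.1f} deg, std {angles.std():.1f} deg")
```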

Source: https://edition.cnn.com/style/article/pac-man-40-anniversary-history/index.html

* Of course I am talking about the arcade Pac-Man game, not Pac-Man in Super Smash Bros. Retro RPG video games might have more similar playability, but in high dimensional space it is also difficult to turn back. At any rate, I think you have understood that it is difficult even to move smoothly in high dimensional space, just like the first notorious Resident Evil on the first PlayStation console also had terrible playability.

2.3: empty M & M’s chocolate

Let’s think about the proportion of the volume of the outermost \epsilon surface of a general sphere with radius r. First, in 2-dimensional space, spheres are circles. The area of the brown part of the circle below is \pi r^2. To calculate the area of the \epsilon \cdot r thick surface of the circle, you only have to subtract the area \pi \{ (1 - \epsilon)\cdot r\} ^2. When \epsilon = 0.01, the area of the outermost surface is \pi r^2 - \pi (0.99\cdot r)^2, and its proportion to the area of the whole circle is \frac{\pi r^2 - \pi (0.99\cdot r)^2}{\pi r^2} = 0.0199.

In the case of 3-dimensional space, the volume of a sphere with radius r is \frac{4}{3} \pi r^3, so the proportion of the \epsilon surface is calculated in the same way: \frac{\frac{4}{3} \pi r^3 - \frac{4}{3} \pi (0.99\cdot r)^3}{\frac{4}{3}\pi r^3} = 0.0297. Compared to the case in 2-dimensional space, the proportion is a little bigger.

How about in D-dimensional space? We have seen that even in D-dimensional space the surface of a sphere, I mean a D-ball, can be defined as the set of points whose distance from the center is exactly r. And it is known that the volume of a D-ball is given by the formula below.

V_D(r) = \frac{\pi ^{\frac{D}{2}}}{\Gamma (\frac{D}{2} + 1)} r^D

\Gamma () is called the gamma function, but in this article it is not so important. The most important point is that, for any D-ball, the volume depends on the radius only through the factor r^D. That means the proportion of the outer \epsilon surface of a D-ball is calculated as \frac{r^D - \{ (1 - \epsilon)\cdot r\} ^D}{r^D} = 1 - (1 - \epsilon)^D. When \epsilon is 0.01, the proportion of the 1% surface of the D-ball changes with D as in the chart below.

* And of course, when D is 2, \frac{\pi ^{(\frac{D}{2})}}{\Gamma (\frac{D}{2} + 1)} = \pi, and when D is 3, \frac{\pi ^{(\frac{D}{2})}}{\Gamma (\frac{D}{2} + 1)} = \frac{4}{3} \pi.

You can see that around 90% of the volume is already concentrated in the very thin 1% surface of the D-ball by the time D reaches about 230, and for D over 400 the proportion exceeds 98%. That is why, in high dimensional space, M & M’s chocolate looks empty but tastes normal: all the chocolate is concentrated beneath the sugar coating.
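
You can reproduce the numbers behind that chart with a few lines, using the proportion 1 - (1 - \epsilon)^D derived above; the chosen values of D are only examples.

```python
# The proportion of a D-ball's volume lying in its outermost 1% shell,
# following the formula above: 1 - (1 - epsilon)**D.
epsilon = 0.01
for D in [2, 3, 50, 100, 230, 400]:
    shell = 1 - (1 - epsilon) ** D
    print(f"D={D}: {shell:.1%} of the volume is in the outer 1% surface")
```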

More interestingly, whichever point you choose as the center of a sphere with radius r, the other points are squashed toward the surface of that sphere, even if all the data points are uniformly distributed. This situation is problematic for classical machine learning algorithms, which are often based on Euclidean distances between pairs of sample points: if you move out from the center toward another sample point, the probability of finding that point within a radius of (1 - \epsilon)\cdot r is almost zero, but once you reach the outermost \epsilon part of the sphere, most of the data points are there. However, for any data point on that surface, all the other data points are distant in much the same way.

Inside M & M’s chocolate is a mysterious world.

Source: https://hipwallpaper.com/mms-wallpapers/

You have seen that using high dimensional data can be problematic in many ways. Data science and machine learning are largely based on one idea: that you can find a lower dimensional, meaningful, and easier structure in the data. In the next articles I am going to introduce some famous dimension reduction algorithms, and hopefully give some deeper insights into these algorithms in straightforward ways.

* I could not cover the relationship between the variance and bias of data here. This is also a very important factor when you think about the dimensionality of data. I hope I can write about this topic someday; you can also look it up if you are interested.

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward and as precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

Illustrative introductions on dimension reduction

“What is your image on dimensions?”

…That might be a cheesy question to ask readers of Data Science Blog, but most people with no scientific background would answer, “One dimension is a line, two dimensions are a plane, and we live in a three-dimensional world.” If you then ask, “How about the fourth dimension?” many people would answer, “Time?”

You can find books or writings about dimensions in various fields, and you can use the word “dimension” in normal conversations in many contexts.

*In Japanese, if you say “He likes the two-dimensional world,” it means he prefers anime characters to real women, as is often the case with Japanese computer science students.

The meaning of “dimension” depends on the context, but in data science the dimension is usually the number of columns, i.e. features, of your Excel data.

When you study data science or machine learning, you should usually start by understanding the algorithms with 2 or 3 dimensional data, and you can then apply those ideas to data of any dimension D. But of course you cannot visualize D dimensional data anymore, and you always have to be careful about what happens when you increase the degree of dimension.

Conversely, it is also important to reduce dimension in order to understand abstract high dimensional structures in 2 or 3 dimensional space, which is close to our everyday sense. That means dimension reduction is one powerful way of data visualization.

In this blog series I am going to explain the meaning of dimension itself in the machine learning context and algorithms for dimension reduction, such as PCA, LDA, and t-SNE, with 2 or 3 dimensional visible data. Along with that, I am going to delve into the meaning of the calculations so that you can understand them in a more everyday-life sense.
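
As a small preview of what such algorithms do, the sketch below projects a 4-dimensional dataset down to 2 dimensions with PCA purely for visualization; the dataset (Iris) and the libraries (scikit-learn, matplotlib) are my own illustrative choices, not part of this series’ prescribed setup.

```python
# A tiny preview of dimension reduction as visualization: project
# 4-dimensional data down to 2 dimensions with PCA and plot it.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)            # 4-dimensional data
X_2d = PCA(n_components=2).fit_transform(X)  # reduced to 2 dimensions

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
plt.xlabel("principal component 1")
plt.ylabel("principal component 2")
plt.show()
```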

This article series is going to be roughly divided into the contents below.

  1. Curse of Dimensionality
  2. Rethinking linear algebra: visualizing linear transformations and eigen vector
  3. The algorithm known as PCA and my taxonomy of linear dimension reductions
  4. KL expansion and subspace method (to be published soon)
  5. Autoencoder as dimension reduction (to be published soon)
  6. t-SNE (to be published soon)

I hope you will see that reducing dimension is one of the fundamental approaches in data science and machine learning.

Data Science in Engineering Process - Product Lifecycle Management

How to develop digital products and solutions for industrial environments?

The Data Science and Engineering Process in PLM.

Huge opportunities for digital products are accompanied by huge risks

Digitalization is about to profoundly change the way we live and work. The increasing availability of data, combined with growing storage capacities and computing power, makes it possible to create data-based products, services, and customer-specific solutions that generate insight with value for the business. Successful implementation requires systematic procedures for managing and analyzing data, but today such procedures are not covered in PLM processes.

From our experience in industrial settings, organizations often start by processing whatever data happens to be available. This data often does not fully cover the situation of interest and typically has poor quality, and in turn the results of the data analysis are misleading. In industrial environments, the reliability and accuracy of results are crucial. Therefore, an enormous responsibility comes with the development of digital products and solutions. Unless there are systematic procedures in place to guide data management and data analysis across the development lifecycle, many promising digital products will not meet expectations.

Various methodologies exist but no comprehensive framework

Over the last decades, various methodologies focusing on specific aspects of how to deal with data were promoted across industries and academia. Examples are Six Sigma, CRISP-DM, JDM standard, DMM model, and KDD process. These methodologies aim at introducing principles for systematic data management and data analysis. Each methodology makes an important contribution to the overall picture of how to deal with data, but none provides a comprehensive framework covering all the necessary tasks and activities for the development of digital products. We should take these approaches as valuable input and integrate their strengths into a comprehensive Data Science and Engineering framework.

In fact, we believe it is time to establish an independent discipline to address the specific challenges of developing digital products, services and customer specific solutions. We need the same kind of professionalism in dealing with data that has been achieved in the established branches of engineering.

Data Science and Engineering as new discipline

Whereas the implementation of software algorithms is adequately guided by software engineering practices, there is currently no established engineering discipline covering the important tasks that focus on the data and how to develop causal models that capture the real world. We believe the development of industrial grade digital products and services requires an additional process area comprising best practices for data management and data analysis. This process area addresses the specific roles, skills, tasks, methods, tools, and management that are needed to succeed.

Figure: Data Science and Engineering as new engineering discipline

More than in other engineering disciplines, the outputs of Data Science and Engineering are created in repetitions of tasks in iterative cycles. The tasks are therefore organized into workflows with distinct objectives that clearly overlap along the phases of the PLM process.

Feasibility of Objectives
  Understand the business situation, confirm the feasibility of the product idea, clarify the data infrastructure needs, and create transparency on opportunities and risks related to the product idea from the data perspective.
Domain Understanding
  Establish an understanding of the causal context of the application domain, identify the influencing factors with impact on the outcomes in the operational scenarios where the digital product or service is going to be used.
Data Management
  Develop the data management strategy, define policies on data lifecycle management, design the specific solution architecture, and validate the technical solution after implementation.
Data Collection
  Define, implement and execute operational procedures for selecting, pre-processing, and transforming data as basis for further analysis. Ensure data quality by performing measurement system analysis and data integrity checks.
Modeling
  Select suitable modeling techniques and create a calibrated prediction model, which includes fitting the parameters or training the model and verifying the accuracy and precision of the prediction model.
Insight Provision
  Incorporate the prediction model into a digital product or solution, provide suitable visualizations to address the information needs, evaluate the accuracy of the prediction results, and establish feedback loops.

Real business value will be generated only if the prediction model at the core of the digital product reliably and accurately reflects the real world, and if the results allow not only correct but also helpful conclusions to be drawn. Now is the time to embrace these unique opportunities by establishing professionalism in data science and engineering.

Authors

Peter Louis                               

Peter Louis is working at Siemens Advanta Consulting as a Senior Key Expert. He has 25 years’ experience in Project Management, Quality Management, Software Engineering, Statistical Process Control, and various process frameworks (Lean, Agile, CMMI). He is an expert on SPC, KPI systems, data analytics, and prediction modelling, and a Six Sigma Black Belt.


Ralf Russ    

Ralf Russ works as a Principal Key Expert at Siemens Advanta Consulting. He has more than two decades of experience rolling out frameworks for the development of industrial-grade, high-quality products, services, and solutions. He is a Six Sigma Master Black Belt and passionate about process transparency, optimization, anomaly detection, and prediction modelling using statistics and data analytics.