Posts

The Inside Out of ML Based Prescriptive Analytics

With the constantly growing amount of data, more and more companies are shifting towards analytic solutions. Analytic solutions help extract meaning from the huge volumes of data available, thus improving decision making.

Decision making is an important aspect of any business, and technologies like Machine Learning are enhancing it further. The growing use of Machine Learning has changed how prescriptive analytics is done. To get the most out of these efforts, companies need accurate historical and present data, because that data is the raw material of analytics. This article describes the inside out of Machine Learning-based prescriptive analytics.

Phases of business analytics

Descriptive analytics, predictive analytics, and prescriptive analytics are the three phases of business analytics. Descriptive analytics, the first phase, deals with past performance: historical data is mined to understand the reasons behind past success and failure. It is a kind of post-mortem analysis, and most management reporting in sales, marketing, operations, finance, and so on makes use of it.

The second phase is predictive analytics, which answers the question of what is likely to happen. Historical data is combined with rules, algorithms, and other techniques to determine a possible future outcome or the likelihood of a situation occurring.

The final phase is prescriptive analytics. It can continually take in new data to re-predict and re-prescribe, which improves the accuracy of the predictions and leads to better decision options. All three phases of analytics can be performed through professional services, technology, or a combination of both.

More about prescriptive analytics

The analysis of business activities goes through several phases, and prescriptive analytics is one of them. It is the third phase of business analytics, coming after descriptive and predictive analytics, and it entails the application of mathematical and computational sciences. It uses the results of descriptive and predictive analysis to suggest decision options: it goes beyond predicting future outcomes, suggests actions that benefit from those predictions, and shows the implications of each decision option. It anticipates not only what will happen and when, but also why it will happen.
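To make the idea concrete, here is a minimal, hypothetical sketch in Scala: given a demand forecast produced by some predictive model, the prescriptive step enumerates decision options (how much stock to order), scores the implication of each one, and recommends the best. All figures and the scoring function are illustrative assumptions, not a real model.

// Toy prescriptive step: turn a demand forecast into a recommended action.
// All numbers (forecast, prices, costs) are made up for illustration.
object PrescriptiveSketch extends App {

  val forecastDemand = 120.0   // assumed output of a predictive model
  val unitCost       = 4.0     // cost per unit ordered
  val unitPrice      = 9.0     // revenue per unit sold

  // Decision options: candidate order quantities.
  val options = Seq(80, 100, 120, 140, 160)

  // Implication of each option: expected profit if demand matches the forecast.
  def expectedProfit(order: Int): Double =
    unitPrice * math.min(order, forecastDemand) - unitCost * order

  val scored = options.map(o => (o, expectedProfit(o)))
  scored.foreach { case (o, p) => println(f"order $o%3d -> expected profit $p%8.2f") }

  val (best, _) = scored.maxBy(_._2)
  println(s"recommended action: order $best units")
}

In a real setting the forecast would come with uncertainty and the scoring would weigh risks, constraints, and costs, but the shape of the step stays the same: predict, enumerate options, show implications, recommend.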

ML-based prescriptive analytics

Since it comes directly before prescriptive analytics, predictive analytics is often confused with it. In fact, predictive analysis leads to prescriptive analysis: a Machine Learning-based prescriptive analytics pipeline goes through an ML-based predictive analysis first. It therefore makes sense to consider ML-based predictive analytics first.

ML-based predictive analytics

Many obstacles prevent businesses from achieving predictive analysis capabilities, and Machine Learning can be a great help in overcoming them. Machine Learning and Artificial Intelligence algorithms help businesses uncover and exploit new statistical patterns, and these patterns form the backbone of predictive analysis. E-commerce, marketing, customer service, and medical diagnosis are some of the prospective use cases for Machine Learning-based predictive analytics.

In e-commerce, machine learning can help predict a customer’s usual choices, so that offers can be presented according to his or her likes and dislikes. It can also help detect fraudulent transactions. Similarly, B2B marketing makes good use of machine learning-based predictive analytics, and customer service and medical diagnosis benefit from it as well. Predictions and prescriptions based on machine learning can thus boost a wide range of business functions.

Organizations and software development companies are making more and more use of machine learning-based predictive analytics. Advances like neural networks and deep learning algorithms can uncover hidden information, but this requires a well-researched approach; big data and progressive IT systems are also important enabling factors.

Data Science and Predictive Analytics in Healthcare

Doing data science at a healthcare company can save lives. Predicting which patients have a tumor on an MRI, which are at risk of re-admission, or which have misclassified diagnoses in their electronic medical records are all examples of how predictive models can lead to better health outcomes and improve patients’ quality of life. Nevertheless, the healthcare industry presents many unique challenges and opportunities for data scientists.

The impact of data science in healthcare

Healthcare providers have a plethora of important but sensitive data. Medical records include a diverse set of data such as basic demographics, diagnosed illnesses, and a wealth of clinical information such as lab test results. For patients with chronic diseases, there could be a long and detailed history of data available on a number of health indicators due to the frequency of visits to a healthcare provider. Information from medical records can often be combined with outside data as well. For example, a patient’s address can be combined with other publicly available information to determine the number of surgeons that practice near a patient or other relevant information about the type of area that patients reside in.

With this rich data about a patient as well as their surroundings, models can be built and trained to predict many outcomes of interest. One important area of interest is models predicting disease progression, which can be used for disease management and planning. For example, at Fresenius Medical Care (where we primarily care for patients with chronic conditions such as kidney disease), we use a Chronic Kidney Disease progression model that can predict the trajectory of a patient’s condition to help clinicians decide whether and when to proceed to the next stage in their medical care. Predictive models can also notify clinicians about patients who may require interventions to reduce risk of negative outcomes. For instance, we use models to predict which patients are at risk for hospitalization or missing a dialysis treatment. These predictions, along with the key factors driving the prediction, are presented to clinicians who can decide if certain interventions might help reduce the patient’s risk.
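As a rough illustration of that last idea, the sketch below scores a patient’s hospitalization risk with a simple logistic model and reports the top factors driving the score. The model, its weights, and the feature names are entirely made up; a production system would use a properly trained and validated model.

// Hypothetical risk score with "key driving factors", for illustration only.
object RiskFactorsSketch extends App {

  // Assumed model: logistic regression with hand-picked (fake) weights.
  val weights = Map(
    "missedTreatments" -> 0.9,
    "lowAlbumin"       -> 0.7,
    "recentInfection"  -> 1.2
  )
  val bias = -2.0

  // One (fake) patient's standardized feature values.
  val patient = Map(
    "missedTreatments" -> 1.5,
    "lowAlbumin"       -> 0.2,
    "recentInfection"  -> 1.0
  )

  // Per-feature contribution to the linear score, then the logistic risk.
  val contributions = weights.map { case (name, w) => name -> w * patient(name) }
  val risk = 1.0 / (1.0 + math.exp(-(bias + contributions.values.sum)))

  println(f"hospitalization risk: ${risk * 100}%.1f%%")
  println("top factors driving the prediction:")
  contributions.toSeq.sortBy(-_._2).take(2).foreach {
    case (name, c) => println(f"  $name%-16s contribution $c%.2f")
  }
}

The point is merely the output format: a risk number plus the factors that drive it, so a clinician can judge whether an intervention is warranted.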

Challenges of data science in healthcare

One challenge is that the healthcare industry is far behind other sectors in adopting the latest technology and analytics tools. Data scientists should be aware that the data infrastructure and development environment at many healthcare companies will not be at the bleeding edge of the field. However, it also means there are a lot of opportunities for improvement, and even small, simple models can yield vast improvements over current methods.

Another challenge in the healthcare sector arises from the sensitive nature of medical information. Due to concerns over data privacy, it can often be difficult to obtain access to the data a company holds. For this reason, data scientists considering a position at a healthcare company should find out whether there is already an established protocol for data professionals to get access to data. If there isn’t, be aware that simply getting access to the data may be a major effort in itself.

Finally, it is important to keep in mind the end use of any predictive model. In many cases, false negatives and false positives have very different costs. A false negative may be detrimental to a patient’s health, while too many false positives may lead to costly and unnecessary treatments (which, for certain treatments, also harm patients’ health, as well as the economics of care overall). Educating end users about the proper use of predictive models and their limitations is essential. It is also important to make sure the output of a predictive model is actionable. Predicting that a patient is at high risk is only useful if the model output is interpretable enough to explain what factors are putting that patient at risk. Furthermore, if the model is being used to plan interventions, the factors that can be changed need to be highlighted in some way: telling a clinician that a patient is at risk because of their age is not useful if the point of the prediction is to lower risk through intervention.
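One standard way to act on these asymmetric costs is to move the classification threshold instead of using the default 0.5. The sketch below picks the threshold that minimizes total cost on a tiny, fabricated validation set, assuming a false negative costs ten times as much as a false positive; every number here is an illustrative assumption.

// Cost-sensitive threshold selection on a tiny, fabricated validation set.
object ThresholdSketch extends App {

  // (predicted risk score, true label) pairs - made-up validation data.
  val validation = Seq(
    (0.95, 1), (0.80, 1), (0.70, 0), (0.60, 1),
    (0.40, 0), (0.30, 1), (0.20, 0), (0.10, 0)
  )

  val costFalseNegative = 10.0 // missed at-risk patient (assumed cost)
  val costFalsePositive = 1.0  // unnecessary intervention (assumed cost)

  // Total cost incurred on the validation set at a given threshold.
  def cost(threshold: Double): Double = validation.map {
    case (score, 1) if score < threshold  => costFalseNegative
    case (score, 0) if score >= threshold => costFalsePositive
    case _                                => 0.0
  }.sum

  val candidates = (1 to 9).map(_ / 10.0)
  val best = candidates.minBy(cost)
  println(f"best threshold: $best%.1f with total cost ${cost(best)}%.1f")
}

In practice the threshold would be chosen on a proper validation set, and the costs would come from clinical and economic expertise rather than a guess.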

The future of data science in the healthcare sector

The future holds a lot of promise for data science in healthcare. Wearable devices that track all kinds of activity and biometric data are becoming more sophisticated and more common. Streaming data coming from either wearables or devices providing treatment (such as dialysis machines) could eventually be used to provide real-time alerts to patients or clinicians about health events outside of the hospital.

Currently, a major issue facing medical providers is that patients’ data tends to exist in silos. There is little integration across electronic medical record systems (both between and within medical providers), which can lead to fragmented care: clinicians may receive out-of-date or incomplete information about a patient, and treatments may be duplicated. Through a major data engineering effort, these systems could (and should) be integrated. This would vastly increase the potential of data scientists and data engineers, who could then provide analytics services that take a patient’s whole history into account and bring a level of consistency across care providers. Data workers could use such an integrated record to alert clinicians to duplicated procedures or dangerous prescription drug combinations.

Data scientists have a lot to offer in the healthcare industry. The advances of machine learning and data science can and should be adopted in a space where the health of individuals can be improved. The opportunities for data scientists in this sector are nearly endless, and the potential for good is enormous.

Neural Nets: Time Series Prediction

Artificial neural networks are very strong universal approximators. Google recently defeated the world’s strongest player of Go (the ancient board game) with two neural nets, which captured the game board as a picture. Aside from such classification tasks, neural nets can be used to predict future values, behaviors or patterns solely based on learned history. In the machine learning literature, this is often referred to as time series prediction, because, you know, values over time need to be predicted. Hah! To illustrate the concept, we will train a neural net to learn the shape of a sinusoidal wave, so it can continue to draw the shape without any help. We will do this with Scala, which is a great language because it is strongly typed yet feels as easy as Python. Throughout this article, I will use NeuroFlow, a simple, lightweight library I wrote to build and train nets. Because Open Source is the way to go, feel free to check (and contribute to? :-)) the code on GitHub.

Introduction of the shape

If we, as humans, want to predict the future based on historic observations, we have no choice but to be guided by the shape drawn so far. Let’s study the plot below, asking ourselves: how would a human continue the plot?

[Figure: plot of f(x) = sin(10 * x), with a grey dotted line roughing out a possible continuation]

Intuitively, we would keep oscillating up and down, just like the grey dotted line tries to rough out. To us, the continuation of the shape is reasonably easy to see, but a machine does not have a gut feeling to ask for a good guess. However, we can summon a Frankenstein which will be able to learn and continue the shape based on numbers alone. To do so, let’s have a look at the raw, discrete data of our sinusoidal wave:

x     f(x)
0.0   0.0
0.05  0.479425538604203
0.10  0.8414709848078965
0.15  0.9974949866040544
0.20  0.9092974268256817
0.25  0.5984721441039564
0.30  0.1411200080598672
0.35  -0.35078322768961984
...   (rows for 0.40 through 0.70 omitted)
0.75  0.9379999767747389
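For reference, this table can be reproduced (modulo the omitted middle rows and floating-point rounding in the printed x values) with a one-liner over the same range and step size used for training below:

// Print the discrete samples of f(x) = sin(10 * x) for x in [0.0, 0.75] with step size 0.05.
Range.Double(0.0, 0.8, 0.05).foreach(x => println(s"$x ${math.sin(10 * x)}"))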

Ranging from 0.0 to 0.75, these discrete values, drawn from our function with step size 0.05, will be the basis for training. Now, one could come up with the idea to just memorize all values, so that a sufficiently reasonable value can be picked based on comparison. For instance, to continue at the point 0.75 in our plot, we could simply examine the area close to 0.15, notice a similar value close to 1, and hence go downwards. Of course this is cheating, but if a good cheat is a superior solution, why not cheat? Being hackers, we wouldn’t care. What really limits this approach is that the whole data set needs to be kept in memory, which can be infeasible for large sets; moreover, for more complex shapes, it would quickly require a lot of weird rules and exceptions to produce comprehensible predictions.
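For the curious, such a cheat could look roughly like the following nearest-neighbor lookup: find the stored sample whose value best matches the last seen one and replay what the curve did next. This is a deliberately naive sketch of the memorization idea, not part of NeuroFlow.

// Naive "memorize and replay" continuation: find the stored sample whose
// value is closest to the last seen value and return the sample after it.
object MemorizeSketch extends App {

  val samples: Vector[Double] =
    Range.Double(0.0, 0.8, 0.05).toVector.map(x => math.sin(10 * x))

  def continue(last: Double): Double = {
    // Index of the stored value most similar to `last` (excluding the final
    // sample, since we need a successor to replay).
    val i = samples.init.zipWithIndex.minBy { case (v, _) => math.abs(v - last) }._2
    samples(i + 1)
  }

  println(continue(samples.last)) // replays the slope observed after a similar value
}

Note that this already breaks down where the curve passes the same value once going up and once going down, which is exactly the kind of exception the paragraph above alludes to.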

Net to the rescue

Let’s go back to our table and see if a neural net can learn the shape instead of simply memorizing it. Here, we want our net architecture to be of kind [3, 5, 3, 1]: three input neurons, two hidden layers with five and three neurons respectively, and one output neuron will capture the data shown in the table.

[Figure: the [3, 5, 3, 1] net architecture]

Supervised training means that we train our net with three discrete steps as input and the fourth step as the supervised target. So we will train a, b, c -> d and e, f, g -> h, et cetera, hoping that this way our net will capture the slope pattern of our sinusoidal wave. Let’s code this in Scala:

import neuroflow.core.Activator.Tanh 
import neuroflow.core.WeightProvider.randomWeights 
import neuroflow.nets.DynamicNetwork.constructor

First, we want a Tanh activation function, because the range of our sinusoidal wave is [-1, 1], just like that of the hyperbolic tangent. This way we can be sure that we are not comparing apples with oranges. Further, we want a dynamic network (adaptive learning rate) and random initial weights. Let’s put this down:

val fn = Tanh.apply
// Settings: verbose flag, learning rate, desired precision (stop criterion),
// max iterations, plus optional solver-specific parameters.
val sets = Settings(true, 10.0, 0.0000001, 500, None, None, Some(Map("τ" -> 0.25, "c" -> 0.25)))
val net = Network(Input(3) :: Hidden(5, fn) :: Hidden(3, fn) :: Output(1, fn) :: Nil, sets)

No surprises here. After some experimenting, we can pick values for the settings instance that promise good convergence during training. Now, let’s prepare our discrete steps drawn from the sine function:

val group = 4
// Sample (x, sin(10 * x)) pairs with step size 0.05 and group them in fours.
val sinusoidal = Range.Double(0.0, 0.8, 0.05).grouped(group).toList.map(i => i.map(k => (k, Math.sin(10 * k))))
// Per group: the first three function values form the input, the fourth is the supervised target.
val xsys = sinusoidal.map(s => (s.dropRight(1).map(_._2), s.takeRight(1).map(_._2)))
val xs = xsys.map(_._1)
val ys = xsys.map(_._2)
net.train(xs, ys)

We draw samples from the range with step size 0.05 and then construct our training inputs xs as well as our supervised output values ys. A group consists of 4 steps, with the first 3 steps as input and the last step as the supervised value. The first group, for instance, maps f(0.0), f(0.05), f(0.10) to the target f(0.15).

[INFO] [25.01.2016 14:07:51:677] [run-main-5] Taking step 499 - error: 1.4395661497489177E-4  , error per sample: 3.598915374372294E-5
[INFO] [25.01.2016 14:07:51:681] [run-main-5] Took 500 iterations of 500 with error 1.4304189739640242E-4  
[success] Total time: 4 s, completed 25.01.2016 14:20:56

After a pretty short time, we will see good news. Now, how can we check whether our net can successfully predict the sinusoidal wave? We can’t simply call our net like a sine function to map from one input value to one output value, e.g. something like net(0.75) == sin(0.75). Our net does not care about any x values, because it was trained purely on the function values f(x), or the slope pattern in general. We need to feed our net with a three-dimensional input vector holding the first three original function values to predict the fourth step, then drop the first original step and append the freshly predicted step to predict the fifth step, et cetera. In other words, we need to traverse the net. Let’s code this:

val initial = Range.Double(0.0, 0.15, 0.05).zipWithIndex.map(p => (p._1, xs.head(p._2)))
val result = predict(net, xs.head, 0.15, initial)
result.foreach(r => println(s"${r._1}, ${r._2}"))

with

import scala.annotation.tailrec

@tailrec def predict(net: Network, last: Seq[Double], i: Double, results: Seq[(Double, Double)]): Seq[(Double, Double)] = {
  if (i < 4.0) {
    // Evaluate the net on the last three values, then slide the window:
    // drop the oldest value and append the freshly predicted one.
    val score = net.evaluate(last).head
    predict(net, last.drop(1) :+ score, i + 0.05, results :+ (i -> score))
  } else results
}

So basically, we don’t just continue to draw the sinusoidal shape at the point 0.75; we draw the entire shape from the start until 4.0, solely based on our trained net! Now, let’s see how our Frankenstein completes the sinusoidal shape from 0.75 on:

[Figure: the net’s continuation of the sinusoidal shape from 0.75 on]

I’d say: pretty neat! Keep in mind that the discrete predictions are connected through splines here. Another interesting property of our trained net is its prediction compared to the original sine function when taking the limit towards 4.0. Let’s plot both:

[Figure: original sine wave (purple) vs. net prediction (green) up to 4.0]

The purple line is the original sinusoidal wave, whereas the green line is the prediction of our net. The first steps show great consistency, but the curves slowly diverge a little over time, as uncertainties add up. To keep this divergence low, one could fine-tune the settings, for instance the numeric precision. However, when taking the limit towards infinity, a perfect fit is illusory.
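To put a number on that divergence, one could compare the traversal result against the true function values, for instance with a root-mean-square error over the predicted range. A small sketch, assuming result is the Seq[(Double, Double)] of (x, prediction) pairs produced by predict above:

// RMSE between the net's predictions and the true sine values.
val rmse = math.sqrt(
  result.map { case (x, p) => math.pow(p - math.sin(10 * x), 2) }.sum / result.size
)
println(f"RMSE over the predicted range: $rmse%.6f")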

Final thoughts

That’s it! We have trained our net to learn and continue the sinusoidal shape. I know this is a rather academic example, but training a neural net to learn more complex shapes is straightforward from here.

Thanks for reading!