
Interview – There is no stand-alone strategy for AI, it must be part of the company-wide strategy

Ronny Fehling is Partner and Associate Director for Artificial Intelligence at Boston Consulting Group GAMMA. With more than 20 years of continually progressive experience in leading business and technology innovation, spearheading digital transformation, and aligning corporate strategy with Artificial Intelligence, he helps industry-leading organizations grow their top line and kick-start their digital transformation.

Ronny Fehling is also a speaker at Predictive Analytics World for Industry 4.0 in May 2020.

Data Science Blog: Mr. Fehling, you advise companies and business leaders on AI and how to get started with it. AI as a definition is often misleading. How do you define AI?

This is a good question. I think there are two ways to answer this:

From a technical definition, I often see expressions about “simulation of human intelligence” and “acting like a human”. I find using these terms more misleading than helpful. I studied AI back when it wasn’t yet “cool” and we were still in the middle of the AI winter. And yes, we have much more compute power and access to data, but we also think about data in a very different way. For me, I typically distinguish between machine learning, which uses algorithms and statistical methods to identify patterns in data, and AI, which for me attempts to interpret the data in a given context. So machine learning can help me identify and analyze frequency patterns in text and even predict the next word I will type based on my history. AI will help me identify ‘what’ I’m writing about – even if I don’t explicitly name it. It can tell me that when I’m asking “I’m looking for a place to stay”, I might want to see a list of hotels around me. In other words: machine learning can detect correlations and similar patterns, AI uses machine learning to generate insights.

I always wondered why top executives so frequently ask about the definition of AI, because at first it did not seem that relevant to the discussion on how to align AI with their corporate strategy. However, I started to realize that their question is ultimately about “What is AI and what can it do for me?”.

For me, AI can do three things really well that humans cannot really do and that previous approaches couldn’t cope with:

  1. Finding similar patterns in historical data. Imagine 20 years of data like maintenance or repair documents of a manufacturing plant. Although they describe work done on a multitude of products due to a multitude of possible problems, AI can use this to look for a very similar situation based on a current problem description. This can be used to identify a common root cause as well as a common solution approach, saving valuable time for the operation (see the sketch after this list).
  2. Finding correlations across time or processes. This is often used in predictive maintenance use cases. Here, the AI tries to identify which events typically occur some time before a failure happens. This way, it can alert the operator much earlier about an impending failure, say due to a change in the vibration pattern of the machine.
  3. Finding an optimal solution path based on many constraints. There are many problems in the business world where choosing the optimal path in a complex situation is critical. Let’s say a severe weather warning at an airport suddenly forces an airline to change its scheduling because of reduced airport capacity. Delays for some aircraft can cause disruptions because passengers or personnel are no longer able to make their connections. Knowing which aircraft to delay, which to cancel, and which to switch while causing the minimal amount of disruption to passengers, crew, maintenance and ground crew is something AI can help with.
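
To make the first capability concrete, here is a minimal sketch of how historical maintenance records could be searched for cases similar to a new problem description. It is illustrative only: the file name, the columns and the TF-IDF/cosine-similarity approach are assumptions for this article, not a description of any specific production system.

```python
# Minimal sketch: retrieve historical maintenance records similar to a new problem
# description using TF-IDF vectors and cosine similarity (assumed approach).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archive with columns "problem_description" and "solution".
history = pd.read_csv("maintenance_records.csv")

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(history["problem_description"])

query = "vibration and grinding noise on conveyor gearbox"
query_vector = vectorizer.transform([query])

# Rank historical cases by similarity and show the most relevant past solutions.
similarity = cosine_similarity(query_vector, doc_vectors).ravel()
top_matches = history.assign(score=similarity).nlargest(5, "score")
print(top_matches[["problem_description", "solution", "score"]])
```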

The key now is to link these fundamental capabilities with the business context of the company and how they can ultimately help transform the business.

Data Science Blog: Many companies are still working on their own company-wide data strategy. And now they are talking about AI strategies. Is that something that should be handled separately?

In my experience – both from having seen the implementations of several corporate data strategies and from my background at Oracle – the data strategy and AI strategy are co-dependent and cannot be separated. Very often I hear from clients that they think they first need to get their data in order before doing AI projects. And yes, without good data access, AI cannot really work. In fact, most of the time spent on AI is spent on processing, cleansing, understanding and contextualizing the data. However, you cannot really know what data will be needed in which form without knowing what you want to use it for. This is why strategies that handle data and AI separately mostly fail and generate huge costs.

Data Science Blog: What are the important steps for developing a good data strategy? Is there something like a general approach?

In my eyes, the AI strategy defines the data strategy step by step as more use cases are implemented. Rather than focusing too quickly on how to get all corporate data into a data lake, it is much more important to start creating a use-case, technology and data governance. This governance has to be established once the AI strategy starts to mature, to enable scale-up and productization. At the beginning, the goal is to find the (very few) use cases that can serve as lighthouse projects to demonstrate (1) value impact, (2) a way to go from MVP to pilot, and (3) how to address the data challenge. This will then more naturally identify the elements of governance, data access and technology that are required.

Data Science Blog: What are the most common questions from business leaders to you regarding AI? Why do they hesitate to get started?

By far the most common question I get is: how do I get started? The hesitations often come from multiple sources like: “We don’t have the talent in house to do AI”, “Our data is not good enough”, “We don’t know which use case to start with”, “It’s not easy for us to embrace an agile and failure culture because our products are mission critical”, “We don’t know how much value this can bring us”.

Data Science Blog: Most managers prefer to start small and with lower risk. They seem to postpone bigger ideas to a later stage, at least until some milestones have been reached. Is that a good idea or should they think bigger?

AI is often associated (rightfully so) with a new way of working – agile and embracing failure. Similarly, there is also the perception of significant cost to starting with AI (talent, technology, data). These perceptions often lead managers to want to start with several smaller-ambition use cases where failure isn’t that grave. Once these have somehow proven themselves, they would then move on to bigger projects. The problem with this strategy is that, on the one hand, you fragment your few precious AI resources across too many projects, and at the same time you cannot really demonstrate an impact, since the projects weren’t chosen based on their impact potential.

The AI pioneers typically were successful by “thinking big, starting small and scaling fast”. You start by assessing the value potential of a use case, for example: my current OEE (Overall Equipment Effectiveness) is at 65%. There is an addressable loss of 25% which would grow my top line by $X. With the help of AI experts, you then create a hypothesis of how you think you can reduce that loss. This might mean choosing one specific piece of equipment and 50% of the addressable loss. This is now the measure against which you define your failure or non-failure criteria. Once you have proven an MVP that can solve this loss, you scale up by piloting it in a real-life setting and then rolling it out to all the equipment. At every step of this process, you have a failure criterion that is measured by the impact value.
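
As a rough illustration of the sizing step behind “think big”, the following sketch turns OEE figures like those above into a monetary impact estimate. All numbers (the ambition level, the revenue per OEE point, the 50% hypothesis) are made up for illustration; they are not figures from the interview.

```python
# Illustrative back-of-the-envelope sizing of an AI use case (assumed numbers).
current_oee = 0.65               # current Overall Equipment Effectiveness
target_oee = 0.90                # ambition level, so the addressable loss is 25 points
revenue_per_oee_point = 400_000  # hypothetical yearly revenue per OEE percentage point

addressable_loss_points = (target_oee - current_oee) * 100   # 25 points
full_potential_value = addressable_loss_points * revenue_per_oee_point

# Hypothesis for the first MVP: one specific piece of equipment,
# targeting 50% of the addressable loss.
mvp_target_value = 0.5 * full_potential_value

print(f"Full potential: ${full_potential_value:,.0f} per year")
print(f"MVP target (failure/non-failure criterion): ${mvp_target_value:,.0f} per year")
```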



Interview – Predictive Maintenance and how it can unleash cost savings

Interview with Dr. Kai Goebel, Principal Scientist at PARC, a Xerox Company, about Predictive Maintenance and how it can unleash cost savings.

Dr. Kai Goebel is Principal Scientist at PARC with more than two decades of experience in corporate and government research organizations. He is responsible for leading applied research on state awareness, prognostics and decision-making using data analytics, AI, hybrid methods and physics-based methods. He has also fielded numerous applications for Predictive Maintenance at General Electric, NASA, and PARC for uses as diverse as rocket launchpads, jet engines, and chemical plants.

Data Science Blog: Mr. Goebel, predictive maintenance is not just hype, since industrial companies are already trying to establish this use case of predictive analytics. What benefits do they really expect from it?

Predictive Maintenance is a good example of how value can be realized from analytics. The result of the analytics drives decisions about when to schedule maintenance in advance of an event that might cause an unexpected shutdown of the process line. This is in contrast to an uninformed process where the decision is mostly reactive, that is, maintenance is scheduled because equipment has already failed. It is also in contrast to a time-based maintenance schedule. The benefits of Predictive Maintenance are immediately clear: one can avoid unexpected downtime, which can lead to substantial production loss. One can manage inventory better since lead times for equipment replacement can be managed well. One can also manage safety better since equipment health is understood and unsafe situations can potentially be avoided. Finally, maintenance operations will be inherently more efficient as they shift significant time from inspection to mitigation.

Data Science Blog: What are the most critical success factors for implementing predictive maintenance?

Critical for success is to get the trust of the operator. To that end, it is imperative to understand the limitations of the analytics approach and to not make false performance promises. Often, success factors for implementation hinge on understanding the underlying process and the fault modes reasonably well. It is important to be able to recognize the difference between operational changes and abnormal conditions. It is equally important to recognize rare events reliably while keeping false positives in check.

Data Science Blog: What kind of algorithm does predictive maintenance work with? Do you differentiate between approaches based on classical machine learning and those based on deep learning?

Well, there is no one kind of algorithm that works for Predictive Maintenance everywhere. Instead, one should look at the plurality of all algorithms as tools in a toolbox. Then analyze the problem – how many examples of run-to-failure trajectories are there; what is the desired lead time to report on a problem; what is the acceptable false positive/false negative rate; what are the different fault modes; etc. – and use the right kind of tool to do the job. Just because a particular approach (like the one you mentioned in your question) is all the hype right now does not mean it is the right tool for the problem. Sometimes, approaches from what you call “classical machine learning” actually work better. In fact, one should consider approaches even outside the machine learning domain, either as a stand-alone approach or in a hybrid configuration. One may also have to invent new methods, for example to perform online learning of the dynamic changes that a system undergoes through its (long) life. In the end, a customer does not care about what approach one is using, only whether it solves the problem.
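
Dr. Goebel does not prescribe a specific method, but as one example of a “classical” tool from that toolbox, the sketch below flags abnormal vibration windows with an isolation forest trained on healthy-operation data. The files, feature names and parameters are hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch of a classical anomaly-detection baseline for condition monitoring.
# All file names, features and parameters are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

healthy = pd.read_csv("healthy_vibration_features.csv")   # e.g. rms, kurtosis, peak_freq
recent = pd.read_csv("recent_vibration_features.csv")

features = ["rms", "kurtosis", "peak_freq"]
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(healthy[features])

# predict() returns -1 for windows that look unlike healthy operation.
recent["anomaly"] = model.predict(recent[features]) == -1
print(recent.loc[recent["anomaly"], ["timestamp"] + features].head())
```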

Data Science Blog: There are several providers of predictive analytics software. Is it all about software tools? What makes the difference for success?

Frequently, industrial partners lament that they have to spend a lot of effort teaching a new software provider about the underlying industrial processes as well as the equipment and its fault modes. Others are tired of false promises that any kind of data (as long as you have massive amounts of it) can produce any kind of performance. If one does not physically sense a certain modality, no algorithmic magic can take place. In other words, it is not just all about the software. The difference for success is understanding that there is no cookie-cutter approach. And that realization means that one may have to roll up one’s sleeves and install new instrumentation.

Data Science Blog: What are the coming trends? What do you think will be the main topics in 2020 and 2021?

Predictive Maintenance is slowly evolving towards Prescriptive Maintenance. Here, one seeks not only to inform about an impending problem, but also about what to do about it. Such an approach needs to integrate with the logistics element of an organization to find an optimal decision that trades off several objectives with regard to equipment uptime, process quality, repair shop loading, procurement lead time, maintainer availability, safety constraints, contractual obligations, etc.
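
To illustrate the kind of trade-off a prescriptive layer has to make, here is a toy sketch that allocates a limited pool of maintainer hours across two machines with a simple linear program. The cost figures, hour limits and the linear-programming formulation are all assumptions for illustration, not a description of how any particular product works.

```python
# Toy prescriptive-maintenance trade-off: allocate scarce maintainer hours across
# two machines to maximize net avoided-downtime value. All numbers are made up.
from scipy.optimize import linprog

labor_cost = 80.0                 # cost per maintainer hour
avoided_cost = [300.0, 150.0]     # downtime cost avoided per hour on machine A, B

# linprog minimizes, so use net cost per hour (labor minus avoided downtime cost).
c = [labor_cost - avoided_cost[0], labor_cost - avoided_cost[1]]

A_ub = [[1, 1]]                   # total maintainer availability this week
b_ub = [40]
bounds = [(4, 24),                # machine A: at least 4 h (safety check), night shift only
          (0, 20)]                # machine B: at most 20 h before the next production run

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
hours_a, hours_b = result.x
print(f"Machine A: {hours_a:.0f} h, Machine B: {hours_b:.0f} h, "
      f"net benefit: {-result.fun:,.0f}")
```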

Looking for the ‘aha moment’: An expert’s insights on process mining

Henny Selig is a specialist in process mining, with significant expertise in the implementation of process mining solutions and supporting customers with process analysis. As a Solution Owner at Signavio, Henny is also well versed in bringing Signavio Process Intelligence online for businesses of all shapes and sizes. In this interview, Henny shares her thoughts about the challenges and opportunities of process mining. 


Read this interview in German:

Im Interview mit Henny Selig zu Process Mining: “Für den Kunden sind solche Aha-Momente toll“

 


Henny, could you give a simple explanation of the concept of process mining?

Basically, process mining is a combination of data analysis and business process management. IT systems support almost every business process, meaning they leave behind digital traces. We extract all the data from the IT systems connected to a particular process, then visualize and evaluate it with the help of data science technology.
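
As a rough idea of what “extracting and evaluating digital traces” can look like in practice, the sketch below counts which activity directly follows which in an event log – the basic building block of a process map. The log file and column names are hypothetical, and this is not a description of how Signavio Process Intelligence works internally.

```python
# Minimal sketch: derive directly-follows relations (the backbone of a process map)
# from an event log. File and column names are hypothetical.
from collections import Counter
import pandas as pd

log = pd.read_csv("purchase_order_events.csv",
                  parse_dates=["timestamp"])   # columns: case_id, activity, timestamp

follows = Counter()
for _, case in log.sort_values("timestamp").groupby("case_id"):
    activities = case["activity"].tolist()
    follows.update(zip(activities, activities[1:]))

# Most frequent handovers between process steps, e.g. ("Create PO", "Approve PO").
for (source, target), count in follows.most_common(10):
    print(f"{source} -> {target}: {count}")
```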

In short, process mining builds a bridge between employees, process experts and management, allowing for a data-driven and fact-based approach to business process optimization. This helps avoid thinking in siloes, as well as enabling transparent design of handovers and process steps that cross departmental boundaries within an organization.

When a business starts to analyze its process data, what sorts of questions do they ask? Do they at least have some expectation of what process mining can offer?

That’s a really good question! There isn’t really a single good answer, as it is different for different companies. For example, we were presenting the complete data set to one procurement manager, and it turned out there was an approval at one point, but it should have been at another. He was really surprised, but we weren’t, because we sat outside the process itself and were able to take a broader view.

We also had different questions that the company hadn’t considered – things like what the process flow looks like if an order amount is below 1,000 euros, and how often that occurs. Just questions that seem obvious to an outsider but often do not occur to process owners.

So do people typically just have an idea that something is wrong, or do they generally understand there is a specific problem in one area, and they want to dive deeper? 

There are people who know that a process is generally running well, but also know that a particular problem pops up repeatedly. Usually, even if people say they don’t have a particular focus or question, most of them actually do, because they know their area. They already have some assumptions and ideas, but sometimes these are buried so deep in their minds that they can’t actually articulate them.

Often, if you ask people directly how they do things, it can put pressure on them, even if that’s not the intention. If this happens, people may hide things without meaning to, because they already have a feeling that the process or workflow they are describing is not perfect, and they want to avoid blame. 

The approvals example I mentioned above is my favorite because it is so simple. We had a team who all said, over and over, “We don’t approve this type of request.” However, the data said they did–the team didn’t even know. 

We then talked to the manager, who was interested in totally different ideas, like all these risks, approvals, are they happening, how many times this, how many times that — the process flow in general. Just by having this conversation, we were able to remove the mismatch between management and the team, and that is before we even optimized the actual process itself. 

So are there other common issues or mismatches that people should be aware of when beginning their process mining initiative?

The one I often return to is that not every variation that is out of line with the target model is necessarily negative. Very few processes, apart from those that run entirely automatically, actually conform 100% to the intended process model—even when the environment is ideal. For this reason, there will always be exceptions requiring a different approach. This is the challenge in projects: finding out which variations are desirable, and where to make necessary exceptions.

So would you say that data-based process analysis is a team effort?

Absolutely! In every phase of a process mining project, all sorts of project members are included. IT makes the data available and helps with the interpretation of the data. Analysts then carry out the analysis and discuss the anomalies they find with IT, the process owners, and experts from the respective departments. Sometimes there are good reasons to explain why a process is behaving differently than expected. 

In this discussion, it is incredibly helpful to document the thought process of the team with technical means, such as Signavio Process Intelligence. In this way, it is possible to break the analysis down into individual processes and to bring the right person into the discussion at the right point without losing the thread. The next colleague who picks up the topic can then see the thread of the analysis and properly classify the results.

At the very least, we can provide some starting points. Helping people reach an “aha moment” is one of the best parts of my job!

To find out more about how process mining can help you understand and optimize your business processes, visit the Signavio Process Intelligence product page. If you would like to get a group effort started in your organization right now, why not sign up for a free 30-day trial with Signavio today.

Interview – Customer Data Platform, more than CRM 2.0?

Interview with David M. Raab from the CDP Institute

David M. Raab is a consultant specializing in marketing software and service vendor selection, marketing analytics and marketing technology assessment. He is also the founder of the Customer Data Platform Institute, a vendor-neutral educational project that helps marketers build a unified customer view available to all of their company’s systems.

He is also a keynote speaker at the Predictive Analytics World event 2019 in Berlin.

Data Science Blog: Mr. Raab, what exactly is a Customer Data Platform (CDP)? And where is the need for it?

The CDP Institute defines a Customer Data Platform as „packaged software that builds a unified, persistent customer database that is accessible by other systems“.  In plainer language, a CDP assembles customer data from all sources, combines it into customer profiles, and makes the profiles available for any use.  It’s important because customer data is collected in so many different systems today and must be unified to give customers the experience they expect.
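
To make the definition more tangible, here is a toy sketch of the core CDP operation: assembling records from several source systems into one persistent profile per customer, keyed here by email address. The data structures and the matching rule are simplified assumptions; real CDPs use far more sophisticated identity resolution.

```python
# Toy sketch of a CDP's core job: merge customer records from several source
# systems into unified profiles keyed by a shared identifier (here: email).
from collections import defaultdict

source_records = [
    {"source": "web",    "email": "anna@example.com", "last_page": "/pricing"},
    {"source": "crm",    "email": "anna@example.com", "phone": "+49 30 1234567"},
    {"source": "orders", "email": "anna@example.com", "lifetime_value": 742.50},
    {"source": "crm",    "email": "ben@example.com",  "segment": "B2B"},
]

profiles = defaultdict(dict)
for record in source_records:
    key = record["email"].lower()
    # Keep attributes from every system; remember which systems contributed.
    profiles[key].update({k: v for k, v in record.items() if k != "source"})
    profiles[key].setdefault("sources", set()).add(record["source"])

# Any downstream system (CRM, personalization, analytics) reads the same profile.
print(profiles["anna@example.com"])
```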

Data Science Blog: Is it something like a CRM System 2.0? What Use Cases can be realized by a Customer Data Platform?

CRM systems are used to interact directly with customers, usually by telephone or in the field.  They work almost exclusively with data that is entered during those interactions.  This gives a very limited view of the customer since interactions through other channels such as order processing or Web sites are not included.  In fact, one common use case for CDP is to give CRM users a view of all customer interactions, typically by opening a window into the CDP database without needing to import the data into the CRM.  There are many other use cases for unified data, including customer segmentation, journey analysis, and personalization.  Anything that requires sharing data across different systems is a CDP use case.

Data Science Blog: When does a CDP make sense for a company? It is more relevant for retail and financial companies than for industrial companies, isn't it?

CDP has been adopted most widely in retail and online media, where each customer has many interactions and there are many products to choose from.  This is a combination that can make good use of predictive modeling, which benefits greatly from having more complete data.  Financial services was slower to adopt, probably because they have fewer products but also because they already had pretty good customer data systems.  B2B has also been slow to adopt because so much of their customer relationship is handled by sales people.  We’ve more recently been seeing growth in additional sectors such as travel, healthcare, and education.  Those involve fewer transactions than retail but also rely on building strong customer relationships based on good data.

Data Science Blog: There are several providers for CDPs. Adobe, Tealium, Emarsys or Dynamic Yield, just to name some of them. Do they differ a lot between each other?

Yes they do.  All CDPs build the customer profiles I mentioned.  But some do more things, such as predictive modeling, message selection, and, increasingly, message delivery.  Of course they also vary in the industries they specialize in, regions they support, size of clients they work with, and many technical details.  This makes it hard to buy a CDP but also means buyers are more likely to find a system that fits their needs.

Data Science Blog: How established is the concept of the CDP in Europe in general? And how in comparison with the United States?

CDP is becoming more familiar in Europe but is not as well understood as in the U.S.  The European market spent a lot of money on Data Management Platforms (DMPs) which promised to do much of what a CDP does but were not able to because they do not store the level of detail that a CDP does.  Many DMPs also don’t work with personally identifiable data because the DMPs primarily support Web advertising, where many customers are anonymous.  The failures of DMPs have harmed CDPs because they have made buyers skeptical that any system can meet their needs, having already failed once.  But we are overcoming this as the market becomes better educated and more success stories are available.  What’s the same in Europe and the U.S. is that marketers face the same needs.  This will push European marketers towards CDPs as the best solution in many cases.

Data Science Blog: What are the coming trends? What will be the main topic in 2020?

We see many CDPs with broader functions for marketing execution: campaign management, personalization, and message delivery in particular.  This is because marketers would like to buy as few systems as possible, so they want broader scope in each system.  We’re seeing expansion into new industries such as financial services, travel, telecommunications, healthcare, and education.  Perhaps most interesting will be the entry of Adobe, Salesforce, and Oracle, who have all promised CDP products late this year or early next year.  That will encourage many more people to consider buying CDPs.  We expect that the market will expand quite rapidly, so current CDP vendors will be able to grow even as Adobe, Salesforce, and Oracle make new CDP sales.


Do you want to get in touch with David M. Raab and understand more about the concept of a CDP? Meet him at Predictive Analytics World on 18 and 19 November 2019 in Berlin, Germany. As a keynote speaker, he will introduce the concept of a Customer Data Platform in the light of Predictive Analytics. Click here to see the agenda of the event.

 


 

Interview – The Importance of Machine Learning for the Data Driven Business

To become more data-driven, organizations must mature their analytics and automate more of their decision-making processes for innovation and differentiation. Data science seems like the right approach, yet it is a new and fast-moving field that seems to have as many dead ends as it has highways to value. Cloudera Fast Forward Labs, led by Hilary Mason, shows companies the way.

Alice Albrecht is a research engineer at Cloudera Fast Forward Labs.  She spends her days researching the latest and greatest in machine learning and artificial intelligence and bringing that knowledge to working prototypes and delivering concrete advice for clients.  Prior to joining Fast Forward Labs, Alice worked in both finance and technology companies as a practicing data scientist, data science leader, and – most recently – a data product manager.  In addition to teaching machines to do cool things, Alice is passionate about mentoring and helping others grow in their careers.  Alice holds a PhD from Yale in cognitive neuroscience where she studied how humans summarize sensory information from the world around them and the neural substrates that underlie those summaries.

Read this article in German:
“Interview – Die Bedeutung von Machine Learning für das Data Driven Business“

Data Science Blog: Ms. Albrecht, you are a well-known keynote speaker for data science and artificial intelligence. While data science has already arrived in business, deep learning seems to be the new trend. Is artificial intelligence for business already normal business or is it an overrated hype?

I’d say it isn’t either of those two options.  Data science is now widely adopted, but companies still struggle to integrate this new discipline into their existing businesses.  As for deep learning, it really depends on the company that’s looking into using this technique.  I wouldn’t say that deep learning is by any means part of business as usual – nor should it be.  It’s a tool like any other, and building a capacity for using a tool without clearly defined business needs is a recipe for disaster.

Data Science Blog: Just to make sure what we are talking about: What are the differences and overlaps between data analytics, data science, machine learning, deep learning and artificial intelligence?

Here at Cloudera Fast Forward Labs, we like to think of data analytics as collecting data and counting things (mostly for quick charts and reports).  Data science solves business problems by counting cleverly and predicting things with the data that’s collected.  Machine learning is about solving problems with new kinds of feedback loops that improve with more data.  Deep learning is a particular type of machine learning and is not itself a separate concept or type of tool.  Artificial intelligence taps into something more complicated than what we’re seeing today – it’s much broader than training machines to repetitively do very specialized tasks or solve very narrow problems.

Data Science Blog: And how can we add the context to big data?

From a theoretical perspective, data science has been around for decades. The building blocks for modern-day machine learning, deep learning and artificial intelligence are based on mathematical theorems that go back to the 1940s and 1950s. The challenge was that at the time, compute power and data storage capacity were simply too expensive for the approaches to be implemented. Today, that’s all changed. Not only has the cost of data storage dropped considerably, open source technology like Apache Hadoop has made it possible to store any volume of data at costs approaching zero. Compute power, even highly specialised chip architectures, is now also available on demand and only for the time organisations need it, through public and private cloud solutions. The decreased cost of both data storage and compute power, together with a growing list of tools and resources readily available via the open source community, allows companies of any size to benefit from data (no matter the size of that data).

Data Science Blog: What are the challenges for organizations in getting started with data science?

I see two big challenges when getting started with data science.  One is ensuring that you have organizational alignment around exactly what type of work data scientists will deliver (and timing for those projects).  The second hurdle is around ensuring that you have the right data in place before you start hiring data scientists. This can be tricky if you don’t have in-house expertise in this area, so sometimes it’s better to hire a data engineer or a data strategist (or director of data science) before you ever get started building out a data science team.

Data Science Blog: There are many discussions about how to build a data-driven business. Is it just about using data science to get a better understanding of customer behavior?

No, being data driven doesn’t just mean better understanding your customers (though that is one way that data science can help in an organization).  Aside from building an organization that relies on data and analytics to help them make decisions (about customer behavior or otherwise), being a data-driven business means that data is powering your core products.

Data Science Blog: The number of technologies, tools and frameworks is increasing. For organizations this also means increasing complexity. Do companies need to stay up-to-date at all times, or could it be advisable to wait and imitate pioneers later?

While it’s not critical (or advisable) for organizations to adopt every new advancement that comes along, it is critical for them to stay abreast of emerging frameworks.  If a business waits to see what others are doing, and therefore doesn’t invest in understanding how new advancements can affect their particular business, they’ve likely already missed the boat.

Data Science Blog: Global players have big budgets just for doing research and setting up data labs. Middle-sized companies need to see the break even point soon. How can we accelerate the value generation of data science?

Having a team that is highly focused on a specific set of projects that are well-scoped and aligned to the business makes all the difference.  Data science and machine learning don’t have to sacrifice doing research and being innovative in order to produce value.  The biggest difference is that smaller teams will have to be more aware of how their choice of project fits into emerging frameworks and their particular acute and near term business needs.

Data Science Blog: How does Cloudera Fast Forward Labs help other organizations to accelerate their start with machine learning?

We advise organizations, based on their particular needs, on what the latest advancements are in machine learning and data science, how to build and structure their data teams to develop the capabilities they need to meet their goals, and how to quickly implement custom forward-looking solutions using their own data and in-house expertise.

Data Science Blog: Finally, a question for our younger readers who are looking for a career as a data expert: What makes a good data scientist? Do you prefer to work with introverted coding nerds or with data-loving business experts?

A good data scientist should be deeply curious and have a love for the ways in which data can lead to new discoveries and power the next generation of products.  We expect the people who thrive in this field to come from a variety of backgrounds and experiences.

Interview – Python as a productive data science environment

Miroslav Šedivý is a Senior Software Architect at UBIMET GmbH, using Python to make the sun shine and the wind blow. He is an enthusiast of both human and programming languages and found Python to be his language of choice for setting up very productive environments. Mr. Šedivý was born in Czechoslovakia, studied in France and is now living in Germany. Furthermore, he helps organize the events PyCon.DE and Polyglot Gathering.


On 26th June 2018 he will explain at the Python@DWX conference why “Lifelong Text Hackers Use Vim and Python”. Insert the promotion code PY18science to unlock your 10% discount on all tickets. More info and tickets on python-con.com.


Data Science Blog: Mr. Šedivý, how did you find the way to Python as your favorite programming language?

Apart from the traditional languages taught at school (Basic, Pascal, C, Java), some twenty years ago I learned Perl to hack a dynamic web site and used it to automate my daily tasks. Later I used it professionally for scientific calculations in production. This was later replaced by Python, its newer versions and more advanced libraries. Nowadays Python has almost completely replaced Perl as my principal language, and I use Perl just to hack some command line filters and to impress colleagues.

Data Science Blog: Python is one of the most popular programming languages for data scientists. This is remarkable as it was not originally designed for data science. What made it a competitor to languages like R or Julia?

Python is the most powerful programming language that is still legible. This appeals to data scientists who can enter each line interactively, and immediately see what happens, because each line actually does something. They can inspect their data easily and build automating systems to process their data transparently.

Data Science Blog: Is there anything you could do better with another programming language?

Sometimes I’m playing with some functional languages that would allow me to write code that is easier to test and parallelize.

Data Science Blog: Which libraries are the most important ones for your daily business?

The whole Pandas ecosystem with Numpy and Scipy. Matplotlib for plots, PyTables and Psycopg2 for storage. I’m also importing a few async libs for webservices and similar network-based software.

I also enjoy discovering the world of Unicode and timezones – both are spots where programmers absolutely have to obey the chaotic reality of the outside world.
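
As a small illustration of why both areas force you to respect the outside world, here is a sketch showing a daylight-saving edge case and a Unicode normalization pitfall, using only the Python standard library (zoneinfo requires Python 3.9+; none of this is code from the interview).

```python
# Two classic "outside world" pitfalls: timezones and Unicode (Python 3.9+ stdlib).
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
import unicodedata

# 24 elapsed hours across a DST change is not "the same wall-clock time tomorrow".
berlin = ZoneInfo("Europe/Berlin")
start = datetime(2020, 3, 28, 12, 0, tzinfo=berlin)          # day before the DST switch
one_day_later = (start.astimezone(ZoneInfo("UTC")) + timedelta(hours=24)).astimezone(berlin)
print(one_day_later)   # 2020-03-29 13:00 local time, not 12:00

# Two visually identical strings can differ byte-wise until they are normalized.
composed = "Šedivý"                      # 'Š' and 'ý' as single code points
decomposed = "S\u030cediv\u0079\u0301"   # 'S' + combining caron, 'y' + combining acute
print(composed == decomposed)                                  # False
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))                # True
```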

Data Science Blog: Which editor do you use? And how do you set it up as a productive environment?

I tried several editors and IDEs, but always came back to Vi or Vim. This is an extremely powerful editor that has been around for over forty years, which was probably before most of today’s active developers learned to type. I’m using it for all text editing tasks, which I’m actually going to show in my talk at DWX [Lifelong Text Hackers Use Vim and Python]. A steep learning curve is not an argument against a tool you can grok during your entire career.

Data Science Blog: In your opinion: for all developers and data scientists who are used to Java, Scala, R or Perl, is Python easy to learn? Could it be too late for somebody to switch?

Python is a great general language that can be learned rapidly to a usable level. It’s different from the aforementioned languages. I remember my switch from Perl to Python over ten years ago with the book “Perl to Python Migration”, which forced me to change my way of thinking. From the question “Why do I have to import ‘re’ for regular expressions if Perl uses them natively?” to “Actually, I can solve this problem without regular expressions.”.
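
A small illustration of that last mindset shift: the task and both solutions below are invented for this article, but they show how plain string methods can replace a regular expression for simple parsing.

```python
# Hypothetical task: pull the status code out of log lines like
# "2018-06-26 10:42:01 GET /api/stations status=200 time=35ms"
import re

line = "2018-06-26 10:42:01 GET /api/stations status=200 time=35ms"

# The reflexive Perl-style answer: a regular expression.
status_re = int(re.search(r"status=(\d+)", line).group(1))

# The "actually, I don't need a regex" answer: plain string methods.
status_plain = int(line.split("status=")[1].split()[0])

assert status_re == status_plain == 200
```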

What makes a good Data Scientist? Answered by leading Data Officers!

What makes a good Data Scientist? A question I recently got asked a lot, by data science newbies as well as long-established CIOs, and my answer is probably not what you think:
In my opinion, a good Data Scientist is somebody with at least a good knowledge of computer programming and statistics, and the ability to understand the customer’s business. Above all stands a strong interest in finding value in distributed data sources.

Debatable? Maybe. That’s why I forwarded this question to five other leading Data Scientists and Chief Data Officers in Germany. Let’s have a look at their answers to this question and create your own idea of what a good Data Scientist might be:


Dr. Andreas Braun – Head of Global Data & Analytics @ Allianz SE

A data scientist connects thorough analytical and methodological understanding  with a technical hands-on/ engineering mentality.
Data scientists bridge between analytics, tech, and business. “New methods”, such as machine learning, AI, deep learning etc. are crucial and are continuously challenged and improved. (14 February 2017)


Dr. Helmut Linde – Head of Data Science @ SAP SE

The ideal data scientist is a thought leader who creates value from analytics, starting from a vision for improved business processes and an algorithmic concept, down to the technical realization in productive software. (09 February 2017)


Klaas Bollhoefer – Chief Data Scientist @ The unbelievable Machine Company

For me a data scientist thinks ahead, thinks about and thinks in-between. He/she is a motivated, open-minded, enthusiastic and unconventional problem solver and tinkerer. Being a team player and a lone wolf are two sides of the same coin and he/she definitely hates unicorns and nerd shirts. (27 March 2017)

 


Wolfgang Hauner – Chief Data Officer @ Munich Re

A data scientist is, from their very nature, interested in data and its underlying relationship and has the cognitive, methodical and technical skills to find these relationships, even in unstructured data. The essential prerequisites to achieve this are curiosity, a logical mind-set and a passion for learning, as well as an affinity for team interaction in the work place. (08 February 2017)

 


Dr. Florian Neukart – Principal Data Scientist @ Volkswagen Group of America

In my opinion, the most important trait is being driven by an irresistible urge to understand fundamental relations and things, whereby I count both an atom and a complex machine among “things”. People with this trait are usually persistent, can solve a new problem even with little practical experience, and autodidactically strive for the necessary training or appropriate quantitative knowledge. (08 February 2017)

Background idea:
That I am writing about atoms and complex machines has to do with the fact that I have been able to analyze the most varied data through my second job at the university, and that I am given a chance to make significant contributions to both machine learning and physics is primarily rooted in curiosity. Mathematics, physics, neuroscience, computer science, etc. are the fundamentals that someone will acquire if she wants to understand. In the beginning, there is only curiosity… I hope this is not too far out of the way, but I’ve done a lot of job interviews and worked with lots of smart people, and it has turned out that quantitative knowledge alone is not enough. If someone is not burning for understanding, she may be able to program a Convolutional Network from the ground up but will not come up with new ideas.

 


Interview – Using Decision Science to forecast customer behaviour

Interview with Dr. Eva-Marie Müller-Stüler from KPMG about how to use Decision Science to forecast customer behaviour

Dr. Eva-Marie Müller-Stüler is Chief Data Scientist and Associate Director in Decision Science at KPMG LLP in London. She graduated as a mathematician from the Technical University of Munich, with a year abroad in Tokyo, and completed her doctorate at the Philipps University of Marburg.


Read this article in German:
“Interview – Mit Data Science Kundenverhalten vorhersagen “

Data Science Blog: Dr. Müller-Stüler, which path led you to the top of analytics at KPMG?

I always enjoyed analytical questions, and have a great interest in people and finance. For me, understanding how people work and make decisions is incredibly exciting. In my Master’s and my PhD theses I had to analyse large amounts of data and had to program various algorithms. Now, combining a solid mathematical education with specific industry and business knowledge enables me to understand my clients’ businesses and to develop methods that disrupt the market and uncover new business strategies.

Data Science Blog: What kind of analytical solutions do you offer your clients? What benefits do you generate for them?

Our team focuses on Behaviour and Customer Science under a mantra and mission: “We understand human behaviour and we change it”. We look at all the data artefacts a person (for example, the customer or the employee) leaves behind and try to solve the question of how to change their behaviour or to predict future behaviour. With advanced analytics and data science we develop “always-on” forecasting models, which enable our clients to act in advance. This could be forecasting customer demand at a particular location, how it can be improved or influenced in the desired direction, or which kind of promotions work best for which customer. Also the challenge of predicting where, and with what product mix, a new store should be opened can be solved much more accurately with Predictive Analytics than by conventional methods.


Data Science Blog: What prerequisites must be fulfilled to ensure that predictive analyses work adequately for customer behaviour?

The data must, of course, have a certain quality and history to recognize trends and cycles. Often, however, one can also create an advantage by using additional new data sources. Experience and creativity are enormously important to understand what is possible and how to improve the quality of our work, or whether something only increases the noise.

Data Science Blog: What external data sources do you need to integrate? How do you handle unstructured data?

As far as external data sources are concerned, we are very spoiled here in England. We use about 10,000 different signals on average, which vary depending on the question. These might include signals that show the composition of the population, local traffic information, the proximity of sights, hospitals, schools, crime rates and many more. The influence of each signal is also different for each problem. So, a high number of pickpocketing incidents can be a positive sign of the vibrancy of an area, and that people carry a lot of cash on average. For a fast food retailer with a presence in the city centre, for example, this could have a positive influence on a decision to invest in a new outlet in the area; in another area, the opposite.

Data Science Blog: What possibilities does data science provide for forensics or fraud detection?

Every customer is surrounded by thousands of data signals and produces and transmits more through their behaviour. This enables us to get a pretty good picture of the person online. As every kind of person also has a certain behavioural pattern (and this also applies to fraudsters), it is possible to recognise or predict these patterns in time.

Data Science Blog: What tools do you use in your work? When do you rely on proprietary software or on open source?

This depends on what stage of the process we are in and the goal that has been defined. We split our team into different groups: our Data Wranglers (who are responsible for extracting, generating and processing the data) work with different tools than our Data Modellers. Basically, our tool kit covers the entire range of SQL Server, R and Python, but sometimes also Matlab or SAS. More and more, we are working with cloud-based solutions. Data visualization and dashboards in Qlik, Tableau or Alteryx are usually passed on to other teams.

Data Science Blog: What does your working day as a data scientist look like, from the morning coffee until the end of the evening?

My role is perhaps best described as the player’s coach. At the beginning of a project, it is primarily about working with the client to understand and develop the project. New ideas and methods have to be developed. During a project, I manage the teams and the knowledge transfer; the review and the questioning of the models are my main tasks. In the end, I do the final sign-off of the project. Since I often run several projects at different stages at the same time, it is guaranteed never to be boring.

Data Science Blog: In your experience, are good Data Scientists more likely to be consultant types or introverted nerds?

That depends on what one focuses on. A Data Visualizer or Data Artist reduces the information and visualises it in a clear and understandable way. This requires creativity, a good understanding of the business and a confident command of the tools.

The Data Analyst is more concerned with the “Slicing and Dicing” of data. The aim is to analyse the past and to recognize relationships. It is important to have good mathematical and statistical abilities in addition to the financial knowledge.

The Data Scientist is the most mathematical type. His job is to recognize deeper connections in the data and to make predictions. This involves the development of complicated models or machine learning algorithms. Without a good mathematical education and programming skills it is unfortunately not possible to understand the risk of potential errors in full depth. The danger of drawing wrong conclusions or misinterpreting correlations is very great. A simple example of this: in summer, when the weather is beautiful, more people eat ice cream and go swimming. Therefore, there is a strong correlation between ice cream consumption and the number of people who drown, although eating ice cream does not lead to drowning. The influencing variable is the temperature. To minimise the risk of wrong conclusions, I think it is important to have worked with and studied mathematics, data science, machine learning and statistics in depth – this usually means a PhD in a science-related subject.
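
The ice-cream example can be made concrete with a tiny simulation: both variables are driven by temperature, so they correlate strongly with each other, and the correlation largely disappears once the temperature effect is removed. The numbers below are simulated purely for illustration.

```python
# Tiny simulation of a spurious correlation driven by a confounder (temperature).
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(5, 35, size=1000)                      # daily temperature in °C
ice_cream_sales = 10 * temperature + rng.normal(0, 30, 1000)     # both depend on temperature
drownings = 0.2 * temperature + rng.normal(0, 1.5, 1000)

print(np.corrcoef(ice_cream_sales, drownings)[0, 1])             # strong correlation

# Remove the temperature effect from both variables (residuals of a linear fit) ...
ice_resid = ice_cream_sales - np.polyval(np.polyfit(temperature, ice_cream_sales, 1), temperature)
drown_resid = drownings - np.polyval(np.polyfit(temperature, drownings, 1), temperature)

# ... and the "ice cream causes drowning" correlation (mostly) vanishes.
print(np.corrcoef(ice_resid, drown_resid)[0, 1])                 # close to zero
```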

Beyond that, business and industry knowledge is also important for a Data Scientist. His solutions must be relevant to the client and solve their problems or improve their processes. The best AI machine does not give any bank a competitive advantage if it predicts the sale of ice cream based on the weather. This may be 100% correct, but has no relevance for the client.

It is quite similar to other areas (e.g., medicine) too. There are many different areas, but for serious problems it is best to ask a specialist so that you do not draw wrong conclusions.

Data Science Blog: For all students who will soon have finished their bachelor’s degree in computer science, mathematics or economics: what would you advise them on how to become good Data Scientists?

Never stop learning! The market is currently developing incredibly fast and has so many great areas to focus on. You should dive into it with passion, enthusiasm and creativity and have fun with the recognition of patterns and relationships. If you also surround yourself with interesting and inspiring people from whom you can learn more, I predict that you’ll do well.

This interview is also available in German: https://data-science-blog.com/de/blog/2016/11/10/interview-mit-advanced-analytics-kundenverhalten-verstehen/