Interview – There is no stand-alone strategy for AI, it must be part of the company-wide strategy

Ronny Fehling is Partner and Associate Director for Artificial Intelligence at the Boston Consulting Group GAMMA. With more than 20 years of continually progressive experience in leading business and technology innovation, spearheading digital transformation, and aligning corporate strategy with Artificial Intelligence, he helps industry-leading organizations grow their top line and kick-start their digital transformation.

Ronny Fehling is also a speaker at Predictive Analytics World for Industry 4.0 in May 2020.

Data Science Blog: Mr. Fehling, you advise companies and business leaders on AI and how to get started with it. The term AI is often used in misleading ways. How do you define AI?

This is a good question. I think there are two ways to answer this:

In technical definitions, I often see expressions like “simulation of human intelligence” and “acting like a human”. I find these terms more misleading than helpful. I studied AI back when it wasn’t yet “cool” and we were still in the middle of the AI winter. And yes, we have much more compute power and access to data, but we also think about data in a very different way. I typically distinguish between machine learning, which uses algorithms and statistical methods to identify patterns in data, and AI, which for me attempts to interpret the data in a given context. So machine learning can help me identify and analyze frequency patterns in text and even predict the next word I will type based on my history. AI will help me identify ‘what’ I’m writing about – even if I don’t explicitly name it. It can tell me that when I ask “I’m looking for a place to stay”, I might want to see a list of hotels around me. In other words: machine learning can detect correlations and similar patterns, while AI uses machine learning to generate insights.
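
To make the machine learning half of that distinction concrete, here is a minimal sketch of the kind of frequency-based pattern detection described above: counting bigrams in a typing history and suggesting the most frequent next word. The sample history is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy typing history; the sentences are invented for illustration.
history = [
    "i am looking for a place to stay",
    "i am looking for a restaurant",
    "a place to stay near the airport",
]

# Count how often each word follows another (simple bigram frequencies).
following = defaultdict(Counter)
for sentence in history:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word):
    """Suggest the historically most frequent successor of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("to"))       # -> 'stay'
print(predict_next("looking"))  # -> 'for'
```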

I always wondered why top executives so frequently ask about the definition of AI, because at first it did not seem that relevant to me for the discussion on how to align AI with their corporate strategy. However, I started to realize that their question is ultimately about “What is AI and what can it do for me?”.

For me, AI can do three things really well, which humans cannot really do and which previous approaches couldn’t cope with:

  1. Finding similar patterns in historical data. Imagine 20 years of data like the maintenance or repair documents of a manufacturing plant. Although they describe work done on a multitude of products due to a multitude of possible problems, AI can use this to look for a very similar situation based on a current problem description. This can be used to identify a common root cause as well as a common solution approach, saving valuable time for the operation (a minimal code sketch of this kind of similarity search follows after this list).
  2. Finding correlations across time or processes. This is often used in predictive maintenance use cases. Here, the AI tries to see which similar events typically happen some time before a failure occurs. This way, it can alert the operator much earlier about an impending failure, say due to a change in the vibration pattern of the machine.
  3. Finding an optimal solution path based on many constraints. There are many problems in the business world where choosing the optimal path in a complex situation is critical. Let’s say a severe weather warning at an airport suddenly forces an airline to change its scheduling because of reduced airport capacity. Delays for some aircraft can cause disruptions because passengers or personnel are no longer able to make their connections. Knowing which aircraft to delay, which to cancel and which to switch, while causing the minimal amount of disruption to passengers, crew, maintenance and ground crew, is something AI can help with.
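
As a rough illustration of the first capability, the following sketch ranks historical maintenance notes by textual similarity to a new problem description using TF-IDF and cosine similarity. The document snippets and the scikit-learn usage are illustrative assumptions, not the tooling described in the interview.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented historical repair notes standing in for 20 years of documents.
historical_notes = [
    "spindle bearing overheated, replaced bearing and lubricated shaft",
    "conveyor belt misaligned, adjusted tension rollers",
    "hydraulic pump pressure drop, replaced worn seal",
]
current_problem = "bearing running hot on main spindle"

# Vectorize all texts and rank historical cases by similarity to the new problem.
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(historical_notes + [current_problem])
similarities = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

best_match = similarities.argmax()
print(f"Most similar past case: {historical_notes[best_match]!r} "
      f"(score {similarities[best_match]:.2f})")
```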

The key now is to link these fundamental capabilities with the business context of the company and with how they can ultimately help transform the business.

Data Science Blog: Companies are still starting with their own company-wide data strategy. And now they are talking about AI strategies. Is that something which should be handled separately?

In my experience – based both on having seen the implementations of several corporate data strategies and on my background at Oracle – the data strategy and the AI strategy are co-dependent and cannot be separated. Very often I hear from clients that they think they first need to get their data in order before doing AI projects. And yes, without good data access, AI cannot really work. In fact, most of the time spent on AI is spent on processing, cleansing, understanding and contextualizing the data. However, you cannot really know what data will be needed in which form without knowing what you want to use it for. This is why strategies that handle data and AI separately mostly fail and generate huge costs.

Data Science Blog: What are the important steps for developing a good data strategy? Is there something like a general approach?

In my eyes, the AI strategy defines the data strategy step by step as more use cases are implemented. Rather than focusing too quickly on how to get all corporate data into a data lake, it is much more important to start building use-case, technology and data governance. This governance has to be established once the AI strategy starts to mature, in order to enable scale-up and productization. At the beginning, the task is to find the (very few) use cases that can serve as lighthouse projects to demonstrate (1) value impact, (2) a way to go from MVP to pilot, and (3) how to address the data challenge. This will then more naturally identify the elements of governance, data access and technology that are required.

Data Science Blog: What are the most common questions from business leaders to you regarding AI? Why do they hesitate to get started?

By far the most common question I get is: how do I get started? The hesitations often come from multiple sources, like: “We don’t have the talent in house to do AI”, “Our data is not good enough”, “We don’t know which use case to start with”, “It’s not easy for us to embrace an agile and failure culture because our products are mission critical”, “We don’t know how much value this can bring us”.

Data Science Blog: Most managers prefer to start small and with lower risk. They seem to postpone bigger ideas to a later stage, at least until some milestones have been reached. Is that a good idea or should they think bigger?

AI is often associated (rightfully so) with a new way of working – agile and embracing failure. Similarly, there is also the perception of significant cost in getting started with AI (talent, technology, data). These perceptions often lead managers to want to start with several smaller-ambition use cases where failure isn’t that grave. Once these have somehow proven themselves, they would then move on to bigger projects. The problem with this strategy is that, on the one hand, you fragment your few precious AI resources across too many projects, and at the same time you cannot really demonstrate an impact, since the projects weren’t chosen based on their impact potential.

The AI pioneers were typically successful by “thinking big, starting small and scaling fast”. You start by assessing the value potential of a use case, for example: my current OEE (Overall Equipment Efficiency) is at 65%. There is an addressable loss of 25%, which would grow my top line by $X. With the help of AI experts, you then create a hypothesis of how you think you can reduce that loss. This might be by choosing one specific piece of equipment and 50% of the addressable loss. This is now the measure against which you define your failure or non-failure criteria. Once you have proven an MVP that can address this loss, you scale up by piloting it in a real-life setting and then rolling it out to all the equipment. At every step of this process, you have a failure criterion that is measured by the impact value.
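
To make the value framing above concrete, here is a small worked example of that arithmetic. The output value and loss shares are invented placeholders; the interview deliberately leaves the actual revenue impact as “$X”.

```python
# Hypothetical figures for illustration only; the interview leaves the
# actual revenue impact as "$X".
annual_output_value = 100_000_000   # value of production at 100% OEE
current_oee = 0.65                  # Overall Equipment Efficiency today
addressable_loss = 0.25             # share of output lost to addressable causes

current_value = annual_output_value * current_oee
addressable_value = annual_output_value * addressable_loss

# MVP hypothesis: one piece of equipment, targeting 50% of the addressable loss.
mvp_target_share = 0.50
mvp_value_target = addressable_value * mvp_target_share

print(f"Value produced today:  {current_value:,.0f}")
print(f"Addressable loss:      {addressable_value:,.0f} per year")
print(f"MVP success threshold: {mvp_value_target:,.0f} per year")
```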


Virtual Edition, 11–12 May 2020 – the premier machine learning conference for Industry 4.0.

This year Predictive Analytics World for Industry 4.0 runs alongside Deep Learning World and Predictive Analytics World for Healthcare.

Interview – Predictive Maintenance and how it can unleash cost savings

Interview with Dr. Kai Goebel, Principal Scientist at PARC, a Xerox Company, about Predictive Maintenance and how it can unleash cost savings.

Dr. Kai Goebel is Principal Scientist at PARC with more than two decades of experience in corporate and government research organizations. He is responsible for leading applied research on state awareness, prognostics and decision-making using data analytics, AI, hybrid methods and physics-based methods. He has also fielded numerous applications for Predictive Maintenance at General Electric, NASA, and PARC for uses as diverse as rocket launchpads, jet engines, and chemical plants.

Data Science Blog: Mr. Goebel, predictive maintenance is not just hype, since industrial companies are already trying to establish this use case of predictive analytics. What benefits do they really expect from it?

Predictive Maintenance is a good example of how value can be realized from analytics. The result of the analytics drives decisions about when to schedule maintenance in advance of an event that might cause an unexpected shutdown of the process line. This is in contrast to an uninformed process where the decision is mostly reactive, that is, maintenance is scheduled because equipment has already failed. It is also in contrast to a time-based maintenance schedule. The benefits of Predictive Maintenance are immediately clear: one can avoid unexpected downtime, which can lead to substantial production loss. One can manage inventory better since lead times for equipment replacement can be managed well. One can also manage safety better since equipment health is understood and unsafe situations can potentially be avoided. Finally, maintenance operations will be inherently more efficient as they shift significant time from inspection to mitigation.

Data Science Blog: What are the most critical success factors for implementing predictive maintenance?

Critical for success is to get the trust of the operator. To that end, it is imperative to understand the limitations of the analytics approach and to not make false performance promises. Often, success factors for implementation hinge on understanding the underlying process and the fault modes reasonably well. It is important to be able to recognize the difference between operational changes and abnormal conditions. It is equally important to recognize rare events reliably while keeping false positives in check.

Data Science Blog: What kind of algorithm does predictive maintenance work with? Do you differentiate between approaches based on classical machine learning and those based on deep learning?

Well, there is no one kind of algorithm that works for Predictive Maintenance everywhere. Instead, one should look at the plurality of all algorithms as tools in a toolbox. Then analyze the problem – how many examples of run-to-failure trajectories are there; what is the desired lead time to report on a problem; what is the acceptable false positive/false negative rate; what are the different fault modes; etc. – and use the right kind of tool to do the job. Just because a particular approach (like the one you mentioned in your question) is all the hype right now does not mean it is the right tool for the problem. Sometimes, approaches from what you call “classical machine learning” actually work better. In fact, one should consider approaches even outside the machine learning domain, either as a stand-alone approach or in a hybrid configuration. One may also have to invent new methods, for example to perform online learning of the dynamic changes that a system undergoes through its (long) life. In the end, a customer does not care about what approach one is using, only whether it solves the problem.
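
To illustrate the “right tool for the job” point, here is a minimal sketch of a deliberately simple, non-deep-learning approach: flagging abnormal vibration readings with a rolling baseline and a z-score threshold. The signal is synthetic and the threshold arbitrary; it is one classical tool from the toolbox, not Dr. Goebel's method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic vibration signal: mostly stable, with a drift near the end
# that stands in for an emerging fault.
signal = np.concatenate([
    rng.normal(1.0, 0.05, 950),                             # healthy operation
    rng.normal(1.0, 0.05, 50) + np.linspace(0, 0.6, 50),    # emerging fault
])
vibration = pd.Series(signal)

# Classical approach: rolling baseline plus a z-score threshold.
window = 100
baseline = vibration.rolling(window).mean()
spread = vibration.rolling(window).std()
z_score = (vibration - baseline) / spread

alerts = z_score[z_score > 4]  # arbitrary threshold for illustration
print(f"First alert at sample {alerts.index[0]}" if len(alerts) else "No alert raised")
```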

Data Science Blog: There are several providers for predictive analytics software. Is it all about software tools? What makes the difference for having success?

Frequently, industrial partners lament that they have to spend a lot of effort teaching a new software provider about the underlying industrial processes as well as the equipment and its fault modes. Others are tired of false promises that any kind of data (as long as you have massive amounts of it) can produce any kind of performance. If one does not physically sense a certain modality, no algorithmic magic can take place. In other words, it is not just about the software. The difference for having success is understanding that there is no cookie-cutter approach. And that realization means that one may have to roll up the sleeves and install new instrumentation.

Data Science Blog: What are the coming trends? What do you think will be the main topics in 2020 and 2021?

Predictive Maintenance is slowly evolving towards Prescriptive Maintenance. Here, one does not only seek to inform about an impending problem, but also what to do about it. Such an approach needs to integrate with the logistics element of an organization to find an optimal decision that trades off several objectives with regards to equipment uptime, process quality, repair shop loading, procurement lead time, maintainer availability, safety constraints, contractual obligations, etc.

Looking for the ‘aha moment’: An expert’s insights on process mining

Henny Selig is a specialist in process mining, with significant expertise in the implementation of process mining solutions and supporting customers with process analysis. As a Solution Owner at Signavio, Henny is also well versed in bringing Signavio Process Intelligence online for businesses of all shapes and sizes. In this interview, Henny shares her thoughts about the challenges and opportunities of process mining. 


Read this interview in German:

Im Interview mit Henny Selig zu Process Mining: “Für den Kunden sind solche Aha-Momente toll“

 


Henny, could you give a simple explanation of the concept of process mining?

Basically, process mining is a combination of data analysis and business process management. IT systems support almost every business process, meaning they leave behind digital traces. We extract all the data from the IT systems connected to a particular process, then visualize and evaluate it with the help of data science technology.
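
As a rough sketch of what evaluating those digital traces can look like technically: grouping an event log by case and counting the process variants that actually occurred. The event log below is invented and has nothing to do with Signavio Process Intelligence specifically.

```python
import pandas as pd

# Invented event log: one row per executed activity (digital trace).
event_log = pd.DataFrame({
    "case_id":  ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "activity": ["Create order", "Approve", "Ship",
                 "Create order", "Approve", "Change order", "Ship",
                 "Create order", "Ship", "Approve"],
    "timestamp": pd.to_datetime([
        "2020-01-01 09:00", "2020-01-01 10:00", "2020-01-02 08:00",
        "2020-01-03 09:00", "2020-01-03 11:00", "2020-01-04 09:00", "2020-01-05 08:00",
        "2020-01-06 09:00", "2020-01-07 08:00", "2020-01-07 09:00",
    ]),
})

# Reconstruct each case's path through the process, ordered by time,
# then count how often each variant occurs.
variants = (
    event_log.sort_values("timestamp")
             .groupby("case_id")["activity"]
             .apply(lambda acts: " -> ".join(acts))
             .value_counts()
)
print(variants)  # e.g. case C ships before it is approved
```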

In short, process mining builds a bridge between employees, process experts and management, allowing for a data-driven and fact-based approach to business process optimization. This helps avoid thinking in siloes, as well as enabling transparent design of handovers and process steps that cross departmental boundaries within an organization.

When a business starts to analyze their process data, what sorts of questions do they ask? Do they at least have some expectation of what process mining can offer?

That’s a really good question! There isn’t really a single good answer to it, as it is different for different companies. For example, there was one procurement manager, and we were presenting the complete data set to him, and it turned out there was an approval at one point, but it should have been at another. He was really surprised, but we weren’t, because we sat outside the process itself and were able to take a broader view. 

We also had different questions that the company hadn’t considered, things like what was the process flow if an order amount is below 1000 euros, and how often that occurs—just questions that seem clear to an outsider but often do not occur to process owners.

So do people typically just have an idea that something is wrong, or do they generally understand there is a specific problem in one area, and they want to dive deeper? 

There are those people who know that a process is running well, but they know a particular problem pops up repeatedly. Usually, even if people say they don’t have a particular focus or question, most of them actually do because they know their area. They already have some assumptions and ideas, but it is sometimes so deep in their mind they can’t actually articulate it.

Often, if you ask people directly how they do things, it can put pressure on them, even if that’s not the intention. If this happens, people may hide things without meaning to, because they already have a feeling that the process or workflow they are describing is not perfect, and they want to avoid blame. 

The approvals example I mentioned above is my favorite because it is so simple. We had a team who all said, over and over, “We don’t approve this type of request.” However, the data said they did–the team didn’t even know. 

We then talked to the manager, who was interested in totally different ideas, like all these risks, approvals, are they happening, how many times this, how many times that — the process flow in general. Just by having this conversation, we were able to remove the mismatch between management and the team, and that is before we even optimized the actual process itself. 

So are there other common issues or mismatches that people should be aware of when beginning their process mining initiative?

The one I often return to is that not every variation that is out of line with the target model is necessarily negative. Very few processes, apart from those that run entirely automatically, actually conform 100% to the intended process model—even when the environment is ideal. For this reason, there will always be exceptions requiring a different approach. This is the challenge in projects: finding out which variations are desirable, and where to make necessary exceptions.

So would you say that data-based process analysis is a team effort?

Absolutely! In every phase of a process mining project, all sorts of project members are included. IT makes the data available and helps with the interpretation of the data. Analysts then carry out the analysis and discuss the anomalies they find with IT, the process owners, and experts from the respective departments. Sometimes there are good reasons to explain why a process is behaving differently than expected. 

In this discussion, it is incredibly helpful to document the thought process of the team with technical means, such as Signavio Process Intelligence. In this way, it is possible to break down the analysis into individual processes and to bring the right person into the discussion at the right point without losing the thread of the discussion. The next colleague who picks up the topic can then see the thread of the analysis and properly classify the results.

At the very least, we can provide some starting points. Helping people reach an “aha moment” is one of the best parts of my job!

To find out more about how process mining can help you understand and optimize your business processes, visit the Signavio Process Intelligence product page. If you would like to get a group effort started in your organization right now, why not sign up for a free 30-day trial with Signavio, today.

The Future of AI in Dental Technology

As we develop more advanced technology, we are learning that artificial intelligence can have a growing impact on our lives and on industries that we have gotten used to staying the same over the past decades. One of those industries is dentistry. Dental technology has not changed much within most people’s lifetimes, but the boom in artificial intelligence has opened the door for AI in dental technologies.

How Can AI Help?

Though dentists take a lot of pride in their craft and career, most acknowledge that AI can do some things they can’t do, or that their job would be easier if they didn’t have to do them. AI can perform a number of both simple and advanced tasks. Let’s take a look at some areas where many in the dental industry feel that AI can be of assistance.

Repetitive, Menial Tasks

The most obvious area where AI can help out in dentistry is with repetitive, menial tasks. There are many administrative tasks in the dental industry that can be sped up and made more cost-effective with the use of AI. If we can train a computer to do some of these tasks, we may be able to free up more time for our dentists to focus on more important matters and improve their job performance as well. One primary use of AI is virtual consultations, which offices like Philly Braces are offering. This saves patients time when they come in, as the doctor already knows what the next steps in their treatment will be.

Using AI to do some basic computer tasks is already being done on a small scale by some, but we have yet to see a very large scale implementation of this technology. We would expect that to happen soon, with how promising and cost-effective the technology has proven to be.

Reducing Misdiagnosis

One area where many think AI can help a lot is misdiagnosis. Though dentists do their best, there is still a nearly 20% misdiagnosis rate when reading x-rays in dentistry. We like to think that a human can read an x-ray better, but this may not be the case. AI can certainly be trained to read an x-ray, and there have been some trials suggesting that it can do so better and identify key conditions that we often misread.

A world with AI diagnosis that is accurate and quicker will save time, money, and lead to better dental health among patients. It hasn’t yet come to fruition, but this seems to be the next major step for AI in dentistry.

Artificial Intelligence Assistants

Once it has been demonstrated that AI can perform a range of tasks that are useful to dentists, the next logical step is to combine those skills to make a fully-functional AI dental assistant. A machine like this has not yet been developed, but we can imagine that it would be an interface that could be spoken to similar to Alexa. The dentist would request vital information and other health history data from a patient or set of patients to assist in the treatment process. This would undoubtedly be a huge step forward and bring a lot of computing power into the average dentist office.

Conclusion

It’s clear that AI has a bright future in the dental industry and has already shown some of the essential skills that it can help with in order to provide more comprehensive and accurate care to dental patients. Some offices like Westwood Orthodontics already use AI in the form of a virtual consult to diagnose issues and provide treatment options before patients actually step foot in the office. Though not nearly all applications that AI can provide have been explored, we are well on our way to discovering the vast benefits of artificial intelligence for both patients and practices in the dental healthcare industry.


Lisa Gao, DDS, MS | Westwood Orthodontics 1033 Gayley Ave #106, Los Angeles California 90024, 310-870-1823

Interview – Customer Data Platform, more than CRM 2.0?

Interview with David M. Raab from the CDP Institute

David M. Raab is a consultant specializing in marketing software and service vendor selection, marketing analytics and marketing technology assessment. Furthermore, he is the founder of the Customer Data Platform Institute, a vendor-neutral educational project to help marketers build a unified customer view that is available to all of their company systems.

Furthermore, he is a keynote speaker at the Predictive Analytics World event 2019 in Berlin.

Data Science Blog: Mr. Raab, what exactly is a Customer Data Platform (CDP)? And where is the need for it?

The CDP Institute defines a Customer Data Platform as „packaged software that builds a unified, persistent customer database that is accessible by other systems“.  In plainer language, a CDP assembles customer data from all sources, combines it into customer profiles, and makes the profiles available for any use.  It’s important because customer data is collected in so many different systems today and must be unified to give customers the experience they expect.
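
As a purely illustrative sketch of “assembling customer data from all sources and combining it into profiles”: keying records from different systems on a shared identifier and merging them. A real CDP does far more (identity resolution across devices, persistence, APIs); the data and column names below are invented.

```python
import pandas as pd

# Invented records from two source systems.
crm = pd.DataFrame({
    "email": ["anna@example.com", "ben@example.com"],
    "phone": ["+49 30 1234", "+49 89 5678"],
})
web_orders = pd.DataFrame({
    "email": ["anna@example.com", "anna@example.com", "ben@example.com"],
    "order_value": [120.0, 80.0, 45.0],
})

# Aggregate behaviour per customer, then attach it to the CRM record to
# form a single persistent profile that other systems could consume.
order_summary = (web_orders.groupby("email")
                           .agg(total_spend=("order_value", "sum"),
                                orders=("order_value", "count"))
                           .reset_index())
profiles = crm.merge(order_summary, on="email", how="left")
print(profiles)
```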

Data Science Blog: Is it something like a CRM System 2.0? What Use Cases can be realized by a Customer Data Platform?

CRM systems are used to interact directly with customers, usually by telephone or in the field.  They work almost exclusively with data that is entered during those interactions.  This gives a very limited view of the customer since interactions through other channels such as order processing or Web sites are not included.  In fact, one common use case for CDP is to give CRM users a view of all customer interactions, typically by opening a window into the CDP database without needing to import the data into the CRM.  There are many other use cases for unified data, including customer segmentation, journey analysis, and personalization.  Anything that requires sharing data across different systems is a CDP use case.

Data Science Blog: When does a CDP make sense for a company? It is more relevant for retail and financial companies than for industrial companies, isn’t it?

CDP has been adopted most widely in retail and online media, where each customer has many interactions and there are many products to choose from.  This is a combination that can make good use of predictive modeling, which benefits greatly from having more complete data.  Financial services was slower to adopt, probably because they have fewer products but also because they already had pretty good customer data systems.  B2B has also been slow to adopt because so much of their customer relationship is handled by sales people.  We’ve more recently been seeing growth in additional sectors such as travel, healthcare, and education.  Those involve fewer transactions than retail but also rely on building strong customer relationships based on good data.

Data Science Blog: There are several providers for CDPs. Adobe, Tealium, Emarsys or Dynamic Yield, just to name some of them. Do they differ a lot between each other?

Yes they do.  All CDPs build the customer profiles I mentioned.  But some do more things, such as predictive modeling, message selection, and, increasingly, message delivery.  Of course they also vary in the industries they specialize in, regions they support, size of clients they work with, and many technical details.  This makes it hard to buy a CDP but also means buyers are more likely to find a system that fits their needs.

Data Science Blog: How established is the concept of the CDP in Europe in general? And how in comparison with the United States?

CDP is becoming more familiar in Europe but is not as well understood as in the U.S.  The European market spent a lot of money on Data Management Platforms (DMPs) which promised to do much of what a CDP does but were not able to because they do not store the level of detail that a CDP does.  Many DMPs also don’t work with personally identifiable data because the DMPs primarily support Web advertising, where many customers are anonymous.  The failures of DMPs have harmed CDPs because they have made buyers skeptical that any system can meet their needs, having already failed once.  But we are overcoming this as the market becomes better educated and more success stories are available.  What’s the same in Europe and the U.S. is that marketers face the same needs.  This will push European marketers towards CDPs as the best solution in many cases.

Data Science Blog: What are coming trends? What will be the main topic 2020?

We see many CDPs with broader functions for marketing execution: campaign management, personalization, and message delivery in particular.  This is because marketers would like to buy as few systems as possible, so they want broader scope in each system.  We’re seeing expansion into new industries such as financial services, travel, telecommunications, healthcare, and education.  Perhaps most interesting will be the entry of Adobe, Salesforce, and Oracle, who have all promised CDP products late this year or early next year.  That will encourage many more people to consider buying CDPs.  We expect the market to expand quite rapidly, so current CDP vendors will be able to grow even as Adobe, Salesforce, and Oracle make new CDP sales.


You want to get in touch with David M. Raab and understand more about the concept of a CDP? Meet him at Predictive Analytics World on 18 and 19 November 2019 in Berlin, Germany. As a keynote speaker, he will introduce the concept of a Customer Data Platform in the light of Predictive Analytics. Click here to see the agenda of the event.

 


 

Interview – Knowledge Graphs and Semantic Technologies

“It’s incredibly empowering when data that is clear and understood – what we call ‘beautiful data’ – is available to the data workforce.”

Juan F. Sequeda is co-founder of Capsenta, a spin-off from his research, and Senior Director of Capsenta Labs. He is an expert on knowledge graphs, semantic web, semantic & graph data management and (ontology-based) data integration. In this interview Juan lets us know how SMEs can create value from data, what makes the Knowledge Graph so important and why CDOs and CIOs should use semantic technologies.

Data Science Blog: If you had to name five things that apply to SMEs as well as enterprises as they are on their journey through digital transformation: What are the most important steps to take in order to create value from data?

I would state four things:

  1. Focus on the business problem that needs to be solved instead of the technology.
  2. Getting value out of your data is a social-technical problem. Not everything can be solved by technology and automation. It is crucial to understand the social/human aspect of the problems.
  3. Avoid boiling the ocean. Be agile and iterate.
  4. Recall that it’s a marathon, not a sprint. Hence why you shouldn’t focus on boiling the ocean.

Data Science Blog: You help companies to make their company data meaningful and thus increase its value. The magic word is the knowledge graph. What exactly is a Knowledge Graph?

Let’s recall that the term “knowledge graph”, which is actively used today, was coined by Google in a 2012 blog post. From an industry point of view, it’s a term that represents data integration, where not just entities but also relationships are first-class citizens. In other words, it’s data integration based on graphs. That is why you see graph database companies use the term knowledge graph instead of data integration.
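
As a toy illustration of “relationships as first-class citizens”, here is a minimal sketch using the rdflib library: both the entities and the relationships between them are explicit, queryable triples. The entities and the tiny SPARQL query are invented examples, unrelated to Capsenta or Gra.fo.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# Entities and the relationships between them are all first-class triples.
g.add((EX.acme, RDF.type, EX.Customer))
g.add((EX.order42, RDF.type, EX.Order))
g.add((EX.order42, EX.placedBy, EX.acme))
g.add((EX.acme, EX.hasName, Literal("ACME GmbH")))

# Query along the relationship: which customers placed which orders?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name ?order WHERE {
        ?order ex:placedBy ?customer .
        ?customer ex:hasName ?name .
    }
""")
for name, order in results:
    print(name, order)
```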

In the academic circle, there is a “debate” on what the term “knowledge graph” means. As academics, it’s clear that we should always strive to have well defined terms. Nevertheless, I find it ironic that academics are spending time debating on the definition of a term that appeared in a (marketing) blog post 7 years ago! I agree with Simeon Warner on this: “I care about putting more knowledge in my graph, instead of defining what is a knowledge graph”.

Whatever definition prevails, it should be open and inclusive.

On a final note, it is paramount that we remember our history in order to avoid reinventing the wheel. There is over half a century of research results that has led us to what we are calling Knowledge Graphs today. If you are interested, please check out our upcoming ISWC 2019 tutorial “Knowledge Graphs: How did we get here? A Half Day Tutorial on the History of Knowledge Graph’s Main Ideas“.

Data Science Blog: Speaking of Knowledge Graphs: According to SEMANTiCS 2019 Research and Innovation Chair Philippe Cudre-Mauroux the next generation of knowledge graphs will capture more detailed information. Towards which directions are you steering with gra.fo?

Gra.fo is a collaborative modeling tool for knowledge graph schemas (i.e. ontologies), combined with Google Docs-style features such as real-time collaboration, comments, history and search.

Designing a knowledge graph schema is just the first step. You have to do something with it! The next step is to map the knowledge graph schema to underlying data sources in order to integrate data.

We are driving Gra.fo to also be a mapping management system. We recently released our first mapping features. You now have the ability to import existing R2RML mappings. The next step will be to create the mappings between relational databases and the schema all within Gra.fo. Furthermore, we will extend it to support mappings from different types of sources.

Finally, there are so many features that our users are requesting. We are working on those and will also offer an API in order to empower users to develop their own apps and features.

Data Science Blog: At Capsenta, you are changing the way enterprises model, govern and integrate data. Put in brief, how can you explain the benefits of using semantic technologies and knowledge technologies to a CDO or CIO? Which clients could you serve and how did you help them?

Business users need to answer critical business questions quickly and accurately. However, the frequent bottleneck is the lack of understanding of the large and complex enterprise databases. Additionally, the IT experts who do understand are not always available. The ultimate goal is to empower business users to access the data in the way they think of their domain.

This is where Knowledge Graphs come into play.

At Capsenta, we use our Knowledge Graph technology to bridge this conceptualization gap between the complex and inscrutable data sources and the business intelligence and data analytic tools that domain experts use to answer critical business questions. Our goal is to deliver beautiful data so the business users and data scientists can run with the data.

We are helping large scale enterprises in e-commerce, oil & gas and life science industries to generate beautiful data.

Data Science Blog: What are reasons for which Knowledge Graphs should be part of any corporate strategy?

Graphs are very easy for people to understand and express the complex relationships between concepts. Bubbles and lines between them (i.e. a graph!) is what domain experts draw on the whiteboard all the time. We have even had C-level executives look at a Knowledge Graph and immediately see how it expresses a portion of their business and even offer suggestions for additional richness. Imagine that, C-level executives participating in an ontology engineering session because they understand the graph.

This is in sharp contrast to the data itself, which is almost always very difficult to understand and overwhelming in scope. Critical business value is available in a subset of this data. A Knowledge Graph bridges the conceptual gap between a critical portion of the inscrutable data itself and the business user’s view of their world.

It’s incredibly empowering when data that is clear and understood – what we call “beautiful data” – is available to the data workforce.

Data Science Blog: Data-driven process analyses require interdisciplinary knowledge. What advice would you give to a process manager who wants to familiarize her-/himself with the topic?

Domain experts/business users frequently use multiple words/phrases to mean the same thing and also a specific phrase can mean different things to different people. Also, the domain experts/business users speak a very different language than the IT database owners.

How can the business have clear, accurate answers when there’s inconsistency in what people mean and are thinking?

This is the social problem of getting everyone on the same page. We’ve seen Knowledge Graphs dramatically help with this problem. The exercise of getting people to agree upon what they mean and encoding it in an intuitive Knowledge Graph is very powerful.

The Knowledge Graph also brings the IT stakeholders into the process by clarifying exactly which data – or, typically, which complex calculations over the data – constitute the actual, accurate value for each and every business concept and relationship expressed in the Knowledge Graph.

It is crucial to avoid boiling the ocean. That is why we have designed a pay-as-you-go methodology to start small and provide value as quickly and accurately as possible. Ideally, the team has available what we call a “Knowledge Engineer”. This is someone who can effectively speak with the business users/domain experts and also nerd out with the database folks.

About SEMANTiCS Conference

SEMANTiCS is an established knowledge hub where technology professionals, industry experts, researchers and decision makers can learn about new technologies, innovations and enterprise implementations in the fields of Linked Data and Semantic AI. Founded in 2005, SEMANTiCS is the only European conference at the intersection of research and industry.

This year’s event is hosted by the Semantic Web Company, FIZ Karlsruhe – Leibniz Institute for Information Infrastructure GmbH, Fachhochschule St. Pölten Forschungs GmbH, KILT Competence Center am Institut für Angewandte Informatik e.V. and Vrije Universiteit Amsterdam.

Interview: Does Business Intelligence benefit from Cloud Data Warehousing?

Interview with Ross Perez, Senior Director, Marketing EMEA at Snowflake

Read this article in German:
“Profitiert Business Intelligence vom Data Warehouse in der Cloud?”


Ross Perez is the Senior Director, Marketing EMEA at Snowflake. He leads the Snowflake marketing team in EMEA and is charged with starting the discussion about analytics, data, and cloud data warehousing across EMEA. Before Snowflake, Ross was a product marketer at Tableau Software where he founded the Iron Viz Championship, the world’s largest and longest running data visualization competition.

Data Science Blog: Ross, Business Intelligence (BI) is not really a new trend. In 2019/2020, making data available for the whole company should not be a big thing anymore. Would you agree?

BI is definitely an old trend, reporting has been around for 50 years. People are accustomed to seeing statistics and data for the company at large, and even their business units. However, using BI to deliver analytics to everyone in the organization and encouraging them to make decisions based on data for their specific area is relatively new. In a lot of the companies Snowflake works with, there is a huge new group of people who have recently received access to self-service BI and visualization tools like Tableau, Looker and Sigma, and they are just starting to find answers to their questions.

Data Science Blog: Up until today, BI was just about delivering dashboards for reporting to the business. The data warehouse (DWH) was something like the backend. Today we have increased demand for data transparency. How should companies deal with this demand?

Because more people in more departments are wanting access to data more frequently, the demand on backend systems like the data warehouse is skyrocketing. In many cases, companies have data warehouses that weren’t built to cope with this concurrent demand and that means that the experience is slow. End users have to wait a long time for their reports. That is where Snowflake comes in: since we can use the power of the cloud to spin up resources on demand, we can serve any number of concurrent users. Snowflake can also house unlimited amounts of data, of both structured and semi-structured formats.

Data Science Blog: Would you say the DWH is the key driver for becoming a data-driven organization? What else should be considered here?

Absolutely. Without having all of your data in a single, highly elastic, and flexible data warehouse, it can be a huge challenge to actually deliver insight to people in the organization.

Data Science Blog: So much for the theory, now let’s talk about specific use cases. In general, it matters a lot whether you are storing and analyzing e.g. financial data or machine data. What do we have to consider for both purposes?

Financial data and machine data do look very different, and often come in different formats. For instance, financial data is often in a standard relational format. Data like this needs to be easily queried with standard SQL, something that many Hadoop and NoSQL tools were unable to provide. Luckily, Snowflake is an ANSI-standard SQL data warehouse, so it can be used with this type of data quite seamlessly.

On the other hand, machine data is often semi-structured or even completely unstructured. This type of data is becoming significantly more common with the rise of IoT, but traditional data warehouses were very bad at dealing with it since they were optimized for relational data. Semi-structured data like JSON, Avro, XML, ORC and Parquet can be loaded into Snowflake for analysis quite seamlessly in its native format. This is important, because you don’t want to have to flatten the data to get any use from it.

Both types of data are important, and Snowflake is really the first data warehouse that can work with them both seamlessly.

Data Science Blog: Back to the common business use case: Creating sales or purchase reports for the business managers, based on data from ERP-systems such as Microsoft or SAP. Which architecture for the DWH could be the right one? How many and which database layers do you see as necessary?

The type of report largely does not matter, because in all cases you want a data warehouse that can support all of your data and serve all of your users. Ideally, you also want to be able to turn it off and on depending on demand. That means that you need a cloud-based architecture… and specifically Snowflake’s innovative architecture that separates storage and compute, making it possible to pay for exactly what you use.

Data Science Blog: Where would you implement the main part of the business logic for the report? In the DWH or in the reporting tool? Does it matter which reporting tool we choose?

The great thing is that you can choose either. Snowflake, as an ANSI-standard SQL data warehouse, can support a high degree of data modeling and business logic. But you can also utilize partners like Looker and Sigma, who specialize in data modeling for BI. We think it’s best that the customer chooses what is right for them.

Data Science Blog: Snowflake enables organizations to store and manage their data in the cloud. Does it mean companies lose control over their storage and data management?

Customers have complete control over their data, and in fact Snowflake cannot see, alter or change any aspect of their data. The benefit of a cloud solution is that customers don’t have to manage the infrastructure or the tuning – they decide how they want to store and analyze their data and Snowflake takes care of the rest.

Data Science Blog: How big is the effort for smaller and medium sized companies to set up a DWH in the cloud? Does this have to be an expensive long-term project in every case?

The nice thing about Snowflake is that you can get started with a free trial in a few minutes. Now, moving from a traditional data warehouse to Snowflake can take some time, depending on the legacy technology that you are using. But Snowflake itself is quite easy to set up and very much compatible with historical tools making it relatively easy to move over.

Interview – The Importance of Machine Learning for the Data Driven Business

To become more data-driven, organizations must mature their analytics and automate more of their decision-making processes for innovation and differentiation. Data science seems like the right approach, yet it is a new and fast-moving field that seems to have as many dead ends as it has highways to value. Cloudera Fast Forward Labs, led by Hilary Mason, shows companies the way.

Alice Albrecht is a research engineer at Cloudera Fast Forward Labs.  She spends her days researching the latest and greatest in machine learning and artificial intelligence and bringing that knowledge to working prototypes and delivering concrete advice for clients.  Prior to joining Fast Forward Labs, Alice worked in both finance and technology companies as a practicing data scientist, data science leader, and – most recently – a data product manager.  In addition to teaching machines to do cool things, Alice is passionate about mentoring and helping others grow in their careers.  Alice holds a PhD from Yale in cognitive neuroscience where she studied how humans summarize sensory information from the world around them and the neural substrates that underlie those summaries.

Read this article in German:
“Interview – Die Bedeutung von Machine Learning für das Data Driven Business“

Data Science Blog: Ms. Albrecht, you are a well-known keynote speaker on data science and artificial intelligence. While data science has already arrived in business, deep learning seems to be the new trend. Is artificial intelligence for business already normal business or is it an overrated hype?

I’d say it isn’t either of those two options.  Data science is now widely adopted, but companies still struggle to integrate this new discipline into their existing businesses.  As for deep learning, it really depends on the company that’s looking into using this technique.  I wouldn’t say that deep learning is by any means part of business as usual – nor should it be.  It’s a tool like any other, and building a capacity for using a tool without clearly defined business needs is a recipe for disaster.

Data Science Blog: Just to make sure what we are talking about: What are the differences and overlaps between data analytics, data science, machine learning, deep learning and artificial intelligence?

Here at Cloudera Fast Forward Labs, we like to think of data analytics as collecting data and counting things (mostly for quick charts and reports).  Data science solves business problems by counting cleverly and predicting things with the data that’s collected.  Machine learning is about solving problems with new kinds of feedback loops that improve with more data.  Deep learning is a particular type of machine learning and is not itself a separate concept or type of tool.  Artificial intelligence taps into something more complicated than what we’re seeing today – it’s much broader than training machines to repetitively do very specialized tasks or solve very narrow problems.

Data Science Blog: And how can we add the context to big data?

From a theoretical perspective, data science has been around for decades. The building blocks of modern-day machine learning, deep learning and artificial intelligence are based on mathematical theorems that go back to the 1940s and 1950s. The challenge was that at the time, compute power and data storage capacity were simply too expensive for the approaches to be implemented. Today that has all changed. Not only has the cost of data storage dropped considerably, open source technology like Apache Hadoop has made it possible to store any volume of data at costs approaching zero. Compute power, even highly specialised chip architectures, is now also available on demand and only for the time organisations need it, through public and private cloud solutions. The decreased cost of both data storage and compute power, together with a growing list of tools and resources readily available via the open source community, allows companies of any size to benefit from data (no matter the size of that data).

Data Science Blog: What are the challenges for organizations in getting started with data science?

I see two big challenges when getting started with data science.  One is ensuring that you have organizational alignment around exactly what type of work data scientists will deliver (and timing for those projects).  The second hurdle is around ensuring that you have the right data in place before you start hiring data scientists. This can be tricky if you don’t have in-house expertise in this area, so sometimes it’s better to hire a data engineer or a data strategist (or director of data science) before you ever get started building out a data science team.

Data Science Blog: There are many discussions about how to build a data-driven business. Is it just about using data science to get a better understanding of customer behavior?

No, being data driven doesn’t just mean better understanding your customers (though that is one way that data science can help in an organization).  Aside from building an organization that relies on data and analytics to help them make decisions (about customer behavior or otherwise), being a data-driven business means that data is powering your core products.

Data Science Blog: The number of technologies, tools and frameworks is increasing. For organizations this also means increasing complexity. Do companies need to stay always up-to-date or could it be an advice to wait and imitate pioneers later?

While it’s not critical (or advisable) for organizations to adopt every new advancement that comes along, it is critical for them to stay abreast of emerging frameworks.  If a business waits to see what others are doing, and therefore doesn’t invest in understanding how new advancements can affect its particular business, it has likely already missed the boat.

Data Science Blog: Global players have big budgets just for doing research and setting up data labs. Middle-sized companies need to see the break even point soon. How can we accelerate the value generation of data science?

Having a team that is highly focused on a specific set of projects that are well-scoped and aligned to the business makes all the difference.  Data science and machine learning don’t have to sacrifice doing research and being innovative in order to produce value.  The biggest difference is that smaller teams will have to be more aware of how their choice of project fits into emerging frameworks and their particular acute and near term business needs.

Data Science Blog: How does Cloudera Fast Forward Labs help other organizations to accelerate their start with machine learning?

We advise organizations, based on their particular needs, on what the latest advancements are in machine learning and data science, how to build and structure their data teams to develop the capabilities they need to meet their goals, and how to quickly implement custom forward-looking solutions using their own data and in-house expertise.

Data Science Blog: Finally, a question for our younger readers who are looking for a career as a data expert: What makes a good data scientist? Do you like to work with introverted coding nerds or the data loving business experts?

A good data scientist should be deeply curious and have a love for the ways in which data can lead to new discoveries and power the next generation of products.  We expect the people who thrive in this field to come from a variety of backgrounds and experiences.

Interview – Python as productive data science environment

Miroslav Šedivý is a Senior Software Architect at UBIMET GmbH, using Python to make the sun shine and the wind blow. He is an enthusiast of both human and programming languages and found Python as his language of choice to setup very productive environments. Mr. Šedivý was born in Czechoslovakia, studied in France and is now living in Germany. Furthermore, he helps in the organization of the events PyCon.DE and Polyglot Gathering.


On 26th June 2018 he will explain at the Python@DWX conference why “Lifelong Text Hackers Use Vim and Python”. Insert the promotion code PY18science to unlock your 10% discount on all tickets. More info and tickets on python-con.com.


Data Science Blog: Mr. Šedivý, how did you find the way to Python as your favorite programming language?

Apart from traditional languages taught at school (Basic, Pascal, C, Java), some twenty years ago I learned Perl to hack a dynamic web site and used it to automate my daily tasks. Later I used it professionally for scientific calculations in the production. This was later replaced by Python, its newer versions and more advanced libraries. Nowadays Python has almost completely replaced Perl as my principal language and I use Perl just to hack some command line filters and to impress colleagues.

Data Science Blog: Python is one of the most popular programming languages for data scientists. This is remarkable, as it was originally not designed for doing data science. What made it a competitor to languages like R or Julia?

Python is the most powerful programming language that is still legible. This appeals to data scientists who can enter each line interactively, and immediately see what happens, because each line actually does something. They can inspect their data easily and build automating systems to process their data transparently.

Data Science Blog: Is there anything you could do better with another programming language?

Sometimes I’m playing with some functional languages that would allow me to write code that is easier to test and parallelize.

Data Science Blog: Which libraries are the most important ones for your daily business?

The whole Pandas ecosystem with Numpy and Scipy. Matplotlib for plots, PyTables and Psycopg2 for storage. I’m also importing a few async libs for webservices and similar network-based software.

I also enjoy discovering the world of Unicode and Timezones – both of them are the spots where the programmers absolutely have to obey the chaotic reality of the outside world.
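
A tiny example of the kind of everyday timezone gotcha alluded to here, using pandas; the dates are arbitrary and chosen only because they straddle the 2018 European daylight-saving switch.

```python
import pandas as pd

# Two wall-clock times in Berlin, one before and one after the
# daylight-saving switch on 25 March 2018.
before = pd.Timestamp("2018-03-24 12:00", tz="Europe/Berlin")
after = pd.Timestamp("2018-03-26 12:00", tz="Europe/Berlin")

# Same local hour, different UTC offsets: the outside world is messy.
print(before.tz_convert("UTC"))  # 2018-03-24 11:00:00+00:00 (CET, UTC+1)
print(after.tz_convert("UTC"))   # 2018-03-26 10:00:00+00:00 (CEST, UTC+2)

# Arithmetic on tz-aware timestamps stays correct across the transition.
print(after - before)            # 1 days 23:00:00
```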

Data Science Blog: Which editor do you use? And how to set it up as a productive environment?

I tried several editors and IDEs, but always came back to Vi or Vim. This is an extremely powerful editor that has been around for over forty years, which was probably before most of today’s active developers learned to type. I’m using it for all text editing tasks, which I’m actually going to show in my talk at DWX [Lifelong Text Hackers Use Vim and Python]. A steep learning curve is not an argument against a tool you can grok over your entire career.

Data Science Blog: In your opinion: for all developers and data scientists who are used to Java, Scala, R or Perl, is Python easy to learn? Could it be too late for somebody to switch?

Python is a great general language that can be learned rapidly to a usable level. It’s different from the aforementioned languages. I remember my switching process from Perl to Python over ten years ago with a book “Perl to Python Migration”, which forced me to switch my way of thinking. From the question “Why do I have to import ‘re’ for regular expressions if Perl uses them natively?” to “Actually, I can solve this problem without regular expressions.”.
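
A small invented example of the mindset shift described here: where a Perl habit reaches for a regular expression, Python's plain string methods often suffice.

```python
import re

line = "temperature=23.5;unit=celsius;sensor=A7"

# The Perl-influenced reflex: a regular expression.
match = re.search(r"temperature=([\d.]+)", line)
print(float(match.group(1)))  # 23.5

# The more Pythonic habit: plain string handling, no 're' needed.
fields = dict(pair.split("=") for pair in line.split(";"))
print(float(fields["temperature"]))  # 23.5
```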

What makes a good Data Scientist? Answered by leading Data Officers!

What makes a good Data Scientist? A question I recently got asked a lot by data science newbies as well as long-established CIOs, and my answer is probably not what you think:
In my opinion, a good Data Scientist is somebody with, at least, a good knowledge of computer programming and statistics and the ability to understand the customer’s business. Above all stands a strong interest in finding value in distributed data sources.

Debatable? Maybe. That’s why I forwarded this question to five other leading Data Scientists and Chief Data Officers in Germany. Let’s have a look at their answers to this question and create your own idea of what a good Data Scientist might be:


Dr. Andreas Braun – Head of Global Data & Analytics @ Allianz SE

A data scientist connects thorough analytical and methodological understanding  with a technical hands-on/ engineering mentality.
Data scientists bridge between analytics, tech, and business. “New methods”, such as machine learning, AI, deep learning etc. are crucial and are continuously challenged and improved. (14 February 2017)


Dr. Helmut Linde – Head of Data Science @ SAP SE

The ideal data scientist is a thought leader who creates value from analytics, starting from a vision for improved business processes and an algorithmic concept, down to the technical realization in productive software. (09 February 2017)


Klaas Bollhoefer – Chief Data Scientist @ The unbelievable Machine Company

For me a data scientist thinks ahead, thinks about and thinks in-between. He/she is a motivated, open-minded, enthusiastic and unconventional problem solver and tinkerer. Being a team player and a lone wolf are two sides of the same coin and he/she definitely hates unicorns and nerd shirts. (27 March 2017)

 


Wolfgang Hauner – Chief Data Officer @ Munich Re

A data scientist is, by their very nature, interested in data and its underlying relationships and has the cognitive, methodical and technical skills to find these relationships, even in unstructured data. The essential prerequisites to achieve this are curiosity, a logical mind-set and a passion for learning, as well as an affinity for team interaction in the work place. (08 February 2017)

 


Dr. Florian Neukart – Principal Data Scientist @ Volkswagen Group of America

In my opinion, the most important trait is being driven by an irresistible urge to understand fundamental relations and things, whereby I count both an atom and a complex machine among “things”. People with this trait are usually persistent, can solve a new problem even with little practical experience, and strive autodidactically for the necessary training or appropriate quantitative knowledge. (08 February 2017)

Background idea:
That I am writing about atoms and complex machines has to do with the fact that I have been able to analyze the most varied data through my second job at the university, and that being given a chance to make significant contributions to both machine learning and physics is primarily rooted in curiosity. Mathematics, physics, neuroscience, computer science, etc. are the fundamentals that someone will acquire if she wants to understand. In the beginning, there is only curiosity… I hope this is not too out of the way, but I’ve done a lot of job interviews and worked with lots of smart people, and it has turned out that quantitative knowledge alone is not enough. If someone is not burning for understanding, she may be able to program a Convolutional Network from the ground up but will not come up with new ideas.