The Role of AI in the Cybersecurity Sector

Cybersecurity, as the name suggests, is the practice of safeguarding networks, programs, and data from digital attacks. Today, the world runs on internet-connected systems that carry enormous volumes of highly sensitive data. Cyberthreats are on the rise, with unscrupulous hackers exploiting every weakness they can find. This calls not only for stricter cybersecurity laws, but also for a revision of the vigilance policies of organizations of every size, government and non-government alike.

With such a heavy responsibility resting on the cyber industry, more and more security enthusiasts are taking a keen interest in its practices. Unlike data science and many other fields, cybersecurity sees its workforce stretched with every new surge in cyber threats. AI in cybersecurity is still at a nascent stage of deployment, and humans remain capable of far more when assisted with the right set of tools.

Automatically detecting unknown workstations, servers, code repositories, and other hardware and software on the network is one of the tasks that AI systems can handle easily and that cybersecurity staff previously performed manually. This frees security professionals to focus on the critical tasks that genuinely need their attention. Artificial intelligence can certainly do the legwork of processing and analyzing data in order to inform human decision-making.

AI is a powerful security tool for businesses and is rapidly gaining trust as a means of scaling cybersecurity. In a recent post, Statista reported that in 2019 approximately 83% of US-based organizations believed they could not deal with cyberattacks adequately without AI. AI-driven security solutions can react to threats faster and more accurately than any human, and they free up security professionals to focus on more critical tasks in the organization.

CHALLENGES FACED BY AI IN CYBER SECURITY

As the saying goes, “It takes a thief to catch a thief.” Because AI in security is still largely experimental, its cost can be off-putting for many businesses. To counter the threats posed by cybercriminals, organizations need to step up their internet security efforts. Attacks backed by organized crime syndicates, intended to dismantle online operations and damage the economy, are the major threats this industry faces today. While AI remains in its infancy, hackers will find it easier to carry out faster, more advanced attacks, so new automation-driven practices are needed to shore up fragile internet security.

AI IN CYBER SECURITY AS A BOON

There are several advantageous reasons to embrace AI in cybersecurity. Some notable pros are listed below:

  •  Ability to process large volumes of data
    AI automates the creation of ML algorithms that can detect a wide range of cybersecurity threats emerging from spam emails, malicious websites, or shared files.
  • Greater adaptability
    Artificial intelligence is easily adaptable to contemporary IT trends with the ever-changing dynamics of the data available to businesses across sectors.
  • Early detection of novel cybersecurity risks
    AI-powered cybersecurity solutions can go much further in mitigating, or even eliminating, advanced hacking techniques.
  • Offers complete, real-time cybersecurity solutions
    Due to AI’s adaptive quality, artificial intelligence-driven cyber solutions can help businesses eliminate the added expenses of IT security professionals.
  • Wards off spam, phishing, and redundant computing procedures
    AI can identify suspicious and malicious emails and alert your enterprise (a minimal classifier sketch follows this list).
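
To make the spam and phishing point concrete, here is a minimal, purely illustrative sketch of a text classifier that flags suspicious emails. It assumes scikit-learn is available; the tiny training set and the "phishing"/"legitimate" labels are invented for the example, and a real system would need far more data and richer features.

```python
# Toy illustration (not production code): train a text classifier that flags
# suspicious emails. The data below is invented for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been locked, verify your password here",
    "Urgent: wire transfer needed, reply with bank details",
    "Meeting notes attached for tomorrow's project review",
    "Lunch on Friday? Let me know what time works",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

# TF-IDF features feed a Naive Bayes classifier in a single pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Please verify your password to unlock your account"]))
# ['phishing'] on this toy data; real deployments need far larger corpora.
```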

AI IN CYBERSECURITY AS A BANE

Alongside the advantages listed above, AI-powered cybersecurity solutions present a few drawbacks and challenges, such as:

  • AI benefits hackers
    Hackers can use AI just as effectively, sneaking into data networks that are left vulnerable to exploitation.
  • Breach of privacy
    Stolen log-in details, used to commit cybercrimes, put the privacy of an entire organization at risk.
  • Higher cost for talents
    The cost of creating an efficient talent pool is very high as AI-based technologies are in the nascent stage.
  • More data, more problems
    Entrusting our sensitive data to a third-party enterprise may lead to privacy violations.

AI-HUMAN MERGER IS THE SOLUTION

AI professionals, backed by the best AI certifications in the world, help corporations of all sizes extract the maximum benefit from the skills they bring, for the larger good of the organization. Cybersecurity teams and AI systems cannot work in isolation; combining the two is a big step toward secure, well-defended applications. This is what makes AI in cybersecurity such a sought-after capability for the long run.

Marketing Attribution Models

Why do we need attribution?

Attribution is the process of distributing the value of a purchase between the various channels used along the funnel. It lets you determine the role each channel plays in profit, assess the effectiveness of campaigns, and identify the highest-priority sources. Choosing the right model makes it possible to distribute the advertising budget optimally, so the business earns more profit with less expense.
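
To make the idea tangible, here is a minimal Python sketch (the channel names and the journey are invented) of how a single purchase's value could be split under three simple rules discussed later in this article: first click, last click, and a linear split.

```python
# Splitting one conversion's value across the channels of a customer journey.
def first_click(journey, value):
    """Give the full conversion value to the first touchpoint."""
    return {journey[0]: value}

def last_click(journey, value):
    """Give the full conversion value to the last touchpoint."""
    return {journey[-1]: value}

def linear(journey, value):
    """Split the conversion value equally across all touchpoints."""
    share = value / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0) + share
    return credit

journey = ["email", "organic_search", "paid_search", "direct"]
print(first_click(journey, 100))  # {'email': 100}
print(last_click(journey, 100))   # {'direct': 100}
print(linear(journey, 100))       # 25 per channel
```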

What models of attribution exist

Choosing the appropriate model is an important decision, because depending on the business objectives a different model may fit best. For example, companies that have long been present in their industry mainly want to know which sources contribute to purchases, whereas recognition is what matters most for brands just entering the market. Incorrect prioritization of sources can therefore reduce efficiency. Below are the models that are widely used in the market. Each follows its own logic and is better suited to a different kind of business.

First Interaction (First Click)

All of the value is given to the first touch. This suits only a few purposes and does not make it possible to evaluate the role of every component in the purchase. It is chosen by brands that want to increase awareness and reach.

Advantages

It does not require any programming knowledge, so introducing it into a business is not difficult. It is a good option for effectively assessing campaigns aimed at creating awareness of, and demand for, new products.

Disadvantages

It limits the ability to analyze all the channels used to promote a brand comprehensively: it gives value to the first interaction channel and ignores the rest.

Who is it suitable for?

Suitable for those who use promotion to increase awareness and build a positive image. It also helps to find the most effective source.

Last Interaction (Last Click)

It gives the value to the last channel the consumer interacted with before making the purchase. It does not take into account the actions the user took before that point or the marketing activities encountered on the way to conversion.

Advantages

The tool is widely used in the market and is not difficult to apply. It works well for small advertising campaigns involving no more than three sources.

Disadvantages

There is no way to track how other channels have affected the acquisition.

Who is it suitable for?

It is suitable for business models that have a short purchase cycle. This may be souvenirs, seasonal offers, etc.

Last Non-Direct Click

It is the default model in Google Analytics. 100% of the conversion value is given to the last channel that interacted with the buyer before the conversion; however, if that last source is a direct visit, the value is assigned to the previous, non-direct channel.

Suppose a person comes from an email newsletter and bookmarks a product because it is not possible to place an order at that moment. After a while, they come back directly and make the purchase. Without this model, email as an acquisition channel would be underestimated.
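
A minimal sketch of this logic, assuming touchpoints are plain channel labels and that direct visits carry the label "direct" (both are assumptions made for illustration):

```python
def last_non_direct_click(journey, value, direct_label="direct"):
    """Give full credit to the last channel that is not a direct visit;
    fall back to the last touch if every touch was direct."""
    for channel in reversed(journey):
        if channel != direct_label:
            return {channel: value}
    return {journey[-1]: value}

# The email -> bookmark -> later direct purchase example from the text:
print(last_non_direct_click(["email", "direct"], 100))  # {'email': 100}
```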

Who is it suitable for?

It is perfect for beginners who are afraid of making a mistake in the assessment, because it forms a general idea of the effectiveness of all the channels involved.

Linear model attribution (Linear model)

The value of the conversion is divided in equal parts between all available channels.

Advantages

A more advanced model than the previous ones, yet still simple: it takes into account all the visits before the purchase.

Disadvantages

Not suitable for reallocating the budget between channels, because the effectiveness of the sources can differ significantly, and dividing the value evenly is not the best idea.

Who is it suitable for?

It performs well for businesses in the B2B sector, where maintaining contact with the customer throughout the entire funnel is highly important.

Taking into account the interaction duration (Time Decay)

A special feature of this model is that the value of the purchase is distributed between the available channels incrementally: the source at the beginning of the chain is given the least value, while the channel at the end receives the greatest value.
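
One common way to implement such increasing weights is an exponential decay based on how long before the conversion each touch occurred; the 7-day half-life below is an illustrative assumption, not a fixed standard.

```python
def time_decay(touches, value, half_life_days=7.0):
    """Weight each touch by 2 ** (-days_before_conversion / half_life) and
    normalise the weights so the credit sums to the conversion value.
    `touches` is a list of (channel, days_before_conversion) pairs."""
    weights = [2 ** (-days / half_life_days) for _, days in touches]
    total = sum(weights)
    credit = {}
    for (channel, _), weight in zip(touches, weights):
        credit[channel] = credit.get(channel, 0) + value * weight / total
    return credit

journey = [("display", 10), ("email", 3), ("paid_search", 0)]
print(time_decay(journey, 100))  # the touch closest to the purchase earns the most
```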

Advantages

Value is shared between all channels, and the highest value is given to the source that finally pushed the user to make the purchase.

Disadvantages

The channels earlier in the chain that worked hard toward the desired result do not receive a fair assessment of their effectiveness.

Who is it suitable for?

It is ideal for evaluating the effectiveness of advertising campaigns with a limited duration.

Position-Based or U-Shaped

The two channels that first attracted the user and finally pushed them to purchase each receive 40% of the value. The remaining 20% is shared among the intermediate sources that participated in the chain.
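
A sketch of the 40/20/40 split described above; the fallbacks for journeys with only one or two touches are assumptions added for illustration.

```python
def u_shaped(journey, value, endpoint_share=0.4):
    """First and last touch each receive `endpoint_share` (40% by default);
    the remaining 20% is split equally across the intermediate touches."""
    credit = {}
    def add(channel, amount):
        credit[channel] = credit.get(channel, 0) + amount

    if len(journey) == 1:
        add(journey[0], value)
    elif len(journey) == 2:
        add(journey[0], value / 2)
        add(journey[1], value / 2)
    else:
        add(journey[0], value * endpoint_share)
        add(journey[-1], value * endpoint_share)
        middle = value * (1 - 2 * endpoint_share) / (len(journey) - 2)
        for channel in journey[1:-1]:
            add(channel, middle)
    return credit

print(u_shaped(["display", "email", "paid_search", "direct"], 100))
# display and direct receive 40 each; email and paid_search split the remaining 20
```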

Advantages

Most of the value is divided equally between the key channels: the one that attracted the user and the one that closed the deal.

Disadvantages

Intermediate channels are underestimated, even though they sometimes move the user along the chain more effectively, for example by getting the visitor to subscribe to a newsletter or to start watching for a price reduction.

Who is it suitable for?

Interesting for businesses that focus on attracting new audiences, as well as pushing existing customers to buy.

Cons of standard attribution models

According to statistics, only 44% of foreign experts use last-interaction attribution; in the domestic market the figure is much higher. At the same time, only 18% of marketers use more complex models. There is also evidence that 72.4% of those who use last-interaction attribution do so not because it is effective, but because it is simple.

What leads to this state of affairs?

Experts do not understand the benefits. Not knowing how more complex models work leads to a lack of understanding of the real value they offer the business.

Attribution management is distributed among several employees, which means different models may be used simultaneously. This greatly distorts the resulting data and prevents an objective assessment of the channels' effect.

There is no comprehensive data storage. Information is kept in different places and does not take other channels into account; relying only on an advertising platform's analytics, for example, it is impossible to account for customers who buy in retail outlets.

If you find ways to eliminate these issues, attribution will work for the benefit of the business.

What algorithmic attribution models exist

If you use only one channel, there is no need for complex models; last-interaction attribution will be enough. It provides everything you need to evaluate the effectiveness of a campaign, determine its profitability, and understand its benefit for the business.

However, if the number of channels grows significantly and your goals go far beyond recognition, it is better to choose more complex models. They let you collect all the information in one place, open up broad monitoring capabilities, and make it clear how one channel affects another and which combinations work best together.

Below are the well-known and widely used today algorithmic attribution models.

Data-Driven Attribution

A model that tracks the entire path the consumer took before making a purchase. It evaluates each channel objectively, without regard to the source's position in the funnel, and demonstrates how a particular interaction affected the outcome. The Data-Driven attribution model is used in Google Analytics 360.

With it, you can work efficiently with channels that are underestimated in simpler models. It gives the opportunity to distribute the advertising budget correctly.

Attribution based on Markov Chains

Markov chains have long been used to predict the weather, match outcomes, and so on. In attribution, the model lets you find out how the absence of a channel would affect sales. Its advantage is the ability to assess the impact of each source on the conversion and to find out which channel brings the best results.

A great option for companies that store their data in one service. Implementing it requires programming knowledge, and it has one drawback: it underestimates the first channel in the chain.
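
To illustrate the idea, here is a compact sketch of removal-effect attribution with a first-order Markov chain: transition probabilities are estimated from observed journeys, and each channel's importance is measured by how much the overall conversion probability drops when that channel is removed. The journeys, state names, and helper functions are invented for this example.

```python
from collections import defaultdict

START, CONV, NULL = "<start>", "<conversion>", "<null>"

def transition_probs(journeys):
    """Estimate first-order transition probabilities from observed journeys.
    Each journey is a (list_of_channels, converted_bool) pair."""
    counts = defaultdict(lambda: defaultdict(int))
    for channels, converted in journeys:
        path = [START] + channels + [CONV if converted else NULL]
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return {state: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for state, nxts in counts.items()}

def conversion_prob(probs, n_iter=200):
    """Probability of eventually reaching the conversion state from START,
    computed by simple value iteration on the absorbing chain."""
    p = defaultdict(float)
    p[CONV] = 1.0
    for _ in range(n_iter):
        for state, nxts in probs.items():
            p[state] = sum(w * p[nxt] for nxt, w in nxts.items())
    return p[START]

def removal_effects(journeys):
    """Relative drop in conversion probability when a channel is removed
    (all of its outgoing transitions are redirected to the null state)."""
    probs = transition_probs(journeys)
    base = conversion_prob(probs)
    channels = {c for path, _ in journeys for c in path}
    effects = {}
    for channel in channels:
        reduced = {s: ({NULL: 1.0} if s == channel else nxts)
                   for s, nxts in probs.items()}
        effects[channel] = 1 - conversion_prob(reduced) / base
    return effects

journeys = [
    (["paid_search", "email"], True),
    (["display", "email"], False),
    (["paid_search"], True),
    (["display", "paid_search", "email"], True),
]
print(removal_effects(journeys))  # larger value = bigger loss if the channel disappears
```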

OWOX BI Attribution

OWOX BI Attribution helps you assess the mutual influence of channels on moving a customer through the funnel and achieving a conversion.

What information can be processed:

  • Upload user data from Google Analytics using flexible built-in tools.
  • Process information from various advertising services.
  • Integrate the model with CRM systems.

This approach makes it possible not to lose sight of any channel, to analyze the combined impact of your marketing tools, and to distribute the advertising budget correctly.

The model uses CRM information, which makes end-to-end analytics possible. Each user is assigned an identifier, so no matter which device they come from, you can track their chain of actions and recognize that it is the same person. This lets you see the overall effect of each channel on the conversion.

Advantages

It provides an integrated approach to assessing the effectiveness of channels: it identifies consumers even across different devices, shows all their visits, and helps determine where a user came from and what prompted the visit. With it, you can track order fulfilment in your CRM and estimate margins, and you can combine it with other models to determine which advertising campaigns deserve the highest priority and bring the most profit.

Disadvantages

It is impossible to objectively evaluate the first step of the chain.

Who is it suitable for?

Suitable for any business that wants to account for each step of the chain and obtain a qualitative assessment of all advertising channels.

Conclusion

An AdRoll study shows that 70% of marketing managers find it difficult to use the results obtained from attribution. Yet without attribution, there would be no results to act on at all.

To obtain a realistic assessment of the effectiveness of marketing activities, do the following:

  • Determine priority KPIs.
  • Appoint a person responsible for evaluating advertising campaigns.
  • Define a user funnel chain.
  • Keep track of all data, online and offline. 
  • Make a diagnosis of incoming data.
  • Find the best attribution model for your business.
  • Use the data to make decisions.

The Power of Analyzing Processes

Are you thinking BIG enough? Over the past few years, the quality of discussion regarding a ‘process’ and its interfaces between different departments has developed radically. Organizations increasingly reject guesswork, individual assessments, or blame-shifting and instead focus on objective facts: the display of throughput times, process variants, and their optimization.

But while data can hold valuable insights into business, users, customer bases, and markets, companies are sometimes unsure how best to analyze and harness their data. In fact, the problem isn’t usually a lack of data; it’s a breakdown in leveraging useful data. Being unsure how to interpret, explore, and analyze processes can paralyze any go-live, leading to a failure in the efficient interaction of processes and business operations. Without robust data analysis, your business could be losing money, talent, and even clients.

After all, analyzing processes is about letting data tell its true story for improved understanding.

The “as-is” processes

Analyzing the as-is current state helps organizations document, track, and optimize processes for better performance, greater efficiency, and improved outcomes. By contextualizing data, we gain the ability to navigate and organize processes to negate bottlenecks, set business preferences, and plan an optimized route through process mining initiatives. This focus can help across an entire organization, or on one or more specific processes or trends within a department or team.

There are several vital goals/motivations for implementing current state analysis, including:

  • Saving money and improving ROI;
  • Improving existing processes or creating new processes;
  • Improving customer satisfaction and customer journeys;
  • Improving business coordination and organizational responsiveness;
  • Complying with new regulatory standards;
  • Adapting methods following a merger or acquisition.

The “to-be” processes

Simply put, if as-is maps where your processes are, to-be maps where you want them to… be. To-be process mapping documents what you want the process to look like, and by using the as-is diagram, you can work with stakeholders to identify developments and improvements of the current process, then outline those changes on your to-be roadmap.

This analysis can help you make optimal decisions for your business and innovative OpEx imperatives. For instance, at leading data companies like Google and Amazon, data is used in such a way that the analysis results make the decisions! Just think of the power Recommendation Engines, PageRank, and Demand Forecasting Systems have over the content we see. To achieve this, advanced techniques of machine learning and statistical modeling are applied, resulting in mechanically improved results from the data. Interestingly, because these techniques reference large-scale data sets and reflect analysis and results in real-time, they are applied to areas that extend beyond human decision-making.

Also, by analyzing and continuously monitoring qualitative and quantitative data, we gain insights across potential risks and ongoing improvement opportunities, too. The powerful combination of process discovery, process analysis, and conformance checking supports a collaborative approach to process improvement, giving you game-changing insights into your business. For example:

  • Which incidents would I like to detect and act upon proactively?
  • Where would task prioritization help improve overall performance?
  • Where do I know that increased transparency would help the company?
  • How can I utilize processes in place of gut feeling/experience?

Further, as the economic environment continues to change rapidly, and modern organizations keep adopting process-based approaches to ensure they are achieving their business goals, process analysis naturally becomes the perfect template for any company.

With this, process mining technology can help modern businesses manage process challenges beyond the boundaries of implementation. We can evaluate the proof of concept (PoC) for any proposed improvements, and extract relevant information from a homogenous data set. Of course, process modeling and business process management (BPM) are available to solve the potentially tricky integration phase.

Process mining and analysis initiatives

Process mining and discovery initiatives can also provide critical insights throughout the automation and any Robotic Process Automation (RPA) journey, from defining the strategy to continuous improvement and innovation. Data-based process mining can even extend process analysis across teams and individuals, decreasing incident resolution times, and subsequently improving working habits via the discovery and validation of automation opportunities.

A further example of where process mining and strategic process analysis/alignment is already paying dividends is IT incident management. Here, an “incident” is an unplanned interruption to an IT service, which may be complete unavailability or merely a reduction in quality. The goal of the incident management process is to restore regular service operation as quickly as possible and to minimize the impact on business operations. Incident management is a critical process in the Information Technology Infrastructure Library (ITIL).

Process mining can further drive improvement in as-is incident management processes, and expose exceptional and unwanted process steps, by increasing visibility and transparency across IT processes. It will swiftly analyze the different working habits across teams and individuals, decreasing incident resolution times and improving the handling of customer-impacting cases.

Positive and practical experiences with process mining across industries have also led to the further dynamic development of tools, use cases, and the end-user community. Even with very experienced process owners, the visualization of processes can skyrocket improvement via new ideas and discussion.

However, the potential performance gains are more extensive, with the benefits of using process mining for incident management, also including:

  • Finding out how escalation rules are working and how the escalation is done;
  • Calculating incident management KPIs, including SLA (%) (a short sketch follows this list);
  • Discovering root causes for process problems;
  • Understanding the effect of the opening interface (email, web form, phone, etc.);
  • Calculating the cost of the incident process;
  • Aligning the incident management system with your incident management process.
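
As a small illustration of the KPI item above, the following sketch computes per-incident resolution times and an SLA compliance percentage from an event log using pandas. The column names, the 8-hour SLA target, and the toy data are assumptions made for the example.

```python
# Hedged sketch: two incident-management KPIs derived from an event log.
import pandas as pd

log = pd.DataFrame({
    "case_id":   ["INC1", "INC1", "INC2", "INC2", "INC3", "INC3"],
    "activity":  ["Open", "Resolve", "Open", "Resolve", "Open", "Resolve"],
    "timestamp": pd.to_datetime([
        "2023-01-01 08:00", "2023-01-01 10:30",
        "2023-01-02 09:00", "2023-01-03 17:00",
        "2023-01-04 11:00", "2023-01-04 12:00",
    ]),
})

# Resolution time per incident: last event timestamp minus first.
durations = (log.groupby("case_id")["timestamp"]
                .agg(lambda ts: ts.max() - ts.min()))

sla_target = pd.Timedelta(hours=8)
sla_pct = (durations <= sla_target).mean() * 100

print(durations)
print(f"SLA compliance: {sla_pct:.0f}%")  # 2 of 3 incidents resolved within 8h -> 67%
```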

Robotic Process Automation (RPA)

Robotic process automation (RPA) provides a virtual workforce to automatize manual, repetitive, and error-prone tasks. However, successful process automation requires specific knowledge about the intended (and potential) benefits, effective training of the robots, and continuous monitoring of their performance and processes.

With this, process mining supports organizations throughout the lifecycle of RPA initiatives by monitoring and benchmarking robots to ensure sustainable benefits. These insights are especially valuable for process miners and managers with a particular interest in process automation. By unlocking the experiences with process mining, a company better understands what is needed today, for tomorrow’s process initiatives.

To further upgrade the impact of robot-led automation, there is also a need for a solid understanding of legacy systems, and an overview of automation opportunities. Process mining tools provide key insights throughout the entire RPA journey, from defining the strategy to continuous improvement and innovation.

Benefits of process mining and analysis within the RPA lifecycle include:

  1. Overviews of processes within the company, based on specific criteria;
  2. Identification of processes suitable for RPA implementation during the preparation phase;
  3. Mining the optimal process flow/process path;
  4. Understanding the extent to which RPA can be implemented in legacy processes and systems;
  5. Monitoring and analysis of RPA performance during the transition/handover of customization;
  6. Monitoring and continuous improvement of RPA in the post-implementation phase.

The process of better business understanding

Every organization is different and brings with it a variety of process-related questions. Yet some patterns usually repeat. For example, customers who introduce data-supported process analysis as part of business transformation initiatives will typically face challenges in harmonizing processes from fragmented sectors and regional locations. Here it helps enormously to base actions on data and statistics from the respective processes, instead of relying on the instincts and estimations of individuals.

With this, process analysis supported by data enables a fact-based discussion and builds a bridge between employees, process experts, and management. It helps avoid siloed thinking and allows the transparent design of handovers and process steps that cross departmental boundaries within an organization.

In other words, to unlock future success and transformation, we must be processing… today.

Find out more about process mining with Signavio Process Intelligence, and see how it can help your organization uncover the hidden value of process, generate fresh ideas, and save time and money.

Process Analytics – Data Analysis for Process Audit & Improvement

Process Mining: Innovative data analysis for process optimization and audit

Step-by-Step: New ways to detect compliance violations with Process Analytics

In the course of advancing digitization, everyday work is undergoing an enormous upheaval as companies strive to record every step completely in IT systems. In addition, companies are confronted with increasingly demanding regulatory requirements on those IT systems.


Read this article in German:
“Process Mining: Innovative Analyse von Datenspuren für Audit und Forensik “


The unstoppable trend towards a connected world will further increase the possibilities for process transparency, but many business processes are already covered by one or more IT systems. Every employee, as well as every automated process, leaves many data traces in IT backend systems, from which processes can be reconstructed retroactively or in real time. These include obvious processes, such as the entry of a purchase order or invoice, as well as partially hidden ones, such as the modification or deletion of these business objects.

1 Understanding Process Analytics

Process Analytics is a data-driven methodology for analyzing actual processes, and it originates in forensics. With the growing importance of computer crime, it became necessary to identify and analyze the data traces that potential criminals leave behind in IT systems in order to reconstruct events as completely as possible.

With the trend towards Big Data Analytics, Process Analytics has not only gained new data sources but has also been developed further as an analytical method. In addition, visualization enables the analyst or the report recipient to gain a deeper understanding of even very complex business processes.

While conventional process analysis relies primarily on employee interviews and desk-side observation to determine actual processes, Process Analytics is purely fact-based and thus approaches the processes objectively. It is not the employees who are asked but the IT systems, which store not only all business objects in a table-oriented manner but also all process activities. Every enterprise IT system logs the relevant activities of the whole business process, such as orders, invoices, or customer orders, with a time stamp, in the background and invisibly to the users.

2 The right choice of the processes to analyze

Today almost every company works with at least one ERP system, and other systems are often used as well. It is equally clear which processes cannot be analyzed: those still carried out exclusively on paper and in the minds of employees, typically decision-making processes at the strategic level that are not logged in IT systems.

Operational processes, however, are generally recorded almost seamlessly in IT systems. Furthermore, almost all operational decisions are recorded by status flags in datasets.

The operational processes, which can be reconstructed and analyzed with Process Mining very well and which are of equal interest from the point of view of compliance, include for example:

– Procurement

– Logistics / Transport

– Sales / Ordering

– Warranty / Claim Management

– Human Resource Management

Process Analytics enables the greatest possible transparency across all business processes, regardless of the sector and the department. Typical case IDs are, for example, sales order number, procurement order number, customer or material numbers.

3 Selection of relevant IT systems

In principle, every IT system used in the company should be examined for its relevance to the process being analyzed. As a rule, only the ERP system (SAP ERP or another) is relevant for analyzing purchasing processes. For other process areas, however, additional IT systems may be of interest as well, for example separate accounting systems, a CRM, or a MES system, which must then also be included.

Occasionally, external data should also be integrated if they provide important process information from externally stored data sources – for example, data from logistics partners.

4 Data Preparation

Before the data-driven process analysis can start, the data that directly or indirectly indicate process activities must be identified, extracted, and processed from the data sources. The data are stored in database tables and server logs, collected via a data warehousing procedure, and converted into a process protocol, also called an event log.

The event log is usually a very large and wide table which, in addition to the actual process activities, also contains parameters which can be used to filter cases and activities. The benefit of this filter option is, for example, to show only process flows where special product groups, prices, quantities, volumes, departments or employee groups are involved.
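
A simplified sketch of this preparation step: two invented source tables (orders and invoices) are combined with pandas into one event log with a case ID, an activity name, and a timestamp per row. Table and column names are assumptions for illustration, not the schema of any particular system.

```python
import pandas as pd

# Invented source tables standing in for extracts from the ERP database.
orders = pd.DataFrame({
    "order_id": [4711, 4712],
    "created_at": pd.to_datetime(["2023-03-01 09:12", "2023-03-02 14:05"]),
})
invoices = pd.DataFrame({
    "order_id": [4711, 4712],
    "posted_at": pd.to_datetime(["2023-03-03 10:00", "2023-03-05 08:30"]),
})

# Each source table contributes one activity type to the event log.
event_log = pd.concat([
    orders.rename(columns={"created_at": "timestamp"})
          .assign(activity="Create Order")[["order_id", "activity", "timestamp"]],
    invoices.rename(columns={"posted_at": "timestamp"})
            .assign(activity="Post Invoice")[["order_id", "activity", "timestamp"]],
]).sort_values(["order_id", "timestamp"])

print(event_log)  # one row per process activity, keyed by the case ID (order_id)
```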

5 Analysis Execution

The actual inspection is done visually, and thus intuitively, with an interactive process flow diagram that represents the actual processes as extracted from the IT systems. The event log generated during data preparation is loaded into a data visualization software (e.g. Celonis PM Software), which uses the case IDs and time stamps to transform the log into a graphical process network. The process flows are therefore not modeled by human “process thinkers”, as is the case with target processes, but show the real flows recorded by the IT systems. Process Mining means that our enterprise databases “talk” about their view of the process.

The process flows are visualized and statistically evaluated so that concrete statements can be made about the process performance and risk estimations relevant to compliance.
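
The following sketch hints at what such a tool derives from case IDs and time stamps behind the scenes: directly-follows relations between activities, together with their frequencies and mean durations. The event log is invented, and pandas is used purely for illustration rather than as a substitute for a dedicated process mining tool.

```python
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["A", "A", "A", "B", "B", "B", "B"],
    "activity": ["Create Order", "Approve", "Post Invoice",
                 "Create Order", "Approve", "Change Order", "Post Invoice"],
    "timestamp": pd.to_datetime([
        "2023-03-01 09:00", "2023-03-01 11:00", "2023-03-02 09:00",
        "2023-03-02 10:00", "2023-03-02 12:00", "2023-03-03 08:00",
        "2023-03-04 09:00",
    ]),
}).sort_values(["case_id", "timestamp"])

# Pair every event with the next event of the same case.
log["next_activity"]  = log.groupby("case_id")["activity"].shift(-1)
log["next_timestamp"] = log.groupby("case_id")["timestamp"].shift(-1)
log["duration"]       = log["next_timestamp"] - log["timestamp"]

# Directly-follows relations with their frequency and mean waiting time.
dfg = (log.dropna(subset=["next_activity"])
          .groupby(["activity", "next_activity"])["duration"]
          .agg(frequency="count", mean_duration="mean"))
print(dfg)
```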

6 Deviation from target processes

The possibility of intuitively filtering the process presentation also enables an analysis of all deviations of our real processes from the desired target process sequences.

The deviation of the actual processes from the target processes is usually underestimated, even by IT-savvy managers; with Process Analytics, all deviations and the general process complexity can now be investigated.

7 Detection of process control violations

The implementation of process controls is an integral part of a professional internal control system (ICS), but actual adherence to these controls is often not proven. Process Analytics makes it possible to detect circumventions of the dual-control principle and conflicts in the separation of functions. In addition, the deliberate removal of internal control mechanisms by executives, or the incorrect configuration of the IT systems, becomes clearly visible.

8 Detection of previously unknown behavioral patterns

After checking compliance with existing controls, Process Analytics continues to be used to recognize previously unknown patterns in process networks that point to risks or even concrete fraud cases and that no existing control detects, precisely because they were previously unknown. It is in the often-underestimated complexity of everyday process interlacing, as mentioned above, that fraud scenarios are revealed which would previously not have been conceivable.

9 Reporting – also possible in real time

As a highly effective audit analysis, Process Analytics is typically run iteratively, at intervals of three to twelve months. After the initial implementation, compliance violations, weak or even ineffective controls, and even cases of fraud are detected reliably. The findings can then be used to remedy the weaknesses, and repeating the analysis after a waiting period makes it possible to assess the effectiveness of the measures taken.

In some application scenarios, it is recommended to integrate the process analysis, with its visual dashboard, seamlessly into the IT system landscape so that processes can be monitored in near real time. This connection can also be supplemented by notification systems, so that decision-makers and auditors are automatically informed about the latest process bottlenecks or violations via SMS or e-mail.

Conclusion

In the course of digitalization, Process Analytics is a highly effective Big Data Analysis methodology for detecting compliance-relevant events throughout the company and for providing visual support for forensic data analysis. Since it is a method and not a piece of software, an expansion of the IT system landscape is not strictly necessary, especially to get started; the analysis can be carried out by internal or external staff at regular intervals.

Establish a Collaborative Culture – Process Mining Rule 4 of 4

This is article no. 4 of the four-part article series Privacy, Security and Ethics in Process Mining.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Regel 4 von 4

Perhaps the most important ingredient in creating a responsible process mining environment is to establish a collaborative culture within your organization. Process mining can make the flaws in your processes very transparent, much more transparent than some people may be comfortable with. Therefore, you should include change management professionals, for example, Lean practitioners who know how to encourage people to tell each other “the truth”, in your team.

Furthermore, be careful how you communicate the goals of your process mining project and involve relevant stakeholders in a way that ensures their perspective is heard. The goal is to create an atmosphere, where people are not blamed for their mistakes (which only leads to them hiding what they do and working against you) but where everyone is on board with the goals of the project and where the analysis and process improvement is a joint effort.

Do:

  • Make sure that you verify the data quality before going into the data analysis, ideally by involving a domain expert already in the data validation step. This way, you can build trust among the process managers that the data reflects what is actually happening and ensure that you have the right understanding of what the data represents.
  • Work in an iterative way and present your findings as a starting point for discussion in each iteration. Give people the chance to explain why certain things are happening and let them ask additional questions (to be picked up in the next iteration). This will help to improve the quality and relevance of your analysis as well as increase the buy-in of the process stakeholders in the final results of the project.

Don’t:

  • Jump to conclusions. You can never assume that you know everything about the process. For example, slower teams may be handling the difficult cases, people may deviate from the process for good reasons, and you may not see everything in the data (for example, there might be steps that are performed outside of the system). By consistently using your observations as a starting point for discussion, and by allowing people to join in the interpretation, you can start building trust and the collaborative culture that process mining needs to thrive.
  • Force any conclusions that you expect, or would like to have, by misrepresenting the data (or by stating things that are not actually supported by the data). Instead, keep track of the steps that you have taken in the data preparation and in your process mining analysis. If there are any doubts about the validity or questions about the basis of your analysis, you can always go back and show, for example, which filters have been applied to the data to come to the particular process view that you are presenting.

Consider Anonymization – Process Mining Rule 3 of 4

This is article no. 3 of the four-part article series Privacy, Security and Ethics in Process Mining.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Regel 3 von 4

If you have sensitive information in your data set, instead of removing it you can also consider the use of anonymization. When you anonymize a set of values, then the actual values (for example, the employee names “Mary Jones”, “Fred Smith”, etc.) will be replaced by another value (for example, “Resource 1”, “Resource 2”, etc.).

If the same original value appears multiple times in the data set, then it will be replaced with the same replacement value (“Mary Jones” will always be replaced by “Resource 1”). This way, anonymization allows you to obfuscate the original data but it preserves the patterns in the data set for your analysis. For example, you will still be able to analyze the workload distribution across all employees without seeing the actual names.

Some process mining tools (Disco and ProM) include anonymization functionality. This means that you can import your data into the process mining tool and select which data fields should be anonymized. For example, you can choose to anonymize just the Case IDs, the resource name, attribute values, or the timestamps. Then you export the anonymized data set and you can distribute it among your team for further analysis.
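
The following sketch shows the consistent replacement described above in a few lines of Python; it is a generic illustration, not the implementation used by Disco, ProM, or any other particular tool.

```python
from itertools import count

def make_anonymizer(prefix="Resource"):
    """Return a function that maps each distinct original value to the same
    placeholder every time it appears (e.g. "Mary Jones" -> "Resource 1")."""
    mapping, counter = {}, count(1)
    def anonymize(value):
        if value not in mapping:
            mapping[value] = f"{prefix} {next(counter)}"
        return mapping[value]
    return anonymize

anonymize = make_anonymizer()
resources = ["Mary Jones", "Fred Smith", "Mary Jones"]
print([anonymize(name) for name in resources])
# ['Resource 1', 'Resource 2', 'Resource 1']
```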

Do:

  • Determine which data fields are sensitive and need to be anonymized (see also the list of common process mining attributes and how they are impacted if anonymized).
  • Keep in mind that despite the anonymization certain information may still be identifiable. For example, there may be just one patient having a very rare disease, or the birthday information of your customer combined with their place of birth may narrow down the set of possible people so much that the data is not anonymous anymore.

Don’t:

  • Anonymize the data before you have cleaned your data, because after the anonymization the data cleaning may not be possible anymore. For example, imagine that slightly different customer category names are used in different regions but they actually mean the same. You would like to merge these different names in a data cleaning step. However, after you have anonymized the names as “Category 1”, “Category 2”, etc. the data cleaning cannot be done anymore.
  • Anonymize fields that do not need to be anonymized. While anonymization can help to preserve patterns in your data, you can easily lose relevant information. For example, if you anonymize the Case ID in your incident management process, then you cannot look up the ticket number of the incident in the service desk system anymore. By establishing a collaborative culture around your process mining initiative (see guideline No. 4) and by working in a responsible, goal-oriented way, you can often work openly with the original data that you have within your team.

Clarify Goal of the Analysis – Process Mining Rule 1 of 4

This is article no. 1 of the four-part article series Privacy, Security and Ethics in Process Mining.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Regel 1 von 4

Clarify Goal of the Analysis

The good news is that in most situations Process Mining does not need to evaluate personal information, because it usually focuses on the internal organizational processes rather than, for example, on customer profiles. Furthermore, you are investigating the overall process patterns. For example, a process miner is typically looking for ways to organize the process in a smarter way to avoid unnecessary idle times rather than trying to make people work faster.

However, as soon as you would like to better understand the performance of a particular process, you often need to know more about other case attributes that could explain variations in process behaviours or performance. And people might become worried about where this will lead them.

Therefore, already at the very beginning of the process mining project, you should think about the goal of the analysis. Be clear about how the results will be used. Think about what problem you are trying to solve and what data you need to solve this problem.

Do:

  • Check whether there are legal restrictions regarding the data. For example, in Germany employee-related data cannot be used and typically simply would not be extracted in the first place. If your project relates to analyzing customer data, make sure you understand the restrictions and consider anonymization options (see guideline No. 3).
  • Consider establishing an ethical charter that states the goal of the project, including what will and what will not be done based on the analysis. For example, you can clearly state that the goal is not to evaluate the performance of the employees. Communicate to the people who are responsible for extracting the data what these goals are and ask for their assistance to prepare the data accordingly.

Don’t:

  • Start out with a fuzzy idea and simply extract all the data you can get. Instead, think about what problem you are trying to solve and what data you actually need to solve it. Your project should focus on business goals that can get the support of the process managers you work with (see guideline No. 4).
  • Make your first project too big. Instead, focus on one process with a clear goal. If you make the scope of your project too big, people might block it or work against you while they do not yet even understand what process mining can do.

Privacy, Security and Ethics in Process Mining – Article Series

When I moved to the Netherlands 12 years ago and started grocery shopping at one of the local supermarket chains, Albert Heijn, I initially resisted getting their Bonus card (a loyalty card for discounts), because I did not want the company to track my purchases. I felt that using this information would help them to manipulate me by arranging or advertising products in a way that would make me buy more than I wanted to. It simply felt wrong.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Artikelserie

The truth is that no data analysis technique is intrinsically good or bad. It is always in the hands of the people using the technology to make it productive and constructive. For example, while supermarkets could use the information tracked through the loyalty cards of their customers to make sure that we have to take the longest route through the store to get our typical items (passing by as many other products as possible), they can also use this information to make the shopping experience more pleasant, and to offer more products that we like.

Most companies have started to use data analysis techniques to analyze their data in one way or the other. These data analyses can bring enormous opportunities for the companies and for their customers, but with the increased use of data science the question of ethics and responsible use also grows more dominant. Initiatives like the Responsible Data Science seminar series [1] take on this topic by raising awareness and encouraging researchers to develop algorithms that have concepts like fairness, accuracy, confidentiality, and transparency built in (see Wil van der Aalst’s presentation on Responsible Data Science at Process Mining Camp 2016).

Process Mining can provide you with amazing insights about your processes, and fuel your improvement initiatives with inspiration and enthusiasm, if you approach it in the right way. But how can you ensure that you use process mining responsibly? What should you pay attention to when you introduce process mining in your own organization?

In this article series, we provide four guidelines that you can follow to prepare your process mining analysis in a responsible way:

Part 1 of 4: Clarify the Goal of the Analysis

Part 2 of 4: Responsible Handling of Data

Part 3 of 4: Consider Anonymization

Part 4 of 4: Establish a Collaborative Culture

Acknowledgements

We would like to thank Frank van Geffen and Léonard Studer, who initiated the first discussions in the workgroup around responsible process mining in 2015. Furthermore, we would like to thank Moe Wynn, Felix Mannhardt and Wil van der Aalst for their feedback on earlier versions of this article.