Process Mining Camp 2022

Pack your bags, get your provisions, and plan your trip: just a few more weeks until we get together at this year’s Process Mining Camp on Thursday, 23 June in Eindhoven, the Netherlands.

You can find the camp website with the detailed program here. And of course you should register now to get one of our limited early bird tickets!

While we are in the final stretches of preparing this year’s camp, here is what you can expect.

Practice talks: Listen and learn

Our honest and relatable practice talks are the heart and soul of Process Mining Camp. Here are the speakers who will share their experiences at this year’s camp.

Get to know your fellow process miners

In the afternoon, we get interactive: join us for a discussion roundtable and connect with the community on a deeper level.

In small groups of up to eight people, you will talk about process mining topics such as customer journeys, auditing, Lean Six Sigma, the business case for process mining, data transformations, and security, privacy and ethics.

The goal is not to solve all the world’s problems but to share openly and learn from each other. By interacting with other process miners who have backgrounds similar to yours, you can discuss challenges and ideas that deserve further attention.

At the end of the roundtable, each group will share their main insights with the rest of the community, so that we can all benefit.

Talk to us


For the very first time at camp, Rudi and Anne will run a process mining clinic. Do you have a data set that defies all your efforts? Questions that you always wanted to get answered? Process mining problems that leave you scratching your head?

Bring your laptop and show them to us! We will unpack the issue together and dig into our experiences to give you expert advice.

The clinic will be available during all the breaks as well as in parallel to the discussion roundtables.

Join the community and sign up now!

Dive into process mining for a whole day, and find out what others in the community are up to. We take care of food and drinks during camp. And if you sign up before Friday 3 June 12:00 CEST, not only can you benefit from our early bird rate — you’ll also get your very own camp t-shirt!

All the breaks, lunch, dinner, and coffee will be outside. Other parts of the camp program will also take place outdoors (learn more about our Corona measures here). We expect this year to be the most summer-campy camp ever. We will even have a sort-of campfire in the form of a BBQ at the end of the day.

Don’t miss Process Mining Camp 2022, and sign up now!

We can’t wait to see you in Eindhoven on 23 June.

— Your friends from Fluxicon

Deep Learning World – Virtual Edition 2020!

DEEP LEARNING WORLD 2020

Virtual Edition, May 11-12, 2020

The premier conference covering the commercial deployment of deep learning

Deep Learning is no longer the cool new discipline. Instead, it has become another tool in the data scientist’s toolbox, but a very important one! Without RNNs, CNNs and similar architectures, many applications that make our daily lives better or help us improve our business would not be possible. Take, for example, the German federal state of NRW, which uses neural networks to detect child pornography. Other organizations use deep learning to detect cancer, translate text or inspect machines. It is also important to understand how deep learning sits alongside traditional machine learning methods: as an expert, you should know when and how to apply different methods for different applications.

At the Deep Learning World conference, you will learn from other practitioners why they decided on a deep, transfer or reinforcement learning approach, what the analytical, technical, organisational and economic challenges were, and how they solved them. Take this opportunity and visit the two-day event to broaden your knowledge, deepen your understanding and discuss your questions with other deep learning experts. See you virtually in May 2020!

Why should you participate?

We will provide a live-streamed virtual edition of Deep Learning World on 11-12 May 2020: you will be able to attend sessions, and to interact and connect with the speakers and fellow members of the data science community, including sponsors and exhibitors, from your home or office.

What about the workshops?

The workshops will also be held virtually on the planned date:
13 May, 2020.

Don’t have a ticket yet?

It’s not too late to join the data science community.
Register by 10 May to receive access to the livestream and recordings.

REGISTER HERE

We’re looking forward to seeing you – virtually!

This year, Deep Learning World runs alongside Predictive Analytics World for Healthcare and Predictive Analytics World for Industry 4.0.

Predictive Analytics World for Industry 4.0

Difficult times call for creative measures

Predictive Analytics World for Industry 4.0 will go virtual, and you still have time to join us!

What do you have in store for me?

We will provide a live-streamed virtual version of PAW Industry 4.0 Munich 2020 on 11-12 May 2020: you will be able to attend sessions, and to interact and connect with the speakers and fellow members of the data science community, including sponsors and exhibitors, from your home or office.

What about the workshops?

The workshops will also be held virtually on the planned date:
13 May, 2020.

Get a complimentary virtual sneak preview!

If you would like to join us for a virtual sneak preview of the “Data Thinking” workshop on Thursday, April 16, please send a request to registration@risingmedia.com. The preview lets you familiarise yourself with the quality of the virtual edition of both the conference and the workshops, and see how the interaction with speakers and attendees works.

Don’t have a ticket yet?

It’s not too late to join the data science community.
Register by 10 May to receive access to the livestream and recordings.

REGISTER HERE

We’re looking forward to seeing you – virtually!

This year Predictive Analytics World for Industry 4.0 runs alongside Deep Learning World and Predictive Analytics World for Healthcare.

c/o data science – in care of data science

Are you looking for a platform where you can personally exchange ideas with other data scientists and data geeks and share and discuss challenges?

c/o data science is designed for YOU – be part of the premiere on November 12, 2019 at the Basecamp in Bonn! You can expect deep dive talks, hack sessions and a bar camp as well as live demos, code to go and lots of time for networking – create your own program!

Join us and share your passion with peers!

And who’s running the camp?

SIGS DATACOM is an international, vendor-independent provider of further education in information technology, and a leading provider in the fields of software architecture and engineering, data and insights, as well as artificial intelligence. SIGS DATACOM offers high-quality specialist information to software architects, IT project leads and managers, experienced programmers and developers, business intelligence/analytics professionals, consultants, AI professionals and data scientists. c/o data science is a natural addition to our existing offering in continuing education and specialist information.

All information: https://www.co-datascience.de/

Date: November 12, 2019

Organizer: SIGS DATACOM GmbH

Venue:

Basecamp Bonn

In der Raste 1

53129 Bonn, Germany

 

https://www.co-datascience.de/

Cost: between €30 and €300

 

DATANOMIQ MeetUp: Interactive Data Exploration and GUIs in Jupyter Notebooks

After our first successful collaboration Meetup with Mister Spex, we are moving straight on to our next partner: VW Digital Labs!

Join us on Wednesday, October 9 for our DATANOMIQ Data Science Meetup at VW Digital Labs and get inspired.

When:
Wednesday, October 9, time TBA

Where:
VW Digital Labs
Stralauer Allee 7, 10245 Berlin

 

AGENDA
18:30 doors open
19:00 Interactive Data Exploration and GUIs in Jupyter Notebooks – Christopher Kipp
– using ipywidgets to get basic UI components and connect them
– qgrid to make DataFrames interactive (sortable, filterable, …)
– building interactive visualisations with bqplot
(a small illustrative sketch of these tools follows the agenda below)

19:20 Q&A

10 minute break

19:40 second presentation
20:00 Q&A

20:15 networking
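To give a flavour of the tools on the agenda, here is a minimal, hypothetical sketch of the ipywidgets part, meant to run in a Jupyter notebook cell. The DataFrame and column names are invented for illustration and are not taken from the talk; qgrid and bqplot are only hinted at in the closing comments.

    import ipywidgets as widgets
    import pandas as pd
    from IPython.display import display

    # Hypothetical example data; the talk will of course use its own dataset.
    df = pd.DataFrame({
        "city": ["Berlin", "Hamburg", "Munich", "Berlin"],
        "orders": [120, 80, 95, 140],
    })

    city_picker = widgets.Dropdown(options=sorted(df["city"].unique()), description="City")

    def show_subset(city):
        # Render the filtered DataFrame below the widget whenever the selection changes.
        display(df[df["city"] == city])

    # interact() wires the dropdown to the function and re-runs it on every change.
    widgets.interact(show_subset, city=city_picker)

    # qgrid and bqplot follow a similar widget-based pattern, e.g.
    #   qgrid.show_grid(df)        # sortable, filterable grid view of a DataFrame
    #   bqplot.pyplot.scatter(...) # interactive plots backed by widgets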

 

FREE ENTRY, snacks and drinks sponsored by VW Digital Labs.

Make sure to get your ticket: https://www.eventbrite.de/e/datanomiq-meetup-interactive-data-exploration-and-guis-in-jupyternotebook-tickets-72931655545

Entrance only with registration.

 

Join our MeetUp group: https://www.meetup.com/de-DE/DATANOMIQ-Data-Science-Berlin/

Interview – Knowledge Graphs and Semantic Technologies

“It’s incredibly empowering when data that is clear and understood – what we call ‘beautiful data’ – is available to the data workforce.”

Juan F. Sequeda is co-founder of Capsenta, a spin-off from his research, and Senior Director of Capsenta Labs. He is an expert on knowledge graphs, semantic web, semantic & graph data management and (ontology-based) data integration. In this interview Juan lets us know how SMEs can create value from data, what makes the Knowledge Graph so important and why CDOs and CIOs should use semantic technologies.

Data Science Blog: If you had to name five things that apply to SMEs as well as to enterprises on their journey through digital transformation: what are the most important steps to take in order to create value from data?

I would state four things:

  1. Focus on the business problem that needs to be solved instead of the technology.
  2. Getting value out of your data is a socio-technical problem. Not everything can be solved by technology and automation. It is crucial to understand the social/human aspect of the problems.
  3. Avoid boiling the ocean. Be agile and iterate.
  4. Recall that it’s a marathon, not a sprint. That is also why you shouldn’t try to boil the ocean.

Data Science Blog: You help companies make their data meaningful and thus increase its value. The magic word is the knowledge graph. What exactly is a Knowledge Graph?

Let’s recall that the term “knowledge graph”, as it is actively used today, was coined by Google in a 2012 blog post. From an industry point of view, it is a term that represents data integration in which not just entities but also relationships are first-class citizens. In other words, it is data integration based on graphs. That is why you see graph database companies use the term knowledge graph instead of data integration.
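As a small, entirely hypothetical illustration of “relationships as first-class citizens”, here is a sketch in Python using rdflib. The namespace, entities and the placedBy relationship are invented for the example; the point is only that the edge between two entities is stored and queried just like the entities themselves.

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")  # hypothetical namespace for the example

    g = Graph()

    # Entities (the "bubbles") ...
    g.add((EX.acme, RDF.type, EX.Customer))
    g.add((EX.acme, RDFS.label, Literal("ACME Corp.")))
    g.add((EX.order42, RDF.type, EX.Order))

    # ... and the relationship (the "line") between them is just as much part of the data.
    g.add((EX.order42, EX.placedBy, EX.acme))

    # A query simply follows the relationship.
    for order, customer in g.subject_objects(EX.placedBy):
        print(order, "was placed by", customer)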

In academic circles, there is a “debate” about what the term “knowledge graph” means. As academics, we should of course always strive to have well-defined terms. Nevertheless, I find it ironic that academics are spending time debating the definition of a term that appeared in a (marketing) blog post seven years ago! I agree with Simeon Warner on this: “I care about putting more knowledge in my graph, instead of defining what is a knowledge graph”.

Whatever definition prevails, it should be open and inclusive.

On a final note, it is paramount that we remember our history in order to avoid reinventing the wheel. There is over half a century of research that has led us to what we are calling Knowledge Graphs today. If you are interested, please check out our upcoming ISWC 2019 tutorial “Knowledge Graphs: How did we get here? A Half Day Tutorial on the History of Knowledge Graph’s Main Ideas”.

Data Science Blog: Speaking of Knowledge Graphs: according to SEMANTiCS 2019 Research and Innovation Chair Philippe Cudre-Mauroux, the next generation of knowledge graphs will capture more detailed information. In which direction are you steering with Gra.fo?

Gra.fo is a collaborative modeling tool for knowledge graph schemas (i.e., ontologies), combined with Google Docs-style features such as real-time collaboration, comments, history and search.

Designing a knowledge graph schema is just the first step. You have to do something with it! The next step is to map the knowledge graph schema to underlying data sources in order to integrate data.

We are driving Gra.fo to also become a mapping management system. We recently released our first mapping features: you now have the ability to import existing R2RML mappings. The next step will be to create the mappings between relational databases and the schema entirely within Gra.fo. Furthermore, we will extend support to mappings from other types of sources.
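To make the mapping idea more tangible, here is a minimal, hypothetical Python sketch of what an R2RML-style mapping ultimately produces: triples that attach rows of a relational table to classes and properties of the knowledge graph schema. The table, namespace and property names are invented, and this is of course not Gra.fo’s actual mapping machinery; R2RML itself expresses the same correspondence declaratively.

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/schema/")  # hypothetical schema namespace

    # Rows as they might come back from a SQL query against a "customer" table.
    customer_rows = [
        {"id": 1, "name": "ACME Corp."},
        {"id": 2, "name": "Globex"},
    ]

    g = Graph()
    for row in customer_rows:
        subject = EX[f"customer/{row['id']}"]    # subject IRI derived from the primary key
        g.add((subject, RDF.type, EX.Customer))  # class from the knowledge graph schema
        g.add((subject, EX.name, Literal(row["name"])))

    print(g.serialize(format="turtle"))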

Finally, there are so many features that our users are requesting. We are working on those and will also offer an API in order to empower users to develop their own apps and features.

Data Science Blog: At Capsenta, you are changing the way enterprises model, govern and integrate data. In brief, how would you explain the benefits of using semantic and knowledge technologies to a CDO or CIO? Which clients have you served, and how did you help them?

Business users need to answer critical business questions quickly and accurately. However, the frequent bottleneck is the lack of understanding of the large and complex enterprise databases. Additionally, the IT experts who do understand are not always available. The ultimate goal is to empower business users to access the data in the way they think of their domain.

This is where Knowledge Graphs come into play.

At Capsenta, we use our Knowledge Graph technology to bridge this conceptualization gap between the complex and inscrutable data sources and the business intelligence and data analytics tools that domain experts use to answer critical business questions. Our goal is to deliver beautiful data so that business users and data scientists can run with the data.

We are helping large scale enterprises in e-commerce, oil & gas and life science industries to generate beautiful data.

Data Science Blog: What are reasons for which Knowledge Graphs should be part of any corporate strategy?

Graphs make it very easy for people to understand and express the complex relationships between concepts. Bubbles with lines between them (i.e. a graph!) are what domain experts draw on the whiteboard all the time. We have even had C-level executives look at a Knowledge Graph, immediately see how it expresses a portion of their business, and even offer suggestions for additional richness. Imagine that: C-level executives participating in an ontology engineering session because they understand the graph.

This is in sharp contrast to the data itself, which is almost always very difficult to understand and overwhelming in scope. Critical business value is available in a subset of this data. A Knowledge Graph bridges the conceptual gap between a critical portion of the inscrutable data itself and the business user’s view of their world.

It’s incredibly empowering when data that is clear and understood – what we call “beautiful data” – is available to the data workforce.

Data Science Blog: Data-driven process analyses require interdisciplinary knowledge. What advice would you give to a process manager who wants to familiarize themselves with the topic?

Domain experts and business users frequently use multiple words or phrases to mean the same thing, while a single phrase can mean different things to different people. They also speak a very different language from the IT database owners.

How can the business have clear, accurate answers when there’s inconsistency in what people mean and are thinking?

This is the social problem of getting everyone on the same page. We’ve seen Knowledge Graphs dramatically help with this problem. The exercise of getting people to agree upon what they mean and encoding it in an intuitive Knowledge Graph is very powerful.

The Knowledge Graph also brings the IT stakeholders into the process by clarifying exactly which data, or, more typically, which complex calculations over the data, give the actual, accurate value for each and every business concept and relationship expressed in the Knowledge Graph.

It is crucial to avoid boiling the ocean. That is why we have designed a pay-as-you-go methodology to start small and provide value as quickly and accurately as possible. Ideally, the team has available what we call a “Knowledge Engineer”. This is someone who can effectively speak with the business users/domain experts and also nerd out with the database folks.

About SEMANTiCS Conference

SEMANTiCS is an established knowledge hub where technology professionals, industry experts, researchers and decision makers can learn about new technologies, innovations and enterprise implementations in the fields of Linked Data and Semantic AI. Founded in 2005, SEMANTiCS is the only European conference at the intersection of research and industry.

This year’s event is hosted by the Semantic Web Company, FIZ Karlsruhe – Leibniz Institute for Information Infrastructure GmbH, Fachhochschule St. Pölten Forschungs GmbH, KILT Competence Center am Institut für Angewandte Informatik e.V. and Vrije Universiteit Amsterdam.