How To Perform High-Quality Data Science Job Assessments in 4 Steps

In 2009, Google Chief Economist Hal Varian told the McKinsey Quarterly that “the sexy job in the next 10 years will be statisticians.” At the time, it was hard to believe. But more than a decade later, there is no getting around the importance of data. Where oil once ruled the world, data is catching up quickly. That calls for more and better data scientists. In this article, we’ll explain how to find them.

Why is it so hard to find good data scientists?

The demand for data scientist roles has increased by 650 percent since 2012, and that number will continue to grow as the amount of data—and power it holds—grows steadily, too.

But unsurprisingly, the supply of data scientists on the job market hasn’t grown by 650 percent. Even though the job is a lot sexier—and better paid—than ten years ago, many employers are still struggling to fill their empty seats with talented data scientists. McKinsey predicted a shortage of 140,000 to 190,000 people with analytical skills in the U.S. alone by 2018, and even in 2022, good data scientists, data analysts, forecasting analysts, modelling analysts, and machine learning scientists are hard to find. Add to that another 1.5 million managers who will also need to at least understand how data analysis drives decision-making, and you can see how employers can end up in a bit of a pickle.

Why thoroughly screening data scientists is still crucial

Even though demand is growing much faster than the number of data scientists, companies can’t simply settle for the first data lover who’s available from Monday to Friday. It’s no longer the company with the most data that wins the game; the ones taking the lead are those able to get the most out of their data. They can pull valuable information that helps with decision-making and innovation out of even the smallest pieces of data—and they’re right, over and over again. This is why it’s vital to check whether applicants have the skills to derive valuable insights from data. You’ll be basing a lot of business decisions on what these data scientists tell you, so you’d best make sure they’re right.

But what makes someone a great data scientist? Some people turn their lives around, going from maths teacher to a 12-week data science boot camp or online data science course, and quickly get the hang of it—others graduate top of their class but aren’t confident enough as data scientists to inform your business on its next big move. The truth is that the skills a valuable data scientist needs develop over the years. It’s not just data literacy, hard skills, and a head for maths—they also need to be able to present and communicate their findings the right way.

Finding the right data scientists using a data science job assessment

So, you’ll want to choose your data scientists carefully—but how do you do that? Resumes and portfolios might seem impressive, but how do you actually find out whether someone has the skills you’re looking for, especially if you don’t have anyone on board yet who knows what to ask? The easiest and most effective approach is to screen candidates early in the process, using a data science test that’s been created by a real-life expert. This ensures that relevant questions are being asked, and it gives you a clear idea of who’s worth going through the hiring process with—and who isn’t. In this article, we’ll walk you through four steps that will help you set up a data science job assessment that is of real value to your hiring managers. Let’s get started.

Step 1: Choose the right platform

You could, of course, draw up an online survey and create a test in there to send out to all applicants, but these can be hard to ‘grade’—although you’ll develop a tremendous respect for teachers along the way. In many cases, it’s better to choose a dedicated platform that has tests readily available and will help you sift through the results effortlessly.

Before you start looking for platforms, make a list of absolute needs that you won’t compromise on. Ask yourself at least the following questions:

  • What types of tests are you looking for? Only hard skills, or also soft skills? If you need both, look for a platform that offers both—mixing and matching can be time-consuming.
  • Will there be tests readily available, or are you looking for a platform that allows you to create your own tests?
  • Does the platform have experience with companies like yours?
  • How are the tests presented to candidates, and how do you want the test results presented to your hiring managers?
  • And last but not least: what are you willing to spend on a job assessment platform? Do they charge per candidate, a flat fee, or would you prefer an annual subscription?

Once you’ve chosen a platform that is right for you, the fun can begin.

Step 2: Start with a hard skills assessment

For roles like data scientist, you’ll initially be focusing on whether candidates possess the right hard skills. Depending on the specific role, you can test core data science topics such as:

Statistics

You’re expecting your future data scientist to be fluent in statistics. Depending on the level you’re hiring at, you might want to throw in a few questions that quickly test how fast someone can find their way through a tangle of statistics, and whether they can interpret them the right way.

Machine learning

For more senior roles especially, machine learning is becoming increasingly important in the world of data science. If this is the case for the role you’re hiring for, test whether someone knows how to prepare data, feed it into machine learning models, and build awesome products with the results.

Neural networks

A big part of data science is knowing how to work with neural networks. Neural networks are computing systems, loosely inspired by human and animal brains, that learn to solve problems from examples rather than explicit rules. It’s incredibly helpful if your data scientist knows how to use them.

Deep learning

Deep learning is a subfield of machine learning that can be necessary in specific data science roles. It works with layered neural networks that come closer to the way the human brain makes decisions, so it will require its own set of test questions.

Collecting data

All that data has to come from somewhere, right? Your data scientists should not only be able to read and process data, but also know where and how to get the most valuable input. For this, include some questions about data extraction, data transformation, and data loading. This can also include tests on Excel and querying languages like SQL; a short hands-on exercise like the sketch below works well.
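
To make this concrete, here is one possible format for such a hands-on exercise. It is only an illustrative sketch: the file, column, and table names are made up, and the task simply asks the candidate to extract raw data, transform it, and load it into a small database with Python, pandas, and SQLite.

```python
# Hypothetical screening task: "Load the raw orders file, clean it up,
# and load the result into a reporting table." (All names are made up.)
import sqlite3

import pandas as pd

# Extract: read the raw CSV export
orders = pd.read_csv("raw_orders.csv")

# Transform: parse dates, drop duplicate orders, keep only completed ones
orders["order_date"] = pd.to_datetime(orders["order_date"])
orders = orders.drop_duplicates(subset="order_id")
orders = orders[orders["status"] == "completed"]

# Load: write the cleaned data into a SQLite table the candidate can then query with SQL
with sqlite3.connect("warehouse.db") as conn:
    orders.to_sql("orders_clean", conn, if_exists="replace", index=False)
```

How a candidate structures, documents, and sanity-checks a small script like this often tells you more than a multiple-choice question would.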

Storing data

Databases should look nothing like the average teenage bedroom: they should be nice and tidy, making it easier to extract valuable information from them. And since data isn’t just numbers, but can be anything from video to reviews, it’s crucial that you hire a data scientist who knows how to store it correctly.

Analyzing and modeling data

Data wrangling, data exploration, analysis, and modeling require an in-depth understanding of math and programming, but luckily, even data scientists get some help.

Data scientists use analytical tools like Apache Spark, D3.js, Python, and many, many more to analyze all that data. If you’re using a specific one in your company and want your data scientists to hit the ground running, quickly test whether they’re actually able to use the tools they list on their resume.

Visualizing and presenting data

At the end of the day, data scientists will have to be able to communicate their findings to other departments with people who are less data-savvy. For this, they often use tools that help them visualize data to explain it in a more easy-to-grasp way.

Test if your next data scientist is able to do that with a quick check on their skills in tools like Tableau, PowerBI, Plotly, Bokeh, or whichever one you use.

Step 3: Continue with a soft skill assessment

Your friendly neighborhood data scientist should not only be a math genius, they should possess the right soft skills too. If they’re impossible to work with, you won’t reap the benefits of their skill set. Productivity will suffer, and team morale might also take a hit. Here are some soft skills to test your candidates on:

  • Business-oriented: ultimately, your data scientist will be fueling your decision-making process. This means they’ll have to have a good head for business, on top of simply understanding the numbers.
  • Communication skills: sure, everyone in your company preferably has some of these, but since data scientists play such an important role in decision-making, you’ll want them to be able to express themselves well—and listen to what you’re asking from them.
  • Teamwork: your data scientists shouldn’t be on a little island somewhere in the company. The more they integrate with other departments, the easier it is for them to determine what your business needs from them.
  • Critical thinking skills: this one’s pretty self-explanatory, but the more critical your data scientist, the more reassurance you’ll have that data is correctly interpreted.
  • Creativity: data is less dry than it seems. From data storage to finding connections and problem-solving: it all requires some form of creative thinking.

Step 4: Follow up on the test results

If you want to make the most of your data science job assessment, it shouldn’t just be a test to see who goes through to the next round. For the candidates who ‘pass’, you can customize the questions in their follow-up interview based on the strengths and weaknesses they showed in the test. The test they took says a lot, but at the same time, it’s just a snapshot. Did they score remarkably high on certain skills? Ask them how they got to be so experienced in that area and which projects contributed most to it.

Did you notice that they struggled with questions about X? Ask how they are planning to improve on that and how they make sure this doesn’t impact the quality of their work for the time being—are they calling in help from a peer, or do they simply take more time to figure things out?

These types of follow-up questions steer a job interview in a much more real-life direction: it’s not a generic set of questions that any company could ask any employee, but a real conversation between you and the candidate, in which you can evaluate if they fit in the future of the company—and if your company fits in theirs.

Ready to start the hiring process?

With these tips, we’re sure you’ll get some extra reassurance that your next hire will be a great fit—not just based on their previous experience and a couple of interviews. If you want, you can keep reading about data science jobs—or simply start hiring. Good luck!

Mainframe Modernization: Making It Happen

In the fast-paced world of technology and business, it can be hard to keep up with what’s new. What’s new today can be obsolete in a few weeks, and adapting to this ever-changing landscape can become a challenge if an organization isn’t well prepared or equipped. Modernization of systems doesn’t necessarily mean transitioning to an entirely new system or platform; often, all it takes is actual modernization of existing tools to help them adapt to new business demands and requirements.

The mainframe is one system that has stood the test of time. A number of naysayers tout the system as “legacy” or obsolete, but the fact that mainframes handle 68% of the world’s production IT workloads indicates otherwise. Mainframes are proof that the latest isn’t always the greatest, standing firm as one of the foundations of business systems in today’s most successful businesses around the world. What some don’t realize is that the race toward digital transformation is not reliant on the system or platform an organization has in place; digital transformation initiatives rise and fall depending on how they approach data. Regardless of the platform used, data analysts who work with irrelevant or stale data are prone to false or misleading results. Access to real-time data is key: data gathered days or hours—even minutes—ago is no longer an accurate representation of the current situation. This can lead to an organization acting on miscalculations and opportunities that no longer exist. Actionable insights need to come from real-time data to ensure that your organization can make sound business decisions in a timely manner.

The Old vs. the New

Conventional methodologies have kept mainframe data and real-time data separate due to issues with accessibility. Most businesses traditionally use Extract, Transform, and Load (ETL) processes for data analysis, a logistically complex and time-consuming process that’s prone to errors and stale data because it’s performed only periodically. This can lead to hours or even weeks of delay that’s simply unacceptable in today’s always-connected, always-on digital business landscape. Today’s businesses depend largely on real-time business intelligence—and access to it—to get a competitive edge.

In light of this perceived separation between mainframes and real-time data analytics, data scientists have found that the creation of analytic models can be too slow at times due to the conventional process of offloading data from the mainframe to other platforms for analysis. Organizations should move away from ETL processes and find ways to make real-time data analytics from the mainframe quicker and more efficient for their business. Mainframe modernization is key in making mainframe systems work with modern solutions because it allows for data virtualization, integrating all disparate enterprise data into a logical data layer. This layer manages the unified data and provides centralized governance while delivering the required data in real-time to business users.

Depending on the industry, mainframe modernization can optimize key business processes like order processing, payment gateways, and internal business operations queries. Mainframes are known for high-volume transaction processing, and these transactions can make or break a business. Managing them in real time helps organizations battle fraud and manage business risks as they arise, or even before they do. The data gathered can also help paint a more accurate picture of who a company’s customers are, allowing it to better plan resources and come up with more personalized initiatives.

Making IT Happen

Mainframe modernization is a major undertaking that presents a host of options for every organization. These options will vary depending on a number of factors, including business size, tenure, and industry. The following, however, are a few of the key considerations in modernization.

  • Look for quick wins
    As all businesses know by now, time is of the essence in every undertaking, even mainframe modernization. Its success is dependent on how quickly it can deliver the desired results.
  • Automate migration to avoid disruption
    Accelerating modernization efforts means leveraging modern tools and APIs. The platforms available today are designed to minimize the effects of the modernization process, if not avoid disruption completely.
  • Focus on total cost of ownership (TCO)
    It’s a mistake to take the initial cost of modernization at face value. A more accurate view of costs involves a focus on the total cost of ownership. Calculating the TCO, or purchase costs plus operating costs, will help minimize it even before modernization initiatives commence.
  • Don’t just leave everything to IT
    The modern IT team is one that includes everyone in the organization. Mainframe modernization is more a business initiative than an IT concern, and as such, should involve decision makers and business leaders. System integrations and updates remain the responsibility of IT specialists, but choosing the appropriate modernization approach and ensuring that the initiative succeeds should be a responsibility shared by the entire organization.
  • Create business value
    Mainframe modernization isn’t simply the implementation of technology upgrades or migration to a new system; it should also be an opportunity to combine the old with the new. Improve existing business processes or create new ones accordingly while capturing institutional knowledge from mainframe systems to gain a competitive edge.

Options abound when it comes to mainframe modernization, but that doesn’t mean that you should apply them all or choose the latest and greatest. Choosing the right approach to modernization entails re-examining your business and its goals and deciding which solution will take you there—and take you there fast. There exists an “imaginary” gap between digital innovators and mainframes because of the challenges and costs in data accessibility and system availability. The goal of mainframe modernization is to bridge this gap in the best, and fastest, way possible.

Business Intelligence – 5 Tips for better Reporting & Visualization

Data and BI Analysts often concentrate on learning a BI tool, but the main thing to learn is how to create good data visualizations!

BI reporting has become an indispensable part of any company. In Business Intelligence, companies sometimes have to choose between tools such as PowerBI, QlikSense, Tableau, MicroStrategy, Looker or DataStudio (among others). Even if each of these tools has its own strengths and weaknesses, good reporting depends less on the respective tool and much more on the analyst and their skills in structured and appropriate visualization and text design.

Based on our experience at DATANOMIQ and the book “Storytelling with data” (see footnote in the pdf), we have created an infographic that conveys five tips for better design of BI reports – with self-reflective clarification.

Direct link to the PDF: https://data-science-blog.com/en/wp-content/uploads/sites/4/2021/11/Infographic_Data_Visualization_Infographic_DATANOMIQ.pdf

About DATANOMIQ

DATANOMIQ is a platform-independent consulting and service partner for Business Intelligence and Data Science. Through Big Data and Artificial Intelligence, we open up new possibilities across all areas of the value chain. We rely on the best minds and the most comprehensive method and technology portfolio for using data to optimize business.

Contact

DATANOMIQ GmbH
Franklinstr. 11
D-10587 Berlin
I: www.datanomiq.de
E: info@datanomiq.de

How Microsoft Azure Is Impacting Financial Companies

Microsoft Azure has taken a large chunk of the cloud marketplace, transforming companies with the speed and security of the cloud. Over the years, Microsoft has used Azure to cushion companies against risk, deal with fraud, and differentiate their customer experience.

With Microsoft Cloud App Security, customers experience 75% automatic threat elimination because of increased visibility and automated threat protection. With all these and more amazing benefits of using Azure, its market share is bound to increase even more over the coming years.

Image Source: https://www.flickr.com/photos/91869083@N05/8493934839/in/photolist-dWzCUp-efhrzk-29k3oWh-9zALPj-9zALPh-9aXgpG-91z6Eo-6pABZ8-2htjpWP-Wrr2UG-aNxVLK-4z3omV-2kEyM6k-9GvMhf-Rf9aM7-4z7CQJ-aS8oqx-ekXUoo-9aU3wz-9aXjnw-aS8HTZ-LPgq61-2kjSEYf-2hamKDd-2h6JfeX-2h7gxoF-Fx6eAM-pQ6Ken-fbNckF-2iMRZSS-2hTUA6v-b8ayve-b8awer-dZwwJ7-2i3mmqV-e1dGQz-2dZwNg6-b8aoSH-b8arkc-6ztgDn-b8asCZ-efwZLM-b8atnM-b8attr-2kGQugq-2iowpX5-6zbcAC-dAQCVY-b8aoq8-517Jxq

Financial companies have not been left behind by the Azure bandwagon. The financial industry is using Microsoft Azure to enhance its core functions: investing money by making informed decisions, and minimizing risk while maximizing returns.

Azure facilitates these core functions by helping with the storage of huge amounts of data—some dating back decades—as well as data retrieval and data security.

It also helps financial companies to keep up with regulatory compliance.

Microsoft Azure is not the only cloud services provider. But here’s why it is the most outstanding when it comes to helping financial companies achieve their business goals.

Azure Offers Hybrid and Multi-Cloud Computing for Financial Companies

The financial services industry is extremely dynamic. Organizations offering financial services have to constantly test the market and come up with new and innovative products and services. 

They are also often under pressure to extend their services across borders. Remember they have to do all of this while at the same time managing their existing customers, containing their risk, and dealing with fraud.

Financial regulations also keep changing. As financial companies increasingly embrace new technology for their services—including intelligent cloud computing—they have to comply with industry regulations. They cannot afford to leave loopholes as they embark on their journey with the cloud.

The financial services industry is highly competitive and keeps up with the times. To keep pace, these companies have had to turn to dynamic hybrid, multi-cloud, and public cloud strategies.

This is how a hybrid cloud model works: it enables existing on-premises applications to be extended through a connection to the public cloud.

This allows financial companies to enjoy the speed, elasticity, and scale of the public cloud without necessarily having to remodel their entire applications. These organizations are afforded the flexibility to decide which parts of their applications remain in an existing data center and which reside in the cloud.

Cloud computing with Azure allows financial organizations to operate more efficiently by providing end-to-end protection to information, allowing the digitization of financial services, and providing data security. 

Data security is particularly important to financial firms because they are often targeted by fraudsters and cyber threats. They, therefore, need to protect crucial information which they achieve by authenticating their data centers using Azure.

Here’s why financial companies cannot think of doing without Azure’s hybrid cloud computing even for just a day.

Photo by Windows on Unsplash

  • The ability to expand their geographic reach

Azure enables financial companies to establish data centers in new locations to meet globally growing demand. This allows them to open and explore new markets. They can then use Azure DevOps pipelines to maintain their data factories and keep everything consistent.

  • Consistent Infrastructure management

The hybrid cloud model promotes a consistent approach to infrastructure management across all locations, whether it is on-premises, public cloud, or the edge.

  • Increased Elasticity

Financial firms and banks utilizing Azure services can respond with great agility to transactional changes or changes in demand by provisioning or de-provisioning as the situation at hand demands. 

In cases where the organization requires high computation such as complex risk modeling, a hybrid strategy allows it to expand its capacity beyond its data center without overwhelming its servers.

  • Flexibility

A hybrid strategy allows financial organizations to choose cloud services that fall within their budget, match their needs, and offer the features they require.

  • Data security and enhanced regulatory compliance

Hybrid and multi-cloud strategies are a superb alternative to strictly on-premises strategies when one considers resiliency, data portability, and data security.

  • Reduced Capital Expenditure (CapEx)

Managing on-premises infrastructure is expensive. Financial companies utilizing Azure do not need to spend large amounts of money setting up and managing that infrastructure.

With the increased elasticity of the hybrid system, financial organizations only pay for the resources they actually use, at a relatively lower cost.

Financial Organizations Have Access to an Analytics Platform

As we mentioned earlier, financial companies have the core function of making financial decisions in order to invest money and gain maximum returns at the least possible risk. 

Having been entrusted with their customers’ assets, the best way to ensure success in making profits is by using an analytics system.

Getting the form of analytics that helps solve this investment problem is the kind of headache that does not go away with a tablet of ibuprofen and a glass of water: integrating data is not an easy task. Besides, building a custom analytics solution from scratch is quite expensive.

Luckily for financial companies, Azure has a dedicated analytics platform for the financial services industry. It is custom-made just for these types of organizations. 

Their system is quite intuitive and easy to use. Companies not only get to save the resources they would have otherwise used to build a custom solution, but they get to learn about their investment risks and get instant results at cloud speed. 

They can mitigate against negatively impactful market occurrences and gain profits even when operating in adverse market conditions.

Image by Headway on Unsplash

Financial Companies Get Advanced Data Management

Good analytics goes hand-in-hand with a great data management system. Financial companies need to have good data, create an organized data warehouse, and have a secure data storage system.

In addition to storing your data, Microsoft Azure ensures your storage can be optimized to support advanced applications, for example, machine learning and forecasting. 

Azure even allows you to compress and store documents for long periods of time when you write the data to Microsoft Azure Blob Storage. These documents can be retrieved anytime when the need arises for auditors’, regulators’, and lawyers’ perusal. 

Conclusion

Microsoft has over time managed to gain the trust of many industries, the financial services industry inclusive. Using its cloud computing giant, Azure, it has empowered these companies to carry out their functions efficiently and at the lowest cost and risk possible.

Azure’s hybrid cloud computing strategy has made financial operations flexible, opened doors for financial companies to establish their services in multiple locations, and provided them with consistent infrastructure management, among many other benefits.

With their futuristic model and commitment to growth, it’s only prudent to assume that Microsoft Azure will continue carrying the mantle as the best cloud services provider in the financial services industry.

How to make a toy English-German translator with multi-head attention heat maps: the overall architecture of Transformer

If you have been patient enough to read the former articles of this article series, Instructions on Transformer for people outside NLP field, but with examples of NLP, you should already have learned a great deal about the Transformer model, and I hope you have gained a solid foundation on the theoretical side of this algorithm.

This article is going to focus more on the practical implementation of a Transformer model. We use the code from the official Tensorflow tutorial. It is maintained well by Google, and I think it is best practice to use widely known code.

The figure below shows what I have explained in the articles so far. Depending on your level of understanding, you can go back to my former articles. If you are familiar with NLP with deep learning, you can start with the third article.

1 The datasets

I think this article series appears to be on NLP, and I do believe that learning Transformer through NLP examples is very effective. But I cannot delve into effective techniques for processing a corpus in each language. Thus we are going to use a library named BPEmb. This library enables you to encode sentences in various languages into lists of integers, and conversely to decode those lists of integers back into the language, as sketched below. Thanks to this library, we do not have to simplify alphabets ourselves, for example by getting rid of umlauts.
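
As a rough sketch of how BPEmb is used (the parameters below are assumptions for illustration: a 10,000-subword vocabulary and 100-dimensional embeddings, to which the <start> and <end> tokens would later be added to reach the 10002 mentioned further down):

```python
# A minimal sketch of encoding/decoding with BPEmb (assumed settings).
from bpemb import BPEmb

bpemb_en = BPEmb(lang="en", vs=10000, dim=100)  # English subword model
bpemb_de = BPEmb(lang="de", vs=10000, dim=100)  # German subword model

ids = bpemb_de.encode_ids("Anthony Hopkins hat Michael Bay bewundert.")
print(ids)                       # a list of integers (subword token ids)
print(bpemb_de.decode_ids(ids))  # decoded back to (lower-cased) text
```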

*Actually, I am studying in the computer vision field, so my code might look elementary to those in the NLP field.

The official Tensorflow tutorial builds a Portuguese-English translator, but in this article we are going to make an English-German translator. Basically, only the code below is my original. As I said, this is not an article on NLP, so all you have to know is that at every iteration you get a batch of (64, 41) sized tensors as the source sentences, and a batch of (64, 42) sized tensors as the corresponding target sentences. 41 and 42 are respectively the maximum lengths of the input and target sentences, and when sentences are shorter than that, the remaining positions are zero padded, as you can see in the code below.

*If you just replace datasets and modules for encoding, you can make translators of other pairs of languages.

We are going to train a seq2seq-like Transformer model that converts those lists of integers, thus a mapping from a vector to another vector. But each word, or integer, is encoded as an embedding vector, so virtually the Transformer model is going to learn a mapping from one sequence of data to another. Let’s formulate this in a slightly more mathematical way: we get a pair of sequences \boldsymbol{X} = (\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau _x)}) and \boldsymbol{Y} = (\boldsymbol{y}^{(1)}, \dots, \boldsymbol{y}^{(\tau _y)}), where \boldsymbol{x}^{(t)} \in \mathbb{R}^{|\mathcal{V}_{\mathcal{X}}|}, \boldsymbol{y}^{(t)} \in \mathbb{R}^{|\mathcal{V}_{\mathcal{Y}}|}, respectively from the English and German corpus, and we learn a mapping f: \boldsymbol{X} \to \boldsymbol{Y}.

*In this implementation the vocabulary sizes are both 10002. Thus |\mathcal{V}_{\mathcal{X}}|=|\mathcal{V}_{\mathcal{Y}}|=10002

2 The whole architecture

This article series has covered most of the components of the Transformer model, but you might not yet understand how seq2seq-like models can be constructed with them. It is very effective to understand how a Transformer is constructed by actually reading or writing code, and in this article we are finally going to construct the whole architecture of a Transformer translator, following the Tensorflow official tutorial. At the end of this article, you will be able to make a toy English-German translator.

The implementation is mainly composed of four classes: EncoderLayer(), Encoder(), DecoderLayer(), and Decoder(). The inclusion relations of the classes are displayed in the figure below.

To be more exact, in a seq2seq-like model with Transformer, the encoder and the decoder are connected like in the figure below. The encoder part keeps converting input sentences in the original language through N layers. The decoder part also keeps converting the inputs in the target language, also through N layers, but it receives the output of the final layer of the encoder at every layer.

You can see how the Encoder() class and the Decoder() class are combined into the Transformer in the code below. If you have used Tensorflow or Pytorch to some extent, the code should not be that hard to read.
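
The snippet below is a condensed sketch of that combining class. It follows the structure of the official tutorial rather than reproducing it verbatim, and it assumes that the Encoder and Decoder classes discussed in the following sections are already defined.

```python
import tensorflow as tf


class Transformer(tf.keras.Model):
    def __init__(self, num_layers, d_model, num_heads, dff,
                 input_vocab_size, target_vocab_size, pe_input, pe_target, rate=0.1):
        super().__init__()
        # Encoder() and Decoder() are assumed to be defined as in the tutorial
        self.encoder = Encoder(num_layers, d_model, num_heads, dff,
                               input_vocab_size, pe_input, rate)
        self.decoder = Decoder(num_layers, d_model, num_heads, dff,
                               target_vocab_size, pe_target, rate)
        # final projection from d_model back onto the target vocabulary
        self.final_layer = tf.keras.layers.Dense(target_vocab_size)

    def call(self, inp, tar, training,
             enc_padding_mask, look_ahead_mask, dec_padding_mask):
        enc_output = self.encoder(inp, training, enc_padding_mask)         # (batch, inp_len, d_model)
        dec_output, attention_weights = self.decoder(
            tar, enc_output, training, look_ahead_mask, dec_padding_mask)  # (batch, tar_len, d_model)
        final_output = self.final_layer(dec_output)                        # (batch, tar_len, target_vocab_size)
        return final_output, attention_weights
```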

3 The encoder

*From now on, “sentences” do not mean only the input tokens in natural language, but also the reweighted and concatenated “values,” which I repeatedly explained in the former articles. By the end of this section, you will see that Transformer repeatedly converts sentences layer by layer, retaining the shape of the original sentence.

I explained the multi-head attention mechanism precisely in the third article, and I explained positional encoding and masked multi-head attention in the last article. Thus if you have read them and have ever written some code in Tensorflow or Pytorch, I think the Transformer code in the official Tensorflow tutorial is not so hard to read. What is more, you do not use CNNs or RNNs in this implementation; basically all you need is linear transformations. First of all, let’s see how the EncoderLayer() and the Encoder() classes are implemented in the code below.
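
As an orientation, here is a condensed sketch of EncoderLayer() in the spirit of the tutorial; it assumes the MultiHeadAttention class from the earlier articles and the point_wise_feed_forward_network() function discussed in the next paragraph.

```python
import tensorflow as tf


class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads, dff, rate=0.1):
        super().__init__()
        self.mha = MultiHeadAttention(d_model, num_heads)         # as in the former articles
        self.ffn = point_wise_feed_forward_network(d_model, dff)  # see the next section
        self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = tf.keras.layers.Dropout(rate)
        self.dropout2 = tf.keras.layers.Dropout(rate)

    def call(self, x, training, mask):
        # multi-head self-attention: queries, keys, and values are all x
        attn_output, _ = self.mha(x, x, x, mask)
        out1 = self.layernorm1(x + self.dropout1(attn_output, training=training))
        # position-wise feed-forward network, again with a residual connection
        ffn_output = self.ffn(out1)
        # the output keeps the shape (batch_size, seq_len, d_model)
        return self.layernorm2(out1 + self.dropout2(ffn_output, training=training))
```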

You might be confused about what “Feed Forward” means in this article or the original paper on Transformer. The original paper says this layer is calculated as FFN(x) = max(0, xW_1 + b_1)W_2 + b_2. In short, you stack two fully connected layers and activate the first with a ReLU function. Let’s see how the point_wise_feed_forward_network() function works in the implementation with some simple code. As you can see from the number of parameters in each layer of the position-wise feed-forward network, the network does not depend on the length of the sentences.
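
A sketch of that function, reconstructed in the tutorial’s style (the d_model=512 and dff=2048 values below are the defaults of the original paper, used here only for illustration):

```python
import tensorflow as tf


def point_wise_feed_forward_network(d_model, dff):
    # ReLU(x W1 + b1) W2 + b2, applied identically at every position
    return tf.keras.Sequential([
        tf.keras.layers.Dense(dff, activation='relu'),  # (batch, seq_len, dff)
        tf.keras.layers.Dense(d_model)                  # (batch, seq_len, d_model)
    ])


ffn = point_wise_feed_forward_network(d_model=512, dff=2048)
print(ffn(tf.random.uniform((64, 41, 512))).shape)  # (64, 41, 512)
print(ffn.count_params())  # depends only on d_model and dff, not on the sentence length
```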

From the number of parameters of the position-wise feed-forward networks, you can see that you share the same parameters over all the positions of the sentences. That means in the figure above, you use the same densely connected layers at all the positions within a single layer. But you also have to keep in mind that the parameters of the position-wise feed-forward networks change from layer to layer. That is also true of the “Layer” parts of the Transformer model, including the output part of the decoder: there are no learnable parameters that span different positions of tokens. These facts lead to one very important feature of Transformer: the number of parameters does not depend on the length of the input or target sentences. You can offset the influence of the length of sentences with the multi-head attention mechanisms. Also in the decoder part, you can keep the shape of sentences, or reweighted values, layer by layer, which is expected to enhance the calculation efficiency of Transformer models.

4 The decoder

The structures of the DecoderLayer() and the Decoder() classes are quite similar to those of the EncoderLayer() and the Encoder() classes, so if you understood the last section, you will not find it hard to understand the code below. What you additionally have to care about in this section is the inter-language multi-head attention mechanism. In the third article I repeatedly explained the multi-head self-attention mechanism, taking the input sentence “Anthony Hopkins admired Michael Bay as a great director.” as an example. However, as I explained in the second article, in attention mechanisms you usually compare sentences with the same meaning in two languages. Thus the decoder part of the Transformer model has not only a masked multi-head self-attention mechanism over the target sentence, but also an inter-language multi-head attention mechanism. That means, in the case of translating from English to German, you compare the sentence “Anthony Hopkins hat Michael Bay als einen großartigen Regisseur bewundert.” with the sentence itself in the masked multi-head attention mechanism (just as I repeatedly explained in the third article). On the other hand, you compare “Anthony Hopkins hat Michael Bay als einen großartigen Regisseur bewundert.” with “Anthony Hopkins admired Michael Bay as a great director.” in the inter-language multi-head attention mechanism (just as you can see in the figure above).

*“Inter-language multi-head attention mechanism” is my own term for it.

I briefly mentioned how you calculate the inter-language multi-head attention mechanism at the end of the third article, with some simple code, but let’s see that again, with more straightforward figures. If you understood my explanation of the multi-head attention mechanism in the third article, the inter-language multi-head attention mechanism is nothing difficult to understand. In the multi-head attention mechanism in the encoder layers, the “queries”, “keys”, and “values” all come from the same sentence in English, but in the case of the inter-language one, only the “keys” and “values” come from the original sentence, and the “queries” come from the target sentence. You compare the “queries” in German with the “keys” in the original sentence in English, and you reweight the sentence in English. You use the reweighted English sentence in the decoder part, and you do not need a look-ahead mask in this inter-language multi-head attention mechanism.

Just as with multi-head self-attention, you can calculate the inter-language multi-head attention map as softmax(\frac{\boldsymbol{Q} \boldsymbol{K}^T}{\sqrt{d_k}}). In the example above, the resulting multi-head attention map is a 10 \times 9 matrix, like in the figure below.

Once you keep the points above in mind, the implementation of the decoder part should not be that hard.
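
For reference, here is a condensed sketch of DecoderLayer() along the lines of the tutorial, again assuming the MultiHeadAttention class and point_wise_feed_forward_network() from before; the second attention block is the “inter-language” one.

```python
import tensorflow as tf


class DecoderLayer(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads, dff, rate=0.1):
        super().__init__()
        self.mha1 = MultiHeadAttention(d_model, num_heads)  # masked self-attention (German)
        self.mha2 = MultiHeadAttention(d_model, num_heads)  # "inter-language" attention
        self.ffn = point_wise_feed_forward_network(d_model, dff)
        self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = tf.keras.layers.Dropout(rate)
        self.dropout2 = tf.keras.layers.Dropout(rate)
        self.dropout3 = tf.keras.layers.Dropout(rate)

    def call(self, x, enc_output, training, look_ahead_mask, padding_mask):
        # 1) masked multi-head self-attention over the target sentence
        attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)
        out1 = self.layernorm1(x + self.dropout1(attn1, training=training))
        # 2) "inter-language" attention: keys and values come from the encoder output,
        #    queries come from the target sentence processed so far
        #    (the tutorial's MultiHeadAttention takes (values, keys, queries, mask))
        attn2, attn_weights_block2 = self.mha2(enc_output, enc_output, out1, padding_mask)
        out2 = self.layernorm2(out1 + self.dropout2(attn2, training=training))
        # 3) position-wise feed-forward network
        ffn_output = self.ffn(out2)
        out3 = self.layernorm3(out2 + self.dropout3(ffn_output, training=training))
        return out3, attn_weights_block1, attn_weights_block2
```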

5 Masking tokens in practice

I explained the masked multi-head attention mechanism in the last article, and the idea itself is not so difficult. However, in practice it is implemented in a slightly tricky way. You might have realized that the size of the input matrices is fixed so that it fits the longest sentence. That means, when the maximum length of the input sentences is 41, even if the sentences in a batch have fewer than 41 tokens, you sample a (64, 41) sized tensor as a batch every time (64 is the batch size). Let “Anthony Hopkins admired Michael Bay as a great director.”, which has 9 tokens in total, be an input. We have been considering (9, 9) sized attention maps or (10, 9) sized attention maps, but in practice you use (41, 41) or (42, 41) sized ones. When it comes to calculating self-attention in the encoder part, you zero pad the self-attention maps with encoder padding masks, like in the figure below. The black dots denote the zero valued elements.

As you can see in the code below, encoder padding masks are quite simple. You just multiply the padding masks by -1e9, add them to the attention logits, and apply a softmax function. Thereby you zero out the columns at the positions where you added -1e9.
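
A sketch of the two pieces involved, building the padding mask and applying it to the attention logits, reconstructed in the tutorial’s style:

```python
import tensorflow as tf


def create_padding_mask(seq):
    # 1.0 where the token id is 0 (a padded position), 0.0 elsewhere
    mask = tf.cast(tf.math.equal(seq, 0), tf.float32)
    # extra dimensions so the mask broadcasts over heads and query positions:
    # (batch_size, 1, 1, seq_len)
    return mask[:, tf.newaxis, tf.newaxis, :]


# inside scaled dot-product attention the mask is applied to the logits like this:
#   if mask is not None:
#       scaled_attention_logits += (mask * -1e9)
#   attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)
```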

I explained the look-ahead mask in the last article, and in practice you combine normal padding masks and look-ahead masks like in the figure below. You can see that you can compare each token only with its previous tokens. For example, you can compare “als” only with “Anthony”, “Hopkins”, “hat”, “Michael”, “Bay”, “als”, not with “einen”, “großartigen”, “Regisseur” or “bewundert.”
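
In code, the look-ahead mask and its combination with the target padding mask can be sketched as follows (create_padding_mask() is the function from the sketch above, and the (64, 42) batch of target ids is just a random stand-in):

```python
import tensorflow as tf


def create_look_ahead_mask(size):
    # upper-triangular matrix of ones: position i must not attend to positions > i
    return 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)


tar = tf.random.uniform((64, 42), maxval=10002, dtype=tf.int32)  # stand-in target batch
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
# a position is blocked if it is either in the "future" or a padded position
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
```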

Decoder padding masks are almost the same as the encoder ones. You have to keep in mind that you zero pad the positions that exceed the length of the source input sentence.

6 Decoding process

In the last section we saw that we can zero-pad columns, but the rows are still redundant. However, I guess that is not a big problem, because you decode the final output along the rows of the attention maps. Once you decode the <end> token, you stop decoding. The redundant rows do not affect the decoding anymore.

This decoding process is similar to that of seq2seq models with RNNs, and that is why you need to hide future tokens in the multi-head self-attention mechanism in the decoder. You share the same densely connected layers followed by a softmax function at all the time steps of decoding. Transformer has to learn how to decode based only on the words that have appeared so far.

According to the original paper, “We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.” After these explanations, I think you understand this part more clearly.

The code below is for the decoding part. You can see that you first start decoding with an output sentence composed only of <start>, and then you decide which word to decode, step by step.
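
The sketch below is a condensed greedy decoding loop in the spirit of the tutorial’s evaluate() function. The <start>/<end> ids (10000 and 10001), the create_masks() helper, and the trained transformer and BPEmb objects are taken as given from the rest of the implementation, so treat this as an outline rather than drop-in code.

```python
import tensorflow as tf

START_ID, END_ID = 10000, 10001  # assumed ids appended beyond the 10,000 subwords


def translate_greedy(sentence, transformer, bpemb_en, max_length=41):
    # encode the English input and wrap it with <start>/<end>
    encoder_input = tf.expand_dims(
        [START_ID] + bpemb_en.encode_ids(sentence) + [END_ID], 0)
    # the decoder input starts with only the <start> token
    output = tf.expand_dims([START_ID], 0)

    for _ in range(max_length + 1):
        enc_padding_mask, combined_mask, dec_padding_mask = create_masks(encoder_input, output)
        predictions, attention_weights = transformer(
            encoder_input, output, False,
            enc_padding_mask, combined_mask, dec_padding_mask)
        # take the most likely token at the newest position only
        predicted_id = tf.cast(tf.argmax(predictions[:, -1:, :], axis=-1), tf.int32)
        if predicted_id.numpy()[0, 0] == END_ID:  # <end> reached: stop decoding
            break
        output = tf.concat([output, predicted_id], axis=-1)  # feed it back in

    return tf.squeeze(output, axis=0), attention_weights
```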

*It is easy to imagine that this decoding procedure is not the best. In reality you have to consider several decoding possibilities, and you can do that with beam search decoding.

After training this English-German translator for 30 epochs, you can translate relatively simple English sentences into German. I display some results below, with heat maps of the multi-head attention. Each colored attention map corresponds to one head of the multi-head attention. The examples below are all from the fourth (last) layer, but you can visualize the maps in any layer. When it comes to the look-ahead attention, naturally only the lower triangular part of the maps is activated.

This article series has not covered some important topics in machine translation, for example how to calculate translation errors. Actually there are many other fascinating topics related to machine translation, for example beam search decoding, which considers several decoding possibilities, or how to handle proper nouns such as “Anthony” or “Hopkins.” But this article series is not on NLP. I hope you could effectively learn the architecture of the Transformer model with the language examples so far. I have also not explained some details of training the network, but I will not cover that because I think it depends on the task. The next article is going to be the last one of this series, and I hope you can see how Transformer is applied in computer vision fields, in a more “linguistic” manner.

But anyway, we have finally made it. In this article series we have seen that one of the earliest computers was invented to break Enigma, and today we can quickly make a more or less accurate translator on our desks. With Transformer models, you can even translate deadly funny jokes into German.

*You can train a translator with this code.

*After training a translator, you can translate English sentences into German with this code.

[References]

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, “Attention Is All You Need” (2017)

[2] “Transformer model for language understanding,” Tensorflow Core
https://www.tensorflow.org/overview

[3] Jay Alammar, “The Illustrated Transformer,”
http://jalammar.github.io/illustrated-transformer/

[4] “Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 14 – Transformers and Self-Attention,” stanfordonline, (2019)
https://www.youtube.com/watch?v=5vcj8kSwBCY

[5] Tsuboi Yuuta, Unno Yuuya, Suzuki Jun, “Machine Learning Professional Series: Natural Language Processing with Deep Learning,” (2017), pp. 91-94
坪井祐太、海野裕也、鈴木潤 著, 「機械学習プロフェッショナルシリーズ 深層学習による自然言語処理」, (2017), pp. 191-193

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward but as precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

5 AI Tricks to Grow Your Online Sales

The way people shop is changing. This means that online stores need optimization to stay competitive and respond to the needs of customers. In this post, we’ll bring up five ways in which you can use artificial intelligence technology in an online store to grow your revenues. Let’s begin!

1. Personalization with AI

Opening the list of AI trends that are certainly worth covering is a step up in personalization. Did you know that, according to the results of a survey held by Accenture, more than 90% of shoppers are likelier to buy from stores and brands that offer suitable product recommendations?

This is exactly where artificial intelligence can give you a big hand. Such progressive technology analyzes the behavior of your consumers individually, keeping in mind their browsing and purchasing history. After collecting all the data, AI draws the necessary conclusions and offers those product recommendations that the user might like.

Look at the example below, in which the block has a carousel of neat product options. Obviously, this “move” can give a big boost to average cart sizes.

Screenshot taken on the official Reebok website

2. Smarter Search Options

With the rise in popularity of AI voice assistants and the leap in technology in general, the way people look for things on the web has changed. Everything is moving toward saving time and getting better results faster.

One such trend deals with embracing speech-to-text and image search technology. Did you notice how many search bars have “microphone icons” for speaking your request out loud?

On a similar note, numerous sites have made a big jump forward after incorporating search by picture. In this case, uploaded photos are analyzed by artificial intelligence technology. The system studies what’s depicted in the image and cross-checks it against the products sold in the store. Within seconds, the user is provided with a selection of similar products.

Without any doubt, this greatly helps users find what they were looking for faster. As you might have guessed, this is a time-saving feature. In essence, it removes the need to open dozens of product pages on multiple sites when users are seeking out an item they liked and took a screenshot or photo of.

Check out how such a feature works on the official Amazon website by taking a look at the screenshots of StyleSnap provided below.

Screenshot taken on the official Amazon StyleSnap website

3. Assisting Clients via Chatbots

The next point on the list is devoted to AI chatbots. This feature can be a real magic wand for client support, which also benefits online sales.

Real customer support specialists usually aren’t available 24/7. And keeping in mind that most requests are on repetitive topics, having a chatbot instantly handle many of the questions is a neat way to offload work from humans.

Such chatbots use machine learning to get better at understanding and processing client queries. How do they work? They’re “taught” via scripts and scenario schemes. Therefore, the more data you supply them with, the more matters they’ll be able to cover.

Case in point: there’s such a chat available on the official Victoria’s Secret website. If the user launches the Digital Assistant, the messenger bot starts the conversation. Based on the topic the user selects from the options, the bot defines what will be discussed.

Screenshot taken on the official Victoria’s Secret website

4. Determining Top-Selling Product Combos

This AI use case for boosting online revenues is similar to the one mentioned in the first point: it becomes much easier to cross-sell products when artificial intelligence “cracks” the actual top matches. Based on findings by Sumo, you can boost your revenues by 10 to 30% if you upsell wisely!

The product database of an online store gets larger by the month, making it harder to know for sure which items go well together and complement each other. With AI on your analytics team, you don’t have to scratch your head guessing which products people are likely to buy along with the item they’re browsing at the moment. This work of singling out the data can be done for you.

As seen in the screenshot from the official MAC Cosmetics website, the upselling section on the product page presents complementary items in a carousel. Thus, the chance of these products getting added to the shopping cart increases (compared to the situation where clients would have to search the site and find these products by themselves).

Screenshot taken on the official MAC Cosmetics website

5. “Try It On” with a Camera

The fifth AI technology in this list is virtual try-on, which borrows the power of augmented reality technology for the world of sales.

Especially in fields like cosmetics or accessories, it is important to find ways to help clients make up their minds and encourage them to buy an item without testing it physically. If you want, you can play around with such real-time functionality and put on makeup using your camera on the official Maybelline New York site.

Consumers, ultimately, become happier because this solution removes frustration and unneeded doubts. With everything evident and clear, people don’t need to take a shot in the dark about what will be a good match; they can see it.

Screenshot taken on the official Maybelline New York website

In Closing

To conclude everything stated in this article, artificial intelligence is at a decisive point. Incorporating various AI-powered features into an online retail store can be a neat advancement leading to visible growth in conversions.

Five ways Data Science is used in Fintech

Data science experts process and act upon data that digital resources produce. In the fintech world, data comes from mobile apps, transactions, conversations and financial standings. With this data for fintech, experts can improve the experience and success of businesses and customers alike.

Apps like PayPal, Venmo and Cash App have led the way for other fintech organizations, big and small, to grow. In fact, roughly 65% of Americans are already using digital banking in some capacity, whether it’s an app or online service. This growth, in turn, brings benefits. From personalization to integrating robotic advisors, here are five ways data scientists help fintech brands.

1. Personalization

Finance is one of the most personal industries out there as it deals with your private accounts and data. To match this uniqueness, fintechs can use data science for personalization. That way, customer service caters to individual needs.

As the fintech company gathers data from individual transactions, communications, behavior and interests, data scientists can then use said data to curate a better experience for the customer. They can advertise products and services that the customer may need to help with savings, for instance.

Contis is one example of a fintech that has integrated personalization into its services. Customers receive specific recommendations to create an efficient experience.

2. Fundraising

Fundraising had an interesting year in 2020. Amid racial justice protests and movements, crowdfunding took off on fintechs like GoFundMe and Kickstarter. These platforms helped provide funding for those who needed it. From here, data scientists can use fundraising in unique ways.

They can help raise money by targeting people who have donated in the past, or who are likely to donate based on spending habits. This data makes for a more well-rounded fundraising campaign.

Then, once they do have donors, they can again use data to segment contributors by interest, demographic or engagement history. This segmentation helps advertise in a more personal, interest-specific way.

3. Fraud Detection

Cybercriminals thrive on an abundance of digital interactions. With the rise in digital banking — and the pandemic-driven shift to technology — fintechs could potentially see high rates of fraud. In fact, by the end of 2020, the United States saw about $11 billion in lost funds from credit card fraud alone.

Data for fintech brands will help address and prevent fraud like this in the future. As customers produce data from their transactions and interactions, it provides a better picture of their behavior. If there’s a deviation from that behavior, the data shows that potential fraud may be occurring.

If fraud does occur, data scientists can then use that instance to learn and properly recognize how data behaves during cybercriminal activity.

4. Robo-Advisors

With more people using fintech services, employees have a lot on their hands. They must properly address the customers’ needs and provide solutions. However, in the online world, employees are now getting some robotic assistance.

Robo-advisors use machine learning algorithms to interact with customers online or on mobile apps. They ask questions, understand the problems and provide solutions. They also collect data like customer goals and financial plans, which they can report back to data scientists for analysis.

Overall, roughly 75% and 46% of large and small banks, respectively, are implementing artificial intelligence to some degree. This data-driven revolution is one to keep your eye on.

5. Blockchain Governance

Blockchain governance is a somewhat newer way that experts can use data for fintech services. The blockchain is commonly known for its support of cryptocurrency services. Though crypto assets like Bitcoin and Ethereum are on the rise, the blockchain itself is still getting its footing.

Now, fintechs like PayPal are offering crypto services, which means data scientists will be able to expand what’s possible for digital banking. As customers transfer crypto funds, data scientists can monitor their activity and get a better handle on the data that exists on the blockchain. From there, they can provide personalization and prevent fraud in the same ways as they would with standard digital banking.

A Changing Landscape

As data scientists continue to help fintech services grow, you’ll notice each of these five areas becoming more common. Some, like personalization and fraud detection, are already key focuses for fintech companies. However, alongside robo-advisors, fundraising and blockchain, they all have room to grow through the use of data science.

Data Security for Data Scientists & Co. – Infographic

Data becomes information, and information becomes knowledge. For this reason, companies are nowadays also evaluated with regard to their data and data quality. Furthermore, data is the raw material needed for management decisions and artificial intelligence. That is why IT security is so important, and specialized consulting and auditing companies offer services dedicated to the security of IT systems.

However, Data Scientists, Data Analysts and Data Engineers rarely work only with open data; they usually work intensively with customer data. Therefore, every expert for the storage and analysis of data should have at least a basic knowledge of data security and work according to certain principles in order to guarantee the security of the data and the legality of the data processing.

There are a number of rules and principles for data security that must be observed. We at DATANOMIQ have summarized some of them – in our opinion the most important ones – in an infographic for Data Scientists, Data Analysts and Data Engineers. You can download the infographic here: DataSecurity_Infographic

Data Security for Data Scientists, Data Analysts and Data Engineers

Download Infographic as PDF

Infographic – Data Security for Data Scientists, Data Analysts and Data Engineers

In-memory Caching in Finance

Big data has been gradually creeping into a number of industries through the years, and no type of business seems to be exempt from its effects. Businesses, understandably, are scrambling to catch up to new technological developments and innovations in the areas of data processing, storage, and analytics. Companies are in a race to discover how they can make big data work for them and bring them closer to their business goals. On the other hand, consumers are more concerned than ever about data privacy and security, taking every step to minimize the data they provide to the companies whose services they use. In today’s ever-connected, always-online landscape, however, every company and consumer engages with data in one way or another, even if only indirectly.

Despite the reluctance of consumers to share data with businesses and online financial service providers, it is actually in their best interest to do so. It ensures that they are provided the best experience possible, using historical data, browsing histories, and previous purchases. This is why it is also vital for businesses to find ways to maximize the use of data so they can provide the best customer experience each time. Even the more traditional industries like finance have gradually been exploring the benefits they can gain from big data. Big data in the financial services industry refers to complex sets of data that can help provide solutions to the business challenges financial institutions and banking companies have faced through the years. Considered today as a business imperative, data management is increasingly leveraged in finance to enhance processes, their organization, and the industry in general.

How Caching Can Boost Performance in Finance

In computing, caching is a technique for keeping frequently accessed data in a system’s main memory (RAM). Serving that data from RAM allows quick access without placing too much load on the main data stores. Caching also addresses the problems of high latency, network congestion, and high concurrency. Batch jobs finish faster because request run times shrink from hours to minutes and from minutes to mere seconds. This is especially important today, when a host of online services are available and accessible to users; a delay of even a few seconds can lead to lost business, making both speed and performance critical factors in business success. Scalability is another aspect that caching can improve by allowing finance applications to scale elastically. Elastic scalability ensures that a business can handle usage peaks without impacting performance and with the minimum required effort.
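As a minimal sketch of how such a cache behaves, the Python snippet below keeps recent results in RAM with a time-to-live and evicts the least recently used entries once the cache is full. The cache size, TTL and the load_quote_from_database stub are hypothetical placeholders for whatever data store a real finance application sits on.

```python
import time
from collections import OrderedDict

class TTLCache:
    """A minimal in-memory cache: keeps recent results in RAM so repeated
    lookups can skip the slower backing data store."""

    def __init__(self, max_items=10_000, ttl_seconds=30):
        self.max_items = max_items
        self.ttl = ttl_seconds
        self._data = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None                      # cache miss
        expires_at, value = entry
        if time.time() > expires_at:
            del self._data[key]              # stale entry, evict it
            return None
        self._data.move_to_end(key)          # mark as recently used
        return value

    def put(self, key, value):
        self._data[key] = (time.time() + self.ttl, value)
        self._data.move_to_end(key)
        if len(self._data) > self.max_items:
            self._data.popitem(last=False)   # evict the least recently used entry


def load_quote_from_database(symbol):
    """Stand-in for the real (slow) data-store lookup."""
    time.sleep(0.1)
    return {"symbol": symbol, "price": 100.0}


quote_cache = TTLCache(max_items=50_000, ttl_seconds=5)

def get_quote(symbol):
    cached = quote_cache.get(symbol)
    if cached is not None:
        return cached                         # fast path: answered from RAM
    quote = load_quote_from_database(symbol)  # slow path: hits the data store
    quote_cache.put(symbol, quote)
    return quote

print(get_quote("ACME"))  # first call pays the data-store latency
print(get_quote("ACME"))  # second call within 5 seconds is served from RAM
```

The design choice here is deliberately simple: a time-to-live keeps data from going stale, while least-recently-used eviction bounds memory use, which is exactly the trade-off most finance caching setups have to make between freshness and speed.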

Below are the main benefits of big data and in-memory caching to financial services:

  • Big data analytics integration with financial models
    Predictive modeling can be improved significantly with big data analytics so it can better estimate business outcomes. Proper management of data helps improve algorithmic understanding so the business can make more accurate predictions and mitigate inherent risks related to financial trading and other financial services.
  • Real-time stock market insights
    As data volumes grow, data management becomes a vital factor to business success. Stock markets and investors around the globe now rely on advanced algorithms to find patterns in data that will help enable computers to make human-like decisions and predictions. Working in conjunction with algorithmic trading, big data can help provide optimized insights to maximize portfolio returns. Caching can consequently make the process smoother by making access to needed data easier, quicker, and more efficient.
  • Customer analytics
    Understanding customer needs and preferences is at the heart of data management and, ultimately, the goal of transforming complex datasets into actionable insights. In banking and finance, big data initiatives focus on customer analytics and providing the best customer experience possible. By focusing on the customer, companies are able to leverage new technologies and channels to anticipate future behaviors and enhance products and services accordingly. By building meaningful customer relationships, it becomes easier to create customer-centric financial products and seize market opportunities.
  • Fraud detection and risk management
    In the finance industry, risk is the primary focus of big data analytics. It helps in identifying fraud and mitigating operational risk while ensuring regulatory compliance and maintaining data integrity. Here, an in-memory cache can supply real-time data for spotting fraudulent activity and the vulnerabilities behind it, so that both can be addressed before they recur, as sketched in the example after this list.
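As a rough illustration of that last point, the sketch below keeps each account’s recent transactions in an in-memory sliding window and flags bursts of high-value activity. The window length, threshold and function names are hypothetical; a production system would replace the simple sum with a real feature set and model.

```python
import time
from collections import defaultdict, deque

# Hypothetical real-time check: keep each account's recent transactions in RAM
# and flag bursts of high-value activity. Window length and threshold are
# illustrative, not tuned values.
WINDOW_SECONDS = 60
MAX_AMOUNT_IN_WINDOW = 10_000.0

recent_txns = defaultdict(deque)  # account_id -> deque of (timestamp, amount)

def record_and_score(account_id, amount, now=None):
    """Record a transaction and return True if it looks suspicious."""
    now = time.time() if now is None else now
    window = recent_txns[account_id]

    # Evict entries that have fallen out of the sliding window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()

    window.append((now, amount))
    total_in_window = sum(amount for _, amount in window)

    # Because the window lives in memory, this check adds almost no latency
    # to the payment path; only flagged cases go on to slower systems.
    return total_in_window > MAX_AMOUNT_IN_WINDOW

# Example: three rapid transfers from the same account trip the threshold.
print(record_and_score("acct-42", 4_000.0))  # False
print(record_and_score("acct-42", 4_000.0))  # False
print(record_and_score("acct-42", 4_000.0))  # True
```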

What Does This Mean for the Finance Industry?

Big data is set to be a disruptor in the finance sector, with 70% of companies citing big data as a critical factor for their business. In 2015 alone, financial service providers spent $6.4 billion on data-related applications, and that spending is predicted to grow at a rate of 26% per year. The ability to anticipate risk and pre-empt potential problems is arguably the main reason the finance industry is leaning toward a more data-centric, customer-focused model. Data analysis is also not limited to customer data; an overview of business processes helps managers make informed operational and long-term decisions that bring the company closer to its objectives. The challenge is taking a strategic approach to data management, choosing and analyzing the right data, and transforming it into useful, actionable insights.

How Data Science Is Helping to Detect Child Abuse

Image Source: pixabay.com

There is no good way to begin a conversation about child abuse or neglect. It is a sad and oftentimes sickening topic. But the fact of the matter is that it exists in our world today and frequently goes unnoticed or unreported, leaving many children and young adults to suffer. And for the nearly 3.6 million cases that do get reported, there are rarely enough resources to go around for thorough follow-up investigations.

This means that professionals in the field have to make difficult judgment calls. These individuals must assess and review reports, and ultimately decide which cases to prioritize for investigation at a higher level, which ones are probably nothing, and which ones are worrisome but don’t quite meet the definition of abuse or neglect.

Overall, it can be a challenging job that wears on a person, and one that many think is highly based on subjective information and bias. Because of this, numerous data researchers have worked to develop risk assessment models that can help these professionals discover hidden patterns and/or biases and make more informed decisions.

Defining the Work Space

Defining exactly what falls under the category of child abuse or neglect can be a surprisingly sticky topic. Broadly, it means anything that causes lasting physical or mental harm to children and young adults or negligence that could potentially harm or threaten a child’s wellbeing. What exactly constitutes child abuse can depend upon the state you live in.

Ultimately, a lot of the defining aspects are gray. Does spanking count as physical abuse or is the line drawn when it becomes hitting with a closed fist? Likewise, are parents negligent if they must leave their kids home alone to go to work? Does living in poverty automatically make people bad parents because there may not always be enough food in the house?

Sadly, many of these questions have become more prevalent during the COVID-19 pandemic. With many families cooped up at home together, not going to work or to school, kids who live in violent households are more likely to be abused and fewer people are seeing the children regularly to observe and report signs of abuse. Unfortunately, limited statistical data is available at this point, but with so many people having lost jobs, especially amongst families that may have already been teetering on the edge of poverty, situations that could be defined as neglectful are thought to be exploding in prevalence.

Identifying Patterns

The idea of using data science to help assess the risk of abuse and neglect that many children face is seen by many as a powerful way to tackle a difficult issue. As in so many other areas of our world today, data has become a highly valued commodity that can help us uncover deeper or hidden patterns.

That is exactly what has been put into practice in Allegheny County, Pennsylvania. The algorithm that was developed assesses the “risk factor” of each maltreatment allegation made in the county. The system takes into account several factors, including mental health and drug treatment services, criminal histories, past calls, and more. All of this helps employees gauge how at-risk a child may actually be and whether the case should be prioritized for further investigation. Generalized reports indicate that the program works well, but also that even it can ‘learn’ to make decisions based on bias.

In this situation, the goal of the program isn’t to take power away from the employees but rather to serve as a tool that helps them make sounder decisions. Cases with certain risk factors are automatically referred to a case handler for further investigation, but for most, the case assessor weighs the algorithm’s output against research and other information that may not be well accounted for in the model. If there are significant differences between the case assessor’s conclusion and the model’s conclusion, a supervisor reviews the information and makes the final decision.
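The county’s actual model is not public as code, but the triage flow described above can be sketched roughly as follows. The factor names, weights and thresholds in this Python sketch are invented purely for illustration and do not reflect the real Allegheny County system.

```python
# A rough sketch of the human-in-the-loop triage described above. The risk
# factors, weights and thresholds are invented for illustration and do not
# reflect the actual Allegheny County model.

AUTO_REFER_THRESHOLD = 0.8   # scores at or above this go straight to a case handler
REVIEW_GAP = 0.4             # a large assessor/model gap triggers a supervisor review

FACTOR_WEIGHTS = {
    "prior_calls": 0.3,
    "criminal_history": 0.25,
    "drug_treatment_history": 0.2,
    "mental_health_services": 0.25,
}

def model_risk_score(factors):
    """Combine normalized factor values (0 to 1) into a single risk score."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

def triage(factors, assessor_score):
    score = model_risk_score(factors)
    if score >= AUTO_REFER_THRESHOLD:
        return "refer to case handler"
    if abs(score - assessor_score) >= REVIEW_GAP:
        return "supervisor review"   # assessor and model disagree strongly
    return "assessor decides"

# Hypothetical allegation: some prior calls, no criminal history on record.
print(triage(
    {"prior_calls": 0.6, "criminal_history": 0.0,
     "drug_treatment_history": 0.2, "mental_health_services": 0.4},
    assessor_score=0.3,
))  # -> "assessor decides"
```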

Dealing with Bias

Of course, using a model such as this one can be a double-edged sword, because certain things are difficult to account for. For instance, if parents take financial advantage of their children by using their clean Social Security numbers to open credit cards and other credit accounts, the children can be saddled with poor credit they did not create. Because this type of financial abuse is difficult to prove, it can be hard for young adults to repair their credit later in life or to hold their parents or guardians responsible, and the algorithm may struggle to recognize it as abuse. On the other hand, the model can remove some of the bias that call takers may inadvertently bring when assessing the risk level of certain cases: a conclusion that differs from their own can force them to take a second look at the hard facts.

But there is a flip side. Every model is built on some level of personal judgment by its designer, and those judgments can introduce biases that carry over into the model’s outputs. For example, the Allegheny County model has come under significant scrutiny for being biased against people who live in poverty and rely on government programs to get by.

Because some of the major components the model uses to assess risk are drawn from public data, people who rely on government programs are statistically more likely to be flagged, regardless of how serious an incoming call actually appears, whereas upper- and middle-class families with private health insurance, private drug treatment, and food security may be less likely to be picked up.

***

Big data can play a profound role in our lives and has the potential to be a powerful tool in helping identify and address child abuse cases. The available models can aid caseworkers in prioritizing risk assessments for further investigation and make a difference in the lives of children facing unacceptable situations. Being aware of and working to address biases in these models is an ongoing effort, and the models that aid in child abuse detection are no exception. Ultimately, if used correctly, they can do a great deal of good.