Posts

AI Platforms – A Comprehensive Guide

A comprehensive guide introducing readers to AI platforms, their types, and benefits, concluding with a discussion of AI platform selection strategy and Attri’s best-of-breed approach to building AI platforms.

Don’t you think this century is a fortunate one? In my opinion, the answer is yes: we have witnessed technological transformations whose miracles have created substantial changes in our lifestyles. Among these life-changing technological revolutions, AI (artificial intelligence) deserves a front seat thanks to its incredible contributions and capabilities. Everyone now knows that AI has nearly limitless potential, from creating funny face filters on a phone to making informed, intelligent business decisions. Over the last 50 years, we have progressed by leaps and bounds in giving machines the ability to understand, help, and mimic us.

Artificial intelligence enables machines to imitate human intelligence across a variety of domains, ranging from problem-solving and reasoning to general intelligence and in-depth knowledge representation. With this tremendous progress in AI, another enabler came into existence and received attention—the AI platform. An AI platform is a layer that integrates all the tools and processes required to build, deploy, and monitor ML models. In this article, we will go through the various aspects of AI platforms, covering platform types, the benefits such platforms entail, selection strategy in detail, and a brief look at Attri’s industry contribution with its Open AI Platform.

Diving Deeper With AI Platforms

The AI platform acts as a layer over your current AI infrastructure and integrates all the tools and processes required to develop ML models. It gives you the flexibility to bring all your ML models under a single roof, so you can create and deploy several models on the platform and then monitor them to confirm that they are serving their intended purpose. An AI platform eases your AI adoption by meeting the following requirements:

  • Using vast data to develop ML solutions
  • Ensuring transparency and reproducibility within a project
  • Accelerating collaboration and governance within teams
  • Ensuring scalability for ever-growing machine learning demands

An ideal AI platform should offer the following features to address these challenges.

  • Seamless access control: Robust access control for team members, to conquer the challenge of centralized data access in AI projects
  • Excellent monitoring: Top-notch observability practices integrated into ML model development
  • Data- and technology-agnostic integration: A seamless experience for enterprises, with infrastructure setup handled by the platform provider
  • All-inclusive platform: A single platform to facilitate all underlying tasks, from data preparation to model deployment
  • Continuous improvement: The ability to produce and deploy models as reproducible packages, and thereby integrate changes into models that are already in production
  • Rapid processing: Faster data preparation and powerful visual interfaces

AI Platform Classification

With the multitude of AI platform providers available in the market, classifying AI platforms is a tough job: it requires considering each platform’s offerings, features, and cost factors separately. You also need to check whether the solutions are open-source AI platforms or proprietary offerings.

We have decided to present an AI platform classification based on striking features and offerings. On this basis, we classify AI platforms into three main classes—

  • AI cloud-based platforms
  • AI conversational platforms
  • No code AI platforms

Cloud-Based AI Platforms

All major cloud providers offer cloud-based AI platforms to boost businesses with AI capabilities. With cloud AI platforms, enterprises can leverage cloud providers’ matchless technical expertise to overcome affordability and data requirement challenges associated with AI implementation. Cloud-based AI offerings benefit businesses with economic AI solutions, defined and pre-packaged services, lower risks, and modern technology.

Amazon Web Services

AWS offers a comprehensive set of AI solutions to conquer the major hurdles in a business’s AI adoption journey, and it has been recognized as a top cloud AI partner thanks to its broad, capable portfolio. AWS pre-trained models cater to diverse use cases—forecasting, recommendations, computer vision, language interpretation, customer engagement, and safety—while supporting deployment of ML models at scale. Amazon also provides text analytics, NLP, chatbot, and document analysis solutions. Fully managed AWS packages amplify your experience with minimal resource requirements and a friendly, wizard-based model development workflow.

Google Cloud

The Google Cloud Platform (GCP) is Google’s offering for cloud computing services, devised to support multiple use cases such as hosting containerized applications, massive-scale data analytics, and applying ML and AI to business problems. Google AI Platform is a Google Cloud offering that helps build, deploy, and manage machine learning models in the cloud.

Google brings its enterprise AI experience to bear through its consumer-facing products. Google helps improve customer satisfaction through Contact Center AI, and its Dialogflow CX offering is used to create advanced chatbots that handle customer messaging, responses, and voice recognition. Dialogflow is also applied to create virtual agents for messaging services, mobile apps, and IoT devices.

Google’s Cloud Vision API helps recognize objects, logos, and landmarks within content or images. The Natural Language API brings more clarity to content classification, entities, syntax, and sentiment. Further, the Google Speech API helps convert audio to text and recognizes 110 languages.

Google’s Cloud ML services facilitate better decision-making with end-to-end ML solutions. Google offers an all-inclusive ML development platform backed by explainable AI, continuous evaluation, data labeling, pipelines, training, and the What-If Tool. This platform is based on the TensorFlow framework and enables building predictive models for various scenarios.

Kubeflow is a cloud-native, open-source platform that helps you build portable ML pipelines that can be executed on-premises or in the cloud. With it, you can access Google technologies like TPUs, TensorFlow, and TFX tools as you deploy your ML models to production.

For expert ML developers, Google provides an open-source AI platform with TensorFlow models trained for various scenarios, along with an excellent prediction service that uses those trained models.

Microsoft Azure

Similar to Amazon Web Services, Microsoft Azure’s ML capabilities are proven in real-time, live applications. Azure provides superior machine learning capabilities to develop, train, and deploy machine learning models through Azure Machine Learning, Azure Databricks, and ONNX.

  • Azure Machine Learning

A Python-based ML service to facilitate automated machine learning.

  • ONNX

An open-source model format that enables machine learning across various frameworks and hardware platforms of the user’s choice.

  • Azure Cognitive Search

Formerly known as Azure Search, this is a cloud search service with built-in AI capabilities for exploring content effectively at scale. Microsoft empowers users with cognitive search services like text analytics, translation, document analytics, custom vision, and Azure Machine Learning solutions.

IBM Cloud

IBM offers Watson Studio, a data analysis application, to accelerate innovation and ML-centric practices in business. The IBM Cloud AI Platform offers some 170 services, with an emphasis on data–speech conversions and analytics. Watson Studio provides an all-inclusive suite to work with data and to train, build, and deploy ML models.

IBM, ever the innovative giant, also recently introduced an AI-based learning platform to aid academic stakeholders such as students, researchers, and teachers.

AI Conversational Platforms

Conversational AI opens new doors for automated conversations between an enterprise and its customers. These conversations take place over messaging or voice-based communication platforms, as text or audio.

Conversational platforms elevate your customer experience with a range of applications such as follow-ups, guidance, resolution of customer queries, and round-the-clock support. These platforms help drive more leads, increase conversions through cross-selling and upselling, and support promotional efforts, customer research, query resolution, and customer feedback handling.

AI technology helps systems mimic human conversations to a certain level, and with great accuracy. Natural language processing (NLP) is used to shape these conversations by understanding intent, text, speech, and language.

Intelligent Virtual Assistants

Intelligent virtual assistants represent an advanced level of conversational AI, and any discussion of them is incomplete without mentioning Siri and Alexa. The most popular intelligent virtual assistants include Siri by Apple, Alexa by Amazon, Google Assistant, and Bixby by Samsung. While Alexa performs as a voice assistant for the home, Siri and Bixby stand as mobile assistants supporting numerous operations like navigation, text-to-speech, weather responses, quick replies, and address search.

SAP Conversational AI

SAP Conversational AI is one of the leading conversational AI platforms. With its friendly UI and multiple versioning, it offers a better experience of mimicking human conversations. The platform uses NLP to facilitate developing chatbots that converse more naturally and serve your customers 24/7. Its striking features include—

  • Simple integration
  • NLP capabilities
  • Helpful analytics tools
  • Multi-language support

Clinc

A powerful self-learning conversational AI platform enriched with NLP and machine learning capabilities. It secures a top position on the conversational AI platform list thanks to its ability to learn from previous conversations and improve its responses over time. Its feature set includes—

  • No technical expertise required
  • Self-learning abilities
  • NLP capabilities

Kore.ai

An enterprise-grade conversational AI platform that caters to your consumer as well as staff needs. It helps build virtual chatbots for any suitable platform without compromising safety and security standards. Its major features cover—

  • A high degree of customization for chatbots
  • Comprehensive analytics with FAQs and alerts
  • Simple integration with ML models and channels
  • Flexible deployment
  • Supported with a multi-pronged NLP engine

Mindmeld

Mindmeld is an excellent option as a deep-domain conversational AI platform with NLP capabilities. It can be used for both text-based and voice-based virtual assistants, and it effectively caters to multiple industries and their numerous use cases. Its striking features include—

  • Open-source platform
  • NLP capabilities
  • Supports discovering on-demand video or music
  • Quick chat-based transactions

No Code AI Platforms

As discussed above, AI platform classification necessitates considering platforms from various perspectives. We are therefore introducing another category—no-code AI platforms. The motivation behind these platforms is to encourage enterprise AI adoption while keeping implementation costs low and minimizing dependence on skilled professionals. Many IT giants now offer no-code AI platforms to enterprises for their AI adoption.

Google ML Kit

Google’s ML Kit is available for Android and iOS, and it facilitates integrating ML functions into apps with less code and minimal knowledge of machine learning algorithms. It supports features such as text recognition, face detection, and landmark recognition.

RapidMiner Studio

RapidMiner Studio enables powerful data analytics with drag-and-drop features. It also allows easy integration with databases, warehouses, and social media, so authorized users can access data easily.

ML Platform Selection Strategy

Having discussed so many types of ML platforms, their features, and their offerings, the next question is: how do you select the best ML platform for enterprise AI adoption? To answer this million-dollar question, we need to consider a few key aspects, such as:

  • Who will use and benefit from the AI platform? Identify the platform’s users—the data science team, the analytics team, developers—and how the platform will benefit each stakeholder.
  • Explore the skill levels of those users: are they experienced enough to handle ML development and analytics requirements?
  • Assess users’ proficiency with programming languages.
  • Decide between code-first and code-free approaches to streamline AI workflows. This can be studied through attributes such as ease of data preparation, feature engineering automation, available ML algorithms, ease of model deployment, and platform integration.

Once you have answers to these questions, you will be able to finalize the best AI platform selection strategy for your enterprise. It might be a single cloud platform, or it might be a hybrid solution with a “best-of-breed” approach.

An all-in-one platform strategy involves adopting one end-to-end platform for the entire AI project lifecycle, from raw data preparation and ETL to building and operationalizing models, followed by monitoring and governance of the systems.

The best-of-breed approach allows using the preferred and custom tools for each phase of the lifecycle and aligning these tools together to build a customized platform solution for AI adoption.

This approach offers an excellent AI platform solution for organizations that want flexible, inexpensive, change-oriented AI solutions and have a DIY spirit. With this mix-and-match approach, you can combine APIs offered by different cloud platforms and deliver AI solutions that cater to your use cases. Organizations using the best-of-breed approach are more comfortable with technology shifts, since they can use, adopt, and swap out tools as requirements change.

Business Process AI Transformation Simplified With Attri’s Open AI Platform

At Attri, we provide AI platform solutions to diverse industry verticals. With our flagship Open AI Platform, we elevate your AI adoption experience with a rich array of platform features like—

  • Customizable best-of-breed architecture
  • Reuse of your existing infrastructure
  • AI as a platform solution
  • Reduced effort when migrating to new technology
  • Centralized monitoring and governance
  • Explainable and responsible AI

We help you achieve your business process transformation goals with our unique AI offerings, such as the Open AI Platform and Open AI solutions.

Our AI platform assures multiple benefits to your enterprise while keeping AI adoption costs low and ensuring faster AI implementations. We can summarize the benefits of the Attri Open AI Platform as follows:

No effort spent reinventing complete AI suites

Attri’s AI Platform integrates multiple AI services and eliminates the need for reinventing complete AI suites. The platform delights enterprises with scalability, the ability to reuse current infrastructure, and customizable architecture.

Accelerated Go To Market

Attri’s Open AI Platform ensures accelerated go-to-market with a rigorous approach to testing, reviewing, and finalizing reference templates for different industries.

No vendor lock-in

With the Open AI Platform, we bring client-friendly policies such as no vendor lock-in and the flexibility to choose your preferred tools and technologies.

High reliability

We keep our AI Platform highly reliable with a comprehensive testing approach. We also meet the growing requirements of enterprises by ensuring high scalability with our open AI platform.

Get connected with us for your enterprise AI adoption requirements.

Know more about our Open AI Platform…

How to make a toy English-German translator with multi-head attention heat maps: the overall architecture of Transformer

If you have been patient enough to read the former articles of this series, Instructions on Transformer for People Outside the NLP Field, but with Examples of NLP, you should already have learned a great deal about the Transformer model, and I hope you have gained a solid theoretical foundation on this algorithm.

This article is going to focus more on the practical implementation of a Transformer model. We use the code from the official TensorFlow tutorial. It is well maintained by Google, and I think it is best practice to build on widely known code.

The figure below shows what I have explained in the articles so far. Depending on your level of understanding, you can go back to my former articles. If you are familiar with NLP with deep learning, you can start with the third article.

1 The datasets

This article series may appear to be about NLP, and I do believe that learning the Transformer through NLP examples is very effective. But I cannot delve into effective techniques for processing corpora in each language, so we are going to use a library named BPEmb. This library enables you to encode sentences in various languages into lists of integers, and conversely to decode lists of integers back into the language. Thanks to this library, we do not have to simplify alphabets ourselves, such as getting rid of umlauts.

*Actually, I work in the computer vision field, so my code might look elementary to those in NLP.
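As a quick illustration, here is a minimal sketch of how BPEmb encodes and decodes sentences. The vocabulary size and embedding dimension below are my assumptions for the example, not necessarily the values used in the training scripts.

```python
from bpemb import BPEmb  # pip install bpemb

# Pretrained byte-pair-encoding models; vs (vocabulary size) and dim
# (embedding dimension) are assumptions for this illustration.
bpemb_en = BPEmb(lang="en", vs=10000, dim=100)
bpemb_de = BPEmb(lang="de", vs=10000, dim=100)

ids = bpemb_en.encode_ids("Anthony Hopkins admired Michael Bay as a great director.")
print(ids)                       # a list of integers, one per subword
print(bpemb_en.decode_ids(ids))  # back to (lowercased) text
```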

The official TensorFlow tutorial builds a Portuguese-English translator, but in this article we are going to build an English-German translator. Basically, only the code below is my original work. As I said, this is not an article on NLP, so all you have to know is that at every iteration you get a (64, 41) sized tensor as a batch of source sentences and a (64, 42) sized tensor as the corresponding target sentences. Here 41 and 42 are the maximum lengths of the input and target sentences respectively, and when sentences are shorter than that, the remaining positions are zero-padded, as you can see in the code below.

*If you just replace the datasets and the encoding modules, you can make translators for other pairs of languages.

We are going to train a seq2seq-like Transformer model to convert those lists of integers, thus a mapping from one vector to another. But each word, or integer, is encoded as an embedding vector, so virtually the Transformer model is going to learn a mapping from one sequence of vectors to another. Let’s formulate this in a slightly more mathematical way: given a pair of sequences \boldsymbol{X} = (\boldsymbol{x}^{(1)}, \dots, \boldsymbol{x}^{(\tau _x)}) and \boldsymbol{Y} = (\boldsymbol{y}^{(1)}, \dots, \boldsymbol{y}^{(\tau _y)}), where \boldsymbol{x}^{(t)} \in \mathbb{R}^{|\mathcal{V}_{\mathcal{X}}|} and \boldsymbol{y}^{(t)} \in \mathbb{R}^{|\mathcal{V}_{\mathcal{Y}}|}, respectively from the English and German corpora, we learn a mapping f: \boldsymbol{X} \to \boldsymbol{Y}.

*In this implementation the vocabulary sizes are both 10002. Thus |\mathcal{V}_{\mathcal{X}}|=|\mathcal{V}_{\mathcal{Y}}|=10002

2 The whole architecture

This article series has covered most of the components of the Transformer model, but you might not yet see how seq2seq-like models can be constructed from them. It is very effective to understand how the Transformer is constructed by actually reading or writing code, so in this article we are finally going to construct the whole architecture of a Transformer translator, following the official TensorFlow tutorial. At the end of this article, you will be able to make a toy English-German translator.

The implementation is mainly composed of four classes: EncoderLayer(), Encoder(), DecoderLayer(), and Decoder(). The inclusion relations of the classes are displayed in the figure below.

To be more exact, in a seq2seq-like model with a Transformer, the encoder and the decoder are connected as in the figure below. The encoder part keeps converting input sentences in the original language through N layers. The decoder part likewise keeps converting the inputs in the target language through N layers, but it also receives the output of the final encoder layer at every layer.

You can see how the Encoder() and Decoder() classes are combined into a Transformer in the code below. If you have used TensorFlow or PyTorch to some extent, it should not be that hard to read.
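Since the tutorial listing is not reproduced here, the following is a condensed sketch of how the two parts combine, modeled on the official TensorFlow tutorial. Encoder and Decoder are assumed to be defined as in the tutorial, and I omit the positional-encoding size arguments for brevity.

```python
import tensorflow as tf

# A condensed sketch; hyperparameter names follow the official tutorial.
class Transformer(tf.keras.Model):
    def __init__(self, num_layers, d_model, num_heads, dff,
                 input_vocab_size, target_vocab_size, rate=0.1):
        super().__init__()
        self.encoder = Encoder(num_layers, d_model, num_heads, dff,
                               input_vocab_size, rate)
        self.decoder = Decoder(num_layers, d_model, num_heads, dff,
                               target_vocab_size, rate)
        self.final_layer = tf.keras.layers.Dense(target_vocab_size)

    def call(self, inp, tar, training, enc_padding_mask,
             look_ahead_mask, dec_padding_mask):
        enc_output = self.encoder(inp, training, enc_padding_mask)
        # The decoder attends to the encoder's final output at every layer.
        dec_output, attention_weights = self.decoder(
            tar, enc_output, training, look_ahead_mask, dec_padding_mask)
        return self.final_layer(dec_output), attention_weights
```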

3 The encoder

*From now on, “sentences” do not mean only the input tokens in natural language, but also the reweighted and concatenated “values,” which I repeatedly explained in the former articles. By the end of this section, you will see that the Transformer repeatedly converts sentences layer by layer, retaining the shape of the original sentence.

I explained the multi-head attention mechanism precisely in the third article, and positional encoding and masked multi-head attention in the last one. So if you have read them and have written some code in TensorFlow or PyTorch, I think the Transformer code in the official TensorFlow tutorial is not so hard to read. What is more, this implementation uses no CNNs or RNNs: basically all you need is linear transformations. First of all, let’s see how the EncoderLayer() and Encoder() classes are implemented in the code below.

You might be confused about what “Feed Forward” means in this article or in the original Transformer paper. The original paper defines this layer as FFN(x) = \max(0, xW_1 + b_1)W_2 + b_2. In short, you stack two fully connected layers and activate the first with a ReLU function. Let’s see how the point_wise_feed_forward_network() function works in the implementation, with some simple code. As you can see from the number of parameters in each layer of the position-wise feed-forward network, the network does not depend on the length of the sentences.
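For reference, this is the position-wise feed-forward network essentially as it appears in the official TensorFlow tutorial: two dense layers with a ReLU in between. Note that the parameter count depends only on d_model and dff, not on the sentence length.

```python
import tensorflow as tf

def point_wise_feed_forward_network(d_model, dff):
    # Two dense layers applied identically at every position.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(dff, activation='relu'),  # (batch, seq_len, dff)
        tf.keras.layers.Dense(d_model)                  # (batch, seq_len, d_model)
    ])

sample_ffn = point_wise_feed_forward_network(d_model=512, dff=2048)
print(sample_ffn(tf.random.uniform((64, 41, 512))).shape)  # (64, 41, 512)
print(sample_ffn.count_params())  # independent of the sequence length 41
```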

From the number of parameters of the position-wise feed-forward networks, you can see that the same parameters are shared over all positions of the sentences. That means in the figure above, you apply the same densely connected layers at all positions within a single layer. But keep in mind that the parameters of the position-wise feed-forward networks change from layer to layer. That is also true of the other “Layer” parts of the Transformer model, including the output part of the decoder: there are no learnable parameters that span different token positions. These facts lead to one very important feature of the Transformer: the number of parameters does not depend on the length of the input or target sentences. The influence of sentence length is instead handled by the multi-head attention mechanisms. Also, in the decoder part, you keep the shape of the sentences, or reweighted values, layer by layer, which is expected to enhance the calculation efficiency of Transformer models.

4 The decoder

The structures of the DecoderLayer() and Decoder() classes are quite similar to those of EncoderLayer() and Encoder(), so if you understood the last section, you will not find the code below hard to understand. What you additionally have to care about in this section is the inter-language multi-head attention mechanism. In the third article, I repeatedly explained the multi-head self-attention mechanism using the input sentence “Anthony Hopkins admired Michael Bay as a great director.” as an example. However, as I explained in the second article, in attention mechanisms you usually compare sentences with the same meaning in two languages. Thus the decoder part of the Transformer model has not only the multi-head self-attention mechanism over the target sentence, but also an inter-language multi-head attention mechanism. That means, when translating from English to German, you compare the sentence “Anthony Hopkins hat Michael Bay als einen großartigen Regisseur bewundert.” with itself in the masked multi-head attention mechanism (just as I repeatedly explained in the third article). On the other hand, you compare “Anthony Hopkins hat Michael Bay als einen großartigen Regisseur bewundert.” with “Anthony Hopkins admired Michael Bay as a great director.” in the inter-language multi-head attention mechanism (just as you can see in the figure above).

*“Inter-language multi-head attention mechanism” is my own term for it.

I briefly mentioned how the inter-language multi-head attention mechanism is calculated at the end of the third article, with some simple code, but let’s look at it again with more straightforward figures. If you understood my explanation of the multi-head attention mechanism in the third article, the inter-language variant is nothing difficult. In the multi-head attention mechanism in the encoder layers, the “queries”, “keys”, and “values” all come from the same sentence in English; in the inter-language case, only the “keys” and “values” come from the original sentence, while the “queries” come from the target sentence. You compare the “queries” in German with the “keys” of the original English sentence, and you reweight the English sentence accordingly. You use the reweighted English sentence in the decoder part, and you do not need a look-ahead mask in this inter-language multi-head attention mechanism.

Just as with multi-head self-attention, you can calculate the inter-language multi-head attention map as softmax(\frac{\boldsymbol{Q} \boldsymbol{K}^T}{\sqrt{d_k}}). In the example above, the resulting attention map is a 10 \times 9 matrix, as in the figure below.
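The computation itself is the scaled dot-product attention function from the official tutorial. In the inter-language case, q comes from the German (target) sentence and k and v from the English (source) sentence, so with 10 and 9 tokens the attention map is 10 × 9.

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask):
    # Attention weights = softmax(QK^T / sqrt(d_k)); the output is the
    # reweighted values. q: (..., seq_len_q, d), k/v: (..., seq_len_k, d).
    matmul_qk = tf.matmul(q, k, transpose_b=True)   # (..., seq_len_q, seq_len_k)
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_logits = matmul_qk / tf.math.sqrt(dk)
    if mask is not None:
        scaled_logits += (mask * -1e9)              # suppress masked positions
    attention_weights = tf.nn.softmax(scaled_logits, axis=-1)
    return tf.matmul(attention_weights, v), attention_weights
```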

Once you keep the points above in mind, the implementation of the decoder part should not be that hard.

5 Masking tokens in practice

I explained the masked multi-head attention mechanism in the last article, and the idea itself is not so difficult. In practice, however, it is implemented in a slightly tricky way. You might have realized that the size of the input matrices is fixed to fit the longest sentence. That means, when the maximum length of the input sentences is 41, even if the sentences in a batch have fewer than 41 tokens, you sample a (64, 41) sized tensor as a batch every time (64 is the batch size). Let “Anthony Hopkins admired Michael Bay as a great director.”, which has 9 tokens in total, be an input. We have been considering (9, 9) or (10, 9) sized attention maps, but in practice you use (41, 41) or (42, 41) sized ones. When calculating self-attention in the encoder part, you zero-pad the self-attention maps with encoder padding masks, as in the figure below. The black dots denote the zero-valued elements.

As you can see in the code below, encoder padding masks are quite simple. You just multiply the padding masks by -1e9, add them to the attention logits, and apply a softmax function. Thereby the softmax effectively zeroes out the columns where you added -1e9.
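For completeness, this is the padding-mask helper as in the official tutorial; token id 0 is assumed to be the padding id.

```python
import tensorflow as tf

def create_padding_mask(seq):
    # 1.0 where the token id is 0 (padding), 0.0 elsewhere.
    seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
    # Extra axes let the mask broadcast over heads and query positions:
    # (batch_size, 1, 1, seq_len)
    return seq[:, tf.newaxis, tf.newaxis, :]
```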

I explained the look-ahead mask in the last article; in practice you combine normal padding masks and look-ahead masks as in the figure below. You can see that each token can be compared only with its preceding tokens. For example, you can compare “als” only with “Anthony”, “Hopkins”, “hat”, “Michael”, “Bay”, and “als”, not with “einen”, “großartigen”, “Regisseur”, or “bewundert.”
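The look-ahead mask from the tutorial is a small upper-triangular matrix; in the decoder it is combined with the padding mask (the tutorial takes the element-wise maximum of the two).

```python
import tensorflow as tf

def create_look_ahead_mask(size):
    # Upper triangle of ones: position i must not attend to positions j > i.
    return 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)

print(create_look_ahead_mask(4))
# [[0. 1. 1. 1.]
#  [0. 0. 1. 1.]
#  [0. 0. 0. 1.]
#  [0. 0. 0. 0.]]
```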

Decoder padding masks are almost the same as the encoder ones. Keep in mind that you mask out the positions that exceed the length of the source input sentence.

6 Decoding process

In the last section we saw that we can zero-pad columns, but the rows are still redundant. However, I think that is not a big problem, because you decode the final output along the rows of the attention maps. Once you decode the <end> token, you stop decoding. The redundant rows do not affect the decoding any further.

This decoding process is similar to that of seq2seq models with RNNs, and that is why you need to hide future tokens in the self-attention mechanism of the decoder. You share the same densely connected layers, followed by a softmax function, at all time steps of decoding: the Transformer has to learn how to decode based only on the words that have appeared so far.

According to the original paper, “We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.” After the explanations above, I think you can understand this part more clearly.

The code below is for the decoding part. You can see that you start decoding the output sentence with a sequence composed only of <start>, and decide which word to decode next, step by step.
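Here is a condensed sketch of that greedy decoding loop, in the spirit of the tutorial’s evaluate() function. The <start> and <end> token ids (10000 and 10001) are assumptions consistent with the vocabulary size of 10002 mentioned above, and create_masks() is assumed to be defined as in the tutorial.

```python
import tensorflow as tf

def greedy_decode(inp_ids, transformer, max_length=42,
                  start_token=10000, end_token=10001):  # assumed special ids
    # inp_ids: a 1-D list/tensor of source-sentence subword ids.
    encoder_input = tf.expand_dims(inp_ids, 0)
    output = tf.expand_dims([start_token], 0)   # begin with <start> only
    for _ in range(max_length):
        enc_mask, combined_mask, dec_mask = create_masks(encoder_input, output)
        predictions, attention = transformer(encoder_input, output, False,
                                             enc_mask, combined_mask, dec_mask)
        # Pick the most likely token for the last position only.
        predicted_id = tf.argmax(predictions[:, -1:, :], axis=-1,
                                 output_type=tf.int32)
        if int(predicted_id) == end_token:
            break
        output = tf.concat([output, predicted_id], axis=-1)
    return tf.squeeze(output, axis=0), attention
```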

*It is easy to imagine that this decoding procedure is not the best. In practice you have to consider several decoding candidates, which you can do with beam search decoding.

After training this English-German translator for 30 epochs, you can translate relatively simple English sentences into German. I display some results below, with heat maps of multi-head attention. Each colored attention map corresponds to one head of the multi-head attention. The examples below are all from the fourth (last) layer, but you can visualize the maps of any layer. For look-ahead attention, naturally only the lower triangular part of each map is activated.

This article series has not covered some important topics in machine translation, for example how to measure translation errors. There are many other fascinating topics related to machine translation, such as beam search decoding, which considers several decoding possibilities, or how to handle proper nouns such as “Anthony” or “Hopkins.” But this article series is not on NLP. I hope you have been able to learn the architecture of the Transformer model effectively through the language examples so far. I have also not explained some details of training the network, because I think those depend on the task. The next article is going to be the last of this series, and I hope you will see how the Transformer is applied in the computer vision field, in a more “linguistic” manner.

But anyway we have finally made it. In this article series we have seen that one of the earliest computers was invented to break Enigma. And today we can quickly make a more or less accurate translator on our desk. With Transformer models, you can even translate deadly funny jokes into German.

*You can train a translator with this code.

*After training a translator, you can translate English sentences into German with this code.

[References]

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, “Attention Is All You Need” (2017)

[2] “Transformer model for language understanding,” Tensorflow Core
https://www.tensorflow.org/overview

[3] Jay Alammar, “The Illustrated Transformer,”
http://jalammar.github.io/illustrated-transformer/

[4] “Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 14 – Transformers and Self-Attention,” stanfordonline, (2019)
https://www.youtube.com/watch?v=5vcj8kSwBCY

[5] Tsuboi Yuuta, Unno Yuuya, Suzuki Jun, “Machine Learning Professional Series: Natural Language Processing with Deep Learning,” (2017), pp. 91-94
坪井祐太、海野裕也、鈴木潤 著, 「機械学習プロフェッショナルシリーズ 深層学習による自然言語処理」, (2017), pp. 191-193

* I make study materials on machine learning, sponsored by DATANOMIQ. I do my best to make my content as straightforward but as precise as possible. I include all of my reference sources. If you notice any mistakes in my materials, including grammatical errors, please let me know (email: yasuto.tamura@datanomiq.de). And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.

A quick primer on TensorFlow – Google’s machine learning workhorse

Introducing Google Brain’s TensorFlow™

This week started with major news for the machine learning and data science community: the Google Brain Team announced the open sourcing of TensorFlow, their numerical library for tensor network computations. This software is actively developed (and used!) within Google and builds on Google’s experience with many large-scale neural network applications, such as automatic image labeling and captioning as well as the speech recognition in Google’s apps.

TensorFlow in bullet points

Here are the main features:

  • Supports deep neural networks – and many more machine learning approaches
  • Highly scalable across many machines and huge data sets
  • Runs on desktops, servers, in the cloud and even on mobile devices
  • Computation can run on CPUs, GPUs or both
  • All this flexibility is covered by a single API, making execution very streamlined
  • Available interfaces: C++ and Python. More will follow (Java, R, Lua, Go…)
  • Comes with many tools helping to build and visualize the data flow networks
  • Includes a powerful gradient-based optimizer with auto-differentiation
  • Extensible with C++
  • Usable for commercial applications – released under the Apache License 2.0

Tensor, what? Tensor, why?

“Numerical library for tensor network computations” maybe doesn’t sound too exciting, but let’s consider the implications.

Application of tensors and their networks is a relatively new (but fast evolving) approach in machine learning. Tensors, if you recall your algebra classes, are simply n-dimensional data arrays (so a scalar is a 0th order tensor, a vector is 1st order, and a matrix a 2nd order tensor).

A simple practical example is a color image’s RGB layers (essentially three 2D matrices combined into a 3rd order tensor). Or a more business-minded example: if your data source generates a table (a 2D array) every hour, you can look at the full data set as a 3rd order tensor, with time as the extra dimension.
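A few lines of Python make these orders concrete (the shapes are of course just examples):

```python
import numpy as np

scalar = np.float32(3.0)             # 0th order tensor
vector = np.zeros(5)                 # 1st order tensor
matrix = np.zeros((5, 5))            # 2nd order tensor
rgb_image = np.zeros((480, 640, 3))  # 3rd order: height x width x RGB

# Hourly 2D tables stacked over a day -> a 3rd order tensor, time as an axis.
hourly_tables = np.stack([np.random.rand(100, 4) for _ in range(24)])
print(rgb_image.ndim, hourly_tables.shape)  # 3 (24, 100, 4)
```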

Tensor networks then represent “data flow graphs”, where the edges are your multi-dimensional data sets and nodes are the mathematical operations on this data.

Example of a data flow graph with multiple nodes (data operations). Notice how the execution of nodes is asynchronous; this allows incredible scalability across many machines. Image source.

Looking at your data through the tensor formalism gives you a lot of powerful tools that were already developed for tensor algebra, allowing fast, complex computations.

Tensor networks are also a natural fit for computations done on graphical processing units (GPUs), as GPUs are built exactly for the purpose of very fast numerical operations on such data – speeding up your calculations significantly compared to standard CPU execution!

The importance of flexible architecture & scaling

The data flow graph approach also has further advantages. Most notably, you can split the design of your data flows (i.e. data cleaning, processing, transformations, model building etc.) from their execution. You first build up the graph of your data flow, and then you send it off for execution: either on the CPUs of your machines (your laptop just as well as a cluster), on GPUs, or on a combination. This happens through a single interface that hides all the complexities from you.
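To make this concrete, here is a minimal sketch in TensorFlow’s original graph-and-session style (API names as in the early 1.x releases): the graph is defined first, and nothing is computed until a session executes it.

```python
import tensorflow as tf  # 1.x-era API

# Build the data flow graph: nodes are operations, edges carry tensors.
a = tf.placeholder(tf.float32, shape=(None, 3), name="a")
b = tf.placeholder(tf.float32, shape=(3, 2), name="b")
c = tf.matmul(a, b)  # no computation happens here yet

# Execute the graph -- on CPU, GPU, or a remote cluster -- via a session.
with tf.Session() as sess:
    result = sess.run(c, feed_dict={a: [[1., 2., 3.]],
                                    b: [[1., 0.], [0., 1.], [1., 1.]]})
    print(result)  # [[4. 5.]]
```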

Since the execution is asynchronous it scales across many machines and can deal with huge amounts of data.

You can count on the Google guys to build tools not only for academic use, but also for heavy-duty industrial operations!

Is this just another deep learning library?

TensorFlow is of course not the first library to embrace the tensor formalism and GPU execution. The nearest comparisons (and competitors) are Theano, Torch and CGT (Caffe to a limited degree).

While there are significant overlaps between the libraries, TensorFlow tries to provide a broader framework. It is not only a deep learning library – the Data Flow Graphs can incorporate any data processing/analysis applications. It also comes with a very powerful gradient based optimizer with automatic calculations of derivatives offering huge flexibility.

Given this broad vision the closest competitor is probably Theano (while Caffe and the existing Theano wrappers have a narrower focus on deep learning). TensorFlow’s distinguishing feature is that by design its focus is on large, scalable architectures with a complete flexibility in the hardware, best suited for industry/operational use, whereas the other libraries have more academic pedigrees.

Initial analyses also indicate that TensorFlow should bring performance improvements over Theano, although no comprehensive benchmarks have been published yet.

As the other packages have been out for a while, they have large, active communities and often additional supporting software (examples are the very useful wrappers around Theano, like Lasagne, Keras and Blocks, that provide higher-level abstractions for its engine).

Of course, with Google’s gravitas, one can expect that TensorFlow’s open source community will grow very fast and the contributors will quickly add a lot of additional features (and find hidden bugs).

Finally, keep in mind, that while Google provided us with this great data processing framework and some of its machine learning capabilities, it is likely that the most powerful machine learning algorithms still remain Google’s proprietary secret.

Nonetheless, TensorFlow is a huge and very welcome contribution to the open source machine learning world!

Where to go next?

You can find Google’s getting started guide here. The TensorFlow white paper is worth a read too. Source code can be found at the Github page. There is also a Vagrant virtual machine with TensorFlow pre-installed available here.