
Turbocharge Business Analytics With In-memory Computing

One of the customer traits that has gradually diminished over the years is patience; if a customer-facing website or application doesn’t deliver real-time or near-instant results, customers will look elsewhere. This trend has pushed companies to turn to in-memory computing for the speed needed to meet customer demands in real time. In-memory computing simplifies access to multiple data sources and delivers performance thousands of times faster than disk-based storage systems. By storing data in RAM and processing it in parallel against the full dataset, in-memory computing solutions provide real-time insights that lead to informed business decisions and improved performance.

The in-memory computing solutions market has been on the rise in recent years because it has been heralded as the platform that will accelerate IT modernization. In-memory data grids, in particular, show great promise because they address the main limitation of an in-memory relational database: while the latter is designed to scale up, the former is designed to scale out. This scalability is one of the main draws of an in-memory data grid, since a scale-up architecture is not sustainable in the long term and will always have a breaking point. In-memory data grids, on the other hand, benefit from horizontal scalability and computing elasticity. Scaling an in-memory data grid is as simple as adding nodes to a cluster and removing them when they are no longer needed. This is especially useful for businesses that need to manage hundreds of terabytes of data at speed across multiple networked computers in geographically distributed data centers.

Since big data is complex and fast-moving, keeping data synchronized across data centers is vital to preserving data integrity. Keeping data in memory removes the bottleneck caused by constant access to disk-based storage and allows applications and their data to be collocated in the same memory space. This collocation enables optimizations that let the working dataset exceed the amount of available memory: frequently accessed data is kept in memory while the rest stays on disk, so data can reside both in memory and on disk without sacrificing speed or efficiency.

Future-proofing Businesses With In-memory Computing

Data analytics is as much a part of every business as other marketing and business intelligence tools. Because data grows at an exponential rate, in-memory computing serves as the enabler of data analytics, providing speed, high availability, and straightforward scalability. With speeds more than 100 times faster than other solutions, in-memory computing delivers real-time insights applicable across a host of industries and use cases.

Location-based Marketing

A report from 2019 shows that location-based marketing helped 89% of marketers increase sales, 86% grow their customer base, and 84% improve customer engagement. Location data can be leveraged to identify patterns of behavior by analyzing frequently visited locations. By understanding why certain customers frequent specific locations and knowing when they are there, you can better target your marketing messages and make more strategic customer acquisitions. Location data can also be used as a demographic identifier to help you segment your customers and tailor your offers and messaging accordingly.

Fraud Detection

In-memory computing helps improve operational intelligence by detecting anomalies in transaction data immediately. Through high-speed analysis of large amounts of data, potential risks are detected early and addressed as soon as possible. Transaction data is fast-moving and changes frequently, and in-memory computing is equipped to handle data as it changes. This is why it’s an ideal platform for payment processing: it can compare current transactions against the full history of transactions on record in a matter of seconds. Companies typically have several fraud detection measures in place, and in-memory computing allows these algorithms to run concurrently without compromising overall system performance. This keeps systems responsive even at peak transaction volumes and avoids interruptions to customer service.

Tailored Customer Experiences

The real-time insights delivered by in-memory computing help personalize experiences based on customer data. Because customer experiences are time-sensitive, processing and analyzing data at high speed is vital for capturing the real-time event data used to craft the best possible experience for each customer. Without in-memory computing, obtaining real-time data and the other information needed for a seamless customer experience would be nearly impossible.

Real-time data analytics helps provide personalized recommendations based on both existing and new customer data. By looking at historical data like previously visited pages and comparing them with newer data from the stream, businesses can craft the proper messaging and plan the next course of action. The anticipation and forecasting of customers’ future actions and behavior is the key to improving conversion rates and customer satisfaction—ultimately leading to higher revenues and more loyal customers.

Conclusion

Big data is the future, and companies that don’t use it to their advantage will find it hard to compete in an ever-connected world that demands results in an instant. Processing and analyzing data will only become more complex and challenging over time, which is why in-memory computing is a solution companies should consider. Aside from improving the business from within, it helps drive customer acquisition and revenue while providing a viable low-latency, high-throughput platform for high-speed data analytics.

How To Remotely Send R and Python Execution to SQL Server from Jupyter Notebooks

Introduction

Did you know that you can execute R and Python code remotely in SQL Server from Jupyter Notebooks or any IDE? Machine Learning Services in SQL Server eliminates the need to move data around. Instead of transferring large and sensitive data over the network or losing accuracy by training ML models on sample CSV files, you can have your R/Python code execute within your database. You can work in Jupyter Notebooks, RStudio, PyCharm, VSCode, Visual Studio, or wherever you want, and then send function execution to SQL Server, bringing intelligence to where your data lives.

This tutorial shows an example of how you can send your Python code from Jupyter Notebooks to execute within SQL Server. The same principles apply to R and any other IDE as well. If you prefer to learn through videos, this tutorial is also published on YouTube.

Environment Setup Prerequisites

  1. Install ML Services on SQL Server

In order for R or Python to execute within SQL, you first need the Machine Learning Services feature installed and configured. See this how-to guide.

  2. Install RevoscalePy via Microsoft’s Python Client

In order to send Python execution to SQL Server from Jupyter Notebooks, you need to use Microsoft’s RevoscalePy package. To get RevoscalePy, download and install Microsoft’s ML Services Python Client; see the Documentation Page or the Direct Download Link (for Windows).

After downloading, open PowerShell as an administrator and navigate to the download folder. Start the installation with this command (feel free to customize the install folder):

    .\Install-PyForMLS.ps1 -InstallFolder "C:\Program Files\MicrosoftPythonClient"

Be patient; the installation can take a little while. Once it completes, navigate to the folder you installed into. Let’s make an empty folder and open Jupyter Notebooks:

    mkdir JupyterNotebooks; cd JupyterNotebooks; ..\Scripts\jupyter-notebook

Create a new notebook with the Python 3 interpreter.

To test that everything is set up, import revoscalepy in the first cell and execute it. If there are no error messages, you are ready to move forward.
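
As a quick sanity check, a first cell along these lines should run without errors if the client is installed correctly:

    # If this cell runs cleanly, the Python client and revoscalepy are ready.
    import revoscalepy
    print("revoscalepy imported successfully")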

Database Setup (Required for this tutorial only)

For the rest of the tutorial you can clone this Jupyter Notebook from GitHub if you don’t want to copy-paste all of the code. This database setup is a one-time step to ensure you have the same data as this tutorial. You don’t need to perform any of these setup steps to use your own data.

  1. Create a database

Modify the connection string for your server and use pyodbc to create a new database.
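
A rough sketch of this step is below; the driver, server name, authentication, and the database name irissql are placeholders you should adjust for your environment:

    import pyodbc

    # Connect to the master database so we can issue CREATE DATABASE.
    # Adjust the driver, server, and authentication for your environment.
    connection_string = "Driver=SQL Server;Server=localhost;Database=master;Trusted_Connection=Yes;"
    cnxn = pyodbc.connect(connection_string, autocommit=True)

    # Create the tutorial database if it does not already exist.
    cnxn.execute(
        "IF NOT EXISTS (SELECT * FROM sys.databases WHERE name = 'irissql') "
        "CREATE DATABASE irissql"
    )
    cnxn.close()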

  2. Import Iris sample from SkLearn

Iris is a popular dataset for beginner data science tutorials. It is included by default in the sklearn package.
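
Loading it into a pandas DataFrame takes only a few lines; the column layout below (the four features plus a Species column) is just one reasonable choice:

    from sklearn import datasets
    import pandas as pd

    # Load the Iris sample and flatten it into a single DataFrame.
    iris = datasets.load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df["Species"] = iris.target

    print(df.head())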

  3. Use RevoscalePy APIs to create a table and load the Iris data

(You can also do this with pyodbc, sqlalchemy or other packages)
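
Here is a sketch using RevoscalePy; it assumes the DataFrame df and the connection details from the previous steps, plus a destination table name of iris_data. rx_data_step writes the DataFrame into the SQL Server table described by RxSqlServerData:

    from revoscalepy import RxSqlServerData, rx_data_step

    # Point at the database created earlier (adjust server/auth as needed).
    connection_string = "Driver=SQL Server;Server=localhost;Database=irissql;Trusted_Connection=Yes;"

    # Describe the destination table and write the DataFrame into it.
    sql_table = RxSqlServerData(table="iris_data", connection_string=connection_string)
    rx_data_step(input_data=df, output_file=sql_table, overwrite=True)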

Define a Function to Send to SQL Server

Write any Python code you want to execute in SQL Server. In this example we create a scatter matrix on the Iris dataset and return only the bytestream of the resulting .png back to Jupyter Notebooks to render on the client.
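
A sketch of such a function follows. It runs inside SQL Server’s Python runtime, so all of its imports and the connection string live inside the function body; the function name, table name, and connection string are carried over from the assumptions above:

    def send_this_func_to_sql():
        # Everything here executes inside SQL Server's Python runtime,
        # so imports and the connection string must live in the function body.
        import io
        import matplotlib
        matplotlib.use("Agg")  # no display available inside SQL Server
        import matplotlib.pyplot as plt
        from pandas.plotting import scatter_matrix
        from revoscalepy import RxSqlServerData, rx_import

        connection_string = "Driver=SQL Server;Server=localhost;Database=irissql;Trusted_Connection=Yes;"

        # Query the Iris table and load it into a DataFrame on the server.
        sql_query = RxSqlServerData(sql_query="SELECT * FROM iris_data",
                                    connection_string=connection_string)
        df = rx_import(sql_query)

        # Build the scatter matrix and return only the PNG bytes to the client.
        scatter_matrix(df)
        buf = io.BytesIO()
        plt.savefig(buf, format="png")
        buf.seek(0)
        return buf.getvalue()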

Send execution to SQL

Now that we are finally set up, check out how easy sending remote execution really is! First, import revoscalepy. Create a sql_compute_context, and then send the execution of any function seamlessly to SQL Server with rx_exec. No raw data had to be transferred from SQL Server to the Jupyter Notebook; all computation happened within the database, and only the image file was returned to be displayed.
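
A minimal sketch of this final step, reusing the connection string assumed above (rx_exec returns one result per task, so we take the first element):

    from revoscalepy import RxInSqlServer, rx_exec
    from IPython import display

    connection_string = "Driver=SQL Server;Server=localhost;Database=irissql;Trusted_Connection=Yes;"

    # Compute context that tells revoscalepy to run the function inside SQL Server.
    sql_compute_context = RxInSqlServer(connection_string=connection_string,
                                        num_tasks=1, auto_cleanup=False)

    # Execute the function remotely; only the PNG bytestream comes back.
    results = rx_exec(send_this_func_to_sql, compute_context=sql_compute_context)
    display.Image(data=results[0])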

While this example is trivial with the Iris dataset, imagine the additional scale, performance, and security capabilities that you have now unlocked. You can use any of the latest open-source R/Python packages to build deep learning and AI applications on large amounts of data in SQL Server. We also offer leading-edge, high-performance algorithms in Microsoft’s RevoScaleR and RevoScalePy APIs. Using these together with the latest innovations in the open-source world allows you to bring unparalleled selection, performance, and scale to your applications.

Learn More

Check out the SQL Server Machine Learning Services documentation to learn how you can easily deploy your R/Python code with SQL stored procedures, making it accessible in your ETL processes or to any application. Train and store machine learning models in your database, bringing intelligence to where your data lives.
