
Pentaho Data Integration (PDI) and Data Science Notebook Integration

Evan Cropper – PMM - Analytics

1. Using PDI, Data Science Notebooks, and Python Together

 

Data Scientists are great at developing analytical models to achieve specific business results. However, deploying a model in a production environment requires a different skillset than data exploration and model development. The result is wasted human capital: the data scientist spends a significant amount of time engineering the data, work that is often re-written by your data engineers for production deployments.

 

So why not let the data scientist and the data engineer each focus on what they do best? With Pentaho PDI’s drag-and-drop GUI environment, data engineers can prepare and orchestrate data so that it flows seamlessly into a Data Scientist’s notebook, i.e. Jupyter (which is the focus of this blog). Data Scientists can then explore analytical models on cleansed data using the Python programming language and related Machine Learning and Deep Learning frameworks. The result? Models ready for production environments at a fraction of the cost and delivered in a fraction of the time.

 

2. Why would you want to use PDI and the Jupyter Notebook together to develop models in Python?

Pentaho allows Data Scientists to spend their time on data science models instead of data prep tasks and makes it easier to share Python scripts between data scientists and data engineers. By choosing Pentaho to operationalize data science, organizations can:

 

  1. Utilize a graphical drag-and-drop development environment, which makes data engineering easier with a toolbox of connectors to many data sources (easily configured instead of coded), tools that can blend data from multiple sources, and transformation steps to cleanse and normalize the data.
  2. Migrate data to production environments with minimal changes.
  3. Scale seamlessly to address growing production data volumes, and
  4. Share production-quality data sets and Python scripts between Data Engineers and Data Scientists, as shown below in a collaborative workflow between the two personas:

 

JN-1.jpg

 

 

https://whatsthebigdata.com/2016/05/01/data-scientists-spend-most-of-their-time-cleaning-data/

3. How do you develop Python models using PDI and Jupyter Notebooks?

JN-2.jpg

Dependencies and components tested with:

  • Pentaho versions: 8.1/8.2
  • Python.org 2.7.x or Python 3.5.x
  • Jupyter Notebook 5.6.0
  • Python JDBC package dependencies, i.e. JayDeBeApi and jpype

The Pentaho PDI Data Service online help covers configuration, installation, client jars, and more: https://help.pentaho.com/Documentation/8.2/Products/Data_Integration/Data_Services
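To illustrate how these dependencies fit together, here is a minimal sketch of querying a PDI Data Service from Python over JDBC with JayDeBeApi. The host, port, credentials, service name, and jar paths are placeholders for your own environment; get the client jars and the exact connection details from the help link above.

```python
# Minimal sketch: pull rows from a PDI Data Service into pandas via JDBC.
# All connection details below are placeholders; adjust to your environment.
import jaydebeapi
import pandas as pd

conn = jaydebeapi.connect(
    "org.pentaho.di.trans.dataservice.jdbc.ThinDriver",     # PDI thin JDBC driver
    "jdbc:pdi://localhost:8080/kettle?webappname=pentaho",  # Pentaho Server URL (placeholder)
    ["admin", "password"],                                   # repository credentials (placeholder)
    ["/path/to/pdi-dataservice-client.jar"])                 # client jars from the help link above

# A Data Service is queried like a table, using the Data Service name.
cursor = conn.cursor()
cursor.execute('SELECT * FROM "my_data_service"')
columns = [desc[0] for desc in cursor.description]
df = pd.DataFrame(cursor.fetchall(), columns=columns)
cursor.close()
conn.close()

print(df.head())
```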

 

Basic process:

 

  1. In PDI, create a new Transformation connected to the Pentaho Server repository. Implement all of your data connections, blending, filtering, cleansing, etc., as shown in the example below.

 

JN-3.jpg

 

2. Use PDI's Data Service feature to export rows from the PDI transformation (which will later be consumed in a Jupyter Notebook). Create a new Data Service by right-clicking on the last step in the transformation. Test the Data Service within the UI and select Save Transformation As to save the Data Service to the Pentaho Server.

 

JN-4.jpg

JN-5.jpg

3. Before the Data Scientist can work in the Jupyter Notebook, utilize a Data Grid step to review the Data Grid fields and data values. These input variables will flow into the Python Executor step; note that the Data Engineer can easily change them for new PDI Data Services.

 

 

JN-6.jpg

JN-7.jpg

JN-8.jpg

 

4. Below, the Python Executor – Create Jupyter Notebook – Python API step contains the Python script, input and output references, and more. From here, the Data Engineer can create the Jupyter Notebook for the Data Scientist to consume.

JN-9.jpg

 

5. The Python Executor – Create Jupyter Notebook – Python API step automatically populates the Jupyter Notebook (shown below) with the cleansed and orchestrated data from the transformation. The Data Scientist is connected directly to the PDI Data Service created earlier by the Data Engineer.

JN-10.jpg

 

 

6. The Data Scientist retrieves the Jupyter Notebook file created by the Data Engineer and PDI (from an Enterprise Data Catalog, file share, etc.) and confirms the output of the Python Pandas DataFrame named df in the last cell.

JN-11.jpg
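That last-cell check can be as simple as the following, assuming the generated notebook exposes the Data Service rows as the DataFrame df described above:

```python
# Confirm the Data Service rows arrived as the pandas DataFrame named df.
print(df.shape)
df.head()
```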

7. From here, the Data Scientist can begin building, evaluating, processing, and saving machine and deep learning models using the Pandas DataFrame named df. An example using a Machine Learning Decision Tree Classifier is shown in the screenshots below.
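A minimal sketch of that kind of notebook cell follows; it assumes df holds the Data Service rows and uses an illustrative label column named target (the actual feature and label columns depend on your transformation, and the screenshots below show the blog's own example):

```python
# Train and evaluate a simple Decision Tree on the Data Service rows in df.
import pickle
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = df.drop(columns=["target"])   # feature columns (illustrative)
y = df["target"]                  # label column (illustrative)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = DecisionTreeClassifier(max_depth=5, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the trained model so it can be handed back for deployment.
with open("decision_tree_model.pkl", "wb") as f:
    pickle.dump(model, f)
```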

JN-12.jpg

 

JN-13.jpg

JN-14.jpg

 

 

4. How can Data Engineers and Data Scientists using Python collaborate better with PDI and Jupyter Notebooks?

 

  1. PDI’s graphical development environment makes data engineering easier.
  2. Data Engineers can easily migrate PDI applications to production environments with minimal changes.
  3. Data Engineers can scale PDI applications to meet production data volumes.
  4. Data Engineers can quickly respond to Data Scientist’s data set requests with PDI Data Services.
  5. Data Scientists can easily access Jupyter Notebook templates connected to a PDI Data Service.
  6. Data Scientists can quickly pull data on demand from the Data Service and get to work on what they do best!

Anand Rao, Principal Product Marketing Manager, Pentaho

 

 

1 Deep Learning – What is the Hype?

 

According to Zion Market Research, the deep learning (DL) market will increase from $2.3 billion in 2017 to over $23.6 billion by 2024. With a CAGR of almost 40%, DL has become one of the hottest areas for Data Scientists to create models[1]. Before we jump into how Pentaho can help operationalize your organization’s DL models within production environments, let’s take a step back and review why DL can be so disruptive. Below are some of the characteristics of DL. It:

DL-10.jpg

DL-11.jpg

 

 

 

  • Uses Artificial Neural Networks with multiple hidden layers that can perform powerful image recognition, computer vision/object detection, video stream processing, natural language processing, and more. Improvements in DL offerings and in processing power, such as GPUs and the cloud, have accelerated the DL boom in the last few years.
  • Attempts to mimic the activity of the human brain via layers of neurons; DL learns to recognize patterns in digital representations of sounds, video streams, images, and other data.
  • Reduces the need to perform feature engineering prior to running the model through the use of multiple hidden layers, performing feature extraction on the fly as the model runs.
  • Improves performance and accuracy over traditional Machine Learning algorithms thanks to updated frameworks, the availability of very large data sets (i.e. Big Data), and major improvements in processing power, i.e. GPUs.
  • Provides development frameworks, environments, and offerings, i.e. Tensorflow, Keras, Caffe, PyTorch, etc., that make DL more accessible to data scientists.

 

2 Why should you use PDI to develop and operationalize Deep Learning models in Python?

 

Today, Data Scientists and Data Engineers have collaborated on hundreds of data science projects built in PDI. With Pentaho, they’ve been able to migrate complex data science models to production environments at a fraction of the cost compared to traditional data preparation tools. We are excited to announce that Pentaho can now bring this ease of use to DL frameworks, furthering Hitachi Vantara’s goal of enabling organizations to innovate with all their data. With PDI’s new Python Executor step, Pentaho can:

  • Integrate with popular DL frameworks in a transformation step, expanding upon Pentaho’s existing robust data science capabilities.
  • Easily implement DL Python script files received from Data Scientists within the new PDI Python Executor step.
  • Run DL models on any CPU/GPU hardware, enabling organizations to use GPU acceleration to enhance the performance of their DL models.
  • Incorporate data from previous PDI steps, via the data pipeline flow, as a Python Pandas DataFrame or NumPy array within the Python Executor step for DL processing.
  • Integrate with Hitachi Content Platform (HDFS, Local, S3, Google Storage, etc.), allowing unstructured data files to be moved and staged to a target locale (i.e. Data Lake, etc.) and reducing DL storage and processing costs.

 

Benefits:

  • PDI supports the most widely used DL frameworks, i.e. Tensorflow, Keras, PyTorch, and others that have a Python API, allowing Data Scientists to work within their favorite libraries.
  • PDI enables Data Engineers and Data Scientists to collaborate while implementing DL.
  • PDI allows for efficient allocation of the skills and resources of the Data Scientist (build, evaluate, and run DL models) and the Data Engineer (create data pipelines in PDI for DL processing) personas.

 

3 How does PDI operationalize Deep Learning?

 

Components referenced are:

  • Pentaho 8.2, PDI Python Executor Step, Hitachi Content Platform (HCP) VFS
  • Python.org 2.7.x or Python 3.5.x
  • Tensorflow 1.10
  • Keras 2.2.0

Review the Pentaho 8.2 Python Executor step in the Pentaho online help for the list of dependencies: Python Executor - Pentaho Documentation

Basic process:

 

1. Reference an HCP VFS file location within a PDI step to copy and stage unstructured data files for use by DL framework processing within the PDI Python Executor step.

DL-9.jpg

 

Additional info: https://help.pentaho.com/Documentation/8.2/Products/Data_Integration/Data_Integration_Perspective/Virtual_File_System

 

2. Utilize a new Transformation that implements workflows for processing DL frameworks and their associated data sets. Inject hyperparameters (values to be used for tuning and execution of models) to evaluate the best-performing model. Below is an example that implements four DL framework workflows, three using Tensorflow and one using Keras, with the Python Executor step.

 

 

DL-12.jpg

 

DL-13.jpg

 

 

 

3. Focusing on the Tensorflow DNN Classifier workflow (which implements injection of hyperparameters), utilize a PDI Data Grid step, here named Injected Hyperparameters, with values used by the corresponding Python Script Executor steps.

 

 

4. Within the Python Script Executor step, use the Pandas DF option and map the Injected Hyperparameters and their values as variables in the Input tab.

 

DL-4.jpg

5. Execute the DL-related Python script (either embedded or via a URL to a file), referencing a DL framework and the Injected Hyperparameters from the inputs. You can also set the Python Virtual Environment to a path other than the default Python install.

DL-5.jpg
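A hedged sketch of what such a script might look like is shown below. It assumes the step’s Input tab exposes the Injected Hyperparameters row as a one-row DataFrame named hyperparams and the training rows as a DataFrame named df; these variable names, the column names, and the hyperparameter fields (hidden_units, train_steps) are illustrative, not taken from the screenshots. The small fallbacks let the sketch also run outside PDI.

```python
# Sketch: train a TensorFlow 1.10 DNNClassifier using injected hyperparameters.
import pandas as pd
import tensorflow as tf

# 'hyperparams' and 'df' are assumed to be provided by the Python Executor
# step's Input tab; the fallbacks below only exist so the sketch runs standalone.
try:
    hyperparams
except NameError:
    hyperparams = pd.DataFrame([{"hidden_units": "10,10", "train_steps": 200}])
try:
    df
except NameError:
    df = pd.DataFrame({"f1": [0.1, 0.4, 0.35, 0.8],
                       "f2": [1.0, 0.2, 0.6, 0.3],
                       "label": [0, 1, 1, 0]})

# Read the injected hyperparameter values (illustrative field names).
hidden_units = [int(n) for n in str(hyperparams["hidden_units"].iloc[0]).split(",")]
train_steps = int(hyperparams["train_steps"].iloc[0])

# Build the estimator from the non-label columns.
feature_cols = [tf.feature_column.numeric_column(c) for c in df.columns if c != "label"]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_cols,
    hidden_units=hidden_units,
    n_classes=2)

# Train, then evaluate; the evaluation metrics can be exposed via the Output tab.
train_fn = tf.estimator.inputs.pandas_input_fn(
    x=df.drop(columns=["label"]), y=df["label"],
    batch_size=2, num_epochs=None, shuffle=True)
classifier.train(input_fn=train_fn, steps=train_steps)

eval_fn = tf.estimator.inputs.pandas_input_fn(
    x=df.drop(columns=["label"]), y=df["label"], num_epochs=1, shuffle=False)
metrics = pd.DataFrame([classifier.evaluate(input_fn=eval_fn)])
print(metrics)
```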

 

6. Verify that you have Tensorflow installed and configured, and that it imports correctly into a Python shell.

DL-6.jpg
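For example, a quick check from a Python shell (the GPU line only matters if you expect GPU acceleration):

```python
# Sanity check that TensorFlow is installed and importable.
import tensorflow as tf
print(tf.__version__)              # expect 1.10.x per the component list above
print(tf.test.is_gpu_available())  # True only if a usable GPU is visible
```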

 

 

7. Going back to the Python Executor step, click on the Output tab and then click the Get Fields button. PDI will pre-check the script file to verify it for errors, outputs, etc.

DL-7.jpg

 

8. This completes the configuration needed to run the transformation.

 

4 Does Hitachi Vantara also offer GPU offerings to accelerate Deep Learning execution?

 

DL frameworks can benefit substantially from executing on a GPU rather than a CPU, because most DL frameworks include some form of GPU acceleration. In 2018, Hitachi Vantara engineered and delivered the DS225 Advanced Server with NVIDIA Tesla V100 GPUs, its first GPU server designed specifically for DL implementations.

 

DL-8.jpg

More details about the GPU offering are available here: https://www.hitachivantara.com/en-us/pdfd/datasheet/advanced-server-ds225-datasheet.pdf

 

5 Why Organizations use PDI and Python with Deep Learning:

 

  • Intuitive drag-and-drop tools: PDI makes the implementation and execution of DL frameworks easier with its graphical development environment for DL-related pipelines and workflows.
  • Improved collaboration: Data Engineers and Data Scientists can work on a shared workflow and utilize their skills and time efficiently.
  • Better allocation of valuable resources: The Data Engineer can use PDI to create the workflows, move and stage unstructured data files from/to HCP, and configure injected hyperparameters in preparation for the Python script received from the Data Scientist.
  • Best-in-Class GPU Processing: Hitachi Vantara offers the DS225 Advanced Server with NVIDIA Tesla V100 GPUs that allow DL frameworks to benefit from GPU acceleration.

 

 

 

[1] Global Deep Learning Market Will Reach USD 23.6 Billion By 2024 End: Zion Market Research