
What Can You Do with Deep Learning in Pentaho?

 

By Ken Wood and Mark Hall

 

PMIDLLogo.png

For those of you who have installed and are using the Plugin Machine Intelligence (PMI) plugin that Hitachi Vantara Labs released to the Pentaho Marketplace back in March 2018, get ready for an exciting new update. This fall, we will release PMI version 1.4 as an update to the existing PMI, an experimental plugin for Pentaho Data Integration (PDI). Our initial release of PMI focused on classical machine learning and the ability to build, use and manage machine learning models from four popular machine learning libraries – Python’s scikit-learn, machine learning in R (MLR), Spark’s MLlib and WEKA.

 

I say classical machine learning because, traditionally, classical machine learning has had its best success on structured data. With the next release of PMI, we integrate a new machine learning library, or what we refer to as an “execution engine” – Deep Learning for Java (DL4J). This means PMI can now perform deep learning operations - training, validating, testing, building, evaluating and using deep learning models - directly from PDI.

 

AIDomainsDiagram.png

 

Deep Learning is gaining lots of attention in the industry for its ability to operate on unstructured data like images, video, audio, etc. Deep learning is a recent addition to the machine learning domain of artificial intelligence, though technically the technology has been around for quite some time.

 

NNComparison.png

 

Deep learning, to some degree, gets its name from the deep, complex, hidden neural network layers the technology creates to analyze data. To be clear, both machine learning and deep learning can operate on both structured and unstructured data; it is just that applying deep learning to unstructured data and classical machine learning to structured data reflects the current general practice, and the greater success rate, at this time.
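For readers who like to see the idea in code, here is a minimal sketch in Python with Keras (purely for illustration; this is not PMI or DL4J code, and the layer sizes are arbitrary) of a small network whose “depth” comes from stacking several hidden layers between the input and the output:

```python
# A minimal sketch of a "deep" network: several hidden layers stacked between
# the input and the output. Keras is used purely for illustration; this is not
# PMI or DL4J code, and the layer sizes are arbitrary.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),           # e.g. a small grayscale image
    layers.Flatten(),                          # vectorize the image into numbers
    layers.Dense(256, activation="relu"),      # hidden layer 1
    layers.Dense(128, activation="relu"),      # hidden layer 2
    layers.Dense(64, activation="relu"),       # hidden layer 3
    layers.Dense(5, activation="softmax"),     # e.g. a 5-class output
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```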

 

The reason we’re blogging about this now is because we showcased and demonstrated PMI v1.4 with deep learning at Hitachi NEXT 2018 in San Diego. Along with a series of one-on-one workshops showing the new deep learning step with PDI and PMI v1.4, we demonstrated an example application using deep learning in an interactive apparatus that uses two deep learning models in a PDI transformation, and then uses PDI to drive the entire application.

 

DLTransformation.png

 

When called, this PDI transformation works through several parts:

  • The “Data Capture and Data Preparation” phase
    • This portion of the transformation starts by narrating what the entire transformation will do
    • Then communicates with a Raspberry Pi to capture a picture of a physical x-ray - essentially analog to digital conversion
    • Information about the image is then transformed into image metadata. Basically, an in-memory location of the actual digital image
  • The PDI transformation then executes the two deep learning models on the x-ray image. The two deep learning models vectorize the image into usable numbers, determine the probability of identifying the body part focused on in the image, and detect whether an injury or anomaly exists.
  • The results of the two deep learning models are the probabilities from:
    • A multi-class classifier – Shoulder, Humerus, Elbow, Forearm and Hand
    • A 2-class classifier – injury or anomaly detected, yes or no
    • These probabilities are numbers between 0 and 1
  • The next phase of the PDI transformation, “Results Preparation”, takes the output probabilities (numbers between 0 and 1) from the deep learning models and prepares the results for use (a minimal sketch of this and the confidence logic follows this list).
    • Determine the most likely value – max value is the “answer”
    • Format the 5 decimal digit value into a percentage and into a string
      • This formatting allows the next phase to say “Forty seven percent” instead of "4, 7, percent sign"
  • The last phase, “Confidence Dialog Preparation”, builds the logic for the different speaking phrases and applies confidence to the result as an analysis.
    • For example, at 98% we have determined that the elbow is injured, so instead of saying “There is a 98% chance that this elbow is injured.” we simply say “I detect that this elbow is injured.” At 47%, we are not so sure, so the spoken analysis would be “I detect a 47% probability that this elbow is injured, you might want to have it checked out.”
    • This confidence logic applies to both the body part identification and the injury detection parts of the spoken analysis.
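To make the “Results Preparation” and “Confidence Dialog Preparation” phases more concrete, here is a minimal Python sketch of the same ideas. It is not the actual PDI transformation logic; the class labels come from the list above, but the confidence threshold, function name and exact phrasing are illustrative assumptions.

```python
# A minimal sketch of the "Results Preparation" and "Confidence Dialog
# Preparation" ideas. The class labels match the list above; the 90%
# threshold, function name and phrasing are illustrative assumptions.

BODY_PARTS = ["Shoulder", "Humerus", "Elbow", "Forearm", "Hand"]
CONFIDENT = 0.90  # above this we state the result without hedging

def prepare_spoken_analysis(part_probs, injury_prob):
    """part_probs: 5 probabilities (0-1), one per body part; injury_prob: 0-1."""
    # "Results Preparation": the max probability is the "answer".
    best = max(range(len(part_probs)), key=lambda i: part_probs[i])
    part = BODY_PARTS[best].lower()
    part_pct = round(part_probs[best] * 100)      # e.g. 0.47123 -> 47
    injury_pct = round(injury_prob * 100)

    # Spelling out "percent" keeps the text-to-speech output natural
    # ("forty seven percent" rather than "4, 7, percent sign").
    if part_probs[best] >= CONFIDENT:
        part_phrase = f"This is a {part}."
    else:
        part_phrase = f"I am {part_pct} percent sure this is a {part}."

    # "Confidence Dialog Preparation": assertive at high confidence, hedged otherwise.
    if injury_prob >= CONFIDENT:
        injury_phrase = f"I detect that this {part} is injured."
    else:
        injury_phrase = (f"I detect a {injury_pct} percent probability that this "
                         f"{part} is injured, you might want to have it checked out.")
    return part_phrase + " " + injury_phrase

# Example: an elbow at 47% confidence with a 98% injury probability.
print(prepare_spoken_analysis([0.12, 0.08, 0.47, 0.18, 0.15], 0.98))
```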

 

A diagram of the "Deep Learning Pipeline" can be seen here.

  • We use a "Speech Recognition Module" written in python to capture spoken phrases and determines the actions to be taken.
  • In case the environment is too noisy, a special remote control application is available to manually execute the "Hey Ray!" command set.
    • A main transformation is used to interpret the incoming tasks and orchestrate the execution of other transformations as needed.
      • The tasks include:
        • Introduction narration
        • Help on how to use "Hey Ray!"
        • Analyze the x-ray film and provide the results speech
        • Save the current analysis session to the Hitachi Content Platform (HCP)
          • During this operation, the content - the x-ray image and analysis phrases - is converted into a single-image movie file, and then all of the content is saved to HCP
        • You can have "Hey Ray!" tweet the movie file
        • Provide insightful thoughts and opinions
        • And finally, "Hey Ray!" can tell radiologist jokes

 

HeyRayDemoConfiguration.png

 

We call this demonstration “Hey Ray!”. “Hey Ray!” is just one example of applying deep learning to a situation. We came up with "Hey Ray!" because of the dataset we had access to; it just happens to be x-ray images. We could have created something with flowers, food, automobiles, etc. We also decided to speak the results and add speech recognition for demonstration purposes and "wow factor" at the Hitachi NEXT conference. We also felt that creating charts of probability distributions of numbers between 0 and 1 would take too long to explain, so why not have the demonstration speak the results? The demonstration turned out to be highly interactive, as attendees could select an x-ray picture, insert it into the x-ray viewing screen and tell the device to "Analyze the x-ray".

 

 

NEXTDemoPictureLabeled.png

 

 

We will be providing more blogs about PMI 1.4 with deep learning and other information on the artificial intelligence that goes into “Hey Ray!” in the coming months to help support this release. Stay tuned!

 

What can you do with machine learning and now deep learning in Pentaho?

 

 

IMPORTANT NOTE:

It is important to point out that this initiative is not formally supported by Hitachi Vantara, and there are no current plans on the Enterprise Edition roadmap to support PMI at this time.  It is recommended that this experimental feature be used for testing, educational and exploration purposes only. PMI is supported by Hitachi Vantara Labs and the community. Hitachi Vantara Labs was created to formally test out new ideas, explore emerging technologies and as much as possible, share our prototypes with the community and users through the Hitachi Vantara Marketplace. We like to refer to this as "providing early access to advanced capabilities". Our hope is that the community and users of these advanced capabilities will help us improve and recommend additional use cases. Hitachi Vantara has forward thinking customers and users, so we hope you will download, install and test this plugin. We would appreciate any and all of your comments, ideas and opinions.