Tony Huynh

Intelligent Automation -  You *Can* Choose Happiness (aka - Hitachi Automation Director & Alexa)

Blog Post created by Tony Huynh on Jan 19, 2018

 

Ask ten people for their thoughts on Artificial Intelligence and you will get answers that span the emotional range from “Alexa is great!” to “HAL 9000: I’m sorry Dave, I’m afraid I can’t do that”.

 

Personally, I believe that we need to embrace this nascent technology and trust that we will never need to meet HAL 9000, good intentions or not.

 

So how does Intelligent Automation impact YOU, a knowledge worker in high tech?  Especially if you’re a highly valued and highly stressed member of an IT team, responsible for responding quickly and often to business and client needs, while at the same time ensuring that you’re “keeping the lights on” with zero impact to users’ ability to access business applications.

 

You’ve read in my previous blogs how Hitachi Vantara’s Hitachi Automation Director software can help accelerate resource development and reduce manual steps by >70%.

 

THIS IS PART II of the blog: how to get an Alexa Skill up and running.

 

PART III of the blog will be posted later: Hitachi Automation Director’s capability to be integrated with an Alexa Skill.

 

Today, let’s take it a step further by discussing what you can do with Hitachi Automation Director’s flexible REST API, which carries the necessary context via a JSON payload. Specifically, HAD’s infrastructure service catalog can be presented as menu items to an upper-layer CMP (Cloud Management Platform) or to a voice-oriented CMP via an Alexa Skill. The Alexa demo is a technology preview that showcases how HAD can integrate with a northbound cloud management layer.
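To make that concrete, here is a minimal sketch of what submitting a HAD service catalog item over REST could look like. The endpoint path, port, service ID, and payload field names below are illustrative assumptions for this sketch, not the documented HAD API; check the product's REST API reference for the real resource names.

```python
# Hypothetical sketch: submitting a "create volume" catalog item to HAD via REST.
# The URL path, port, and payload keys are assumptions, not the documented API.
import json
import urllib.request


def build_submit_request(host, service_id, capacity_gb):
    """Build a POST request carrying the service context as a JSON payload."""
    payload = {
        "name": "createVolume",                      # hypothetical service name
        "submitParameters": {"capacityInGB": capacity_gb},
    }
    return urllib.request.Request(
        url="https://%s:22016/Automation/v1/objects/Services/%s/actions/submit/invoke"
            % (host, service_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Build (but don't send) a request for a 20 GB volume against a placeholder host.
req = build_submit_request("had.example.com", "42", 20)
print(req.get_method())  # POST
```

This is the shape of the call a northbound CMP (or the Alexa Skill backend in Part III) would make: the menu item becomes a URL, and the user's choices travel in the JSON body.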

 

That’s correct – use ALEXA in conjunction with Hitachi Automation Director to provision your storage, among other cool things – whoa!!!

 

 

 

FULL DISCLOSURE:  This is a technology preview demo to showcase Hitachi Automation Director capabilities, as a proof of concept. We have created an Alexa and Hitachi Automation Director demo and have shown customers what can be done. Today, this is not a formally released or supported feature. For actual production use, you need to consider factors such as cloud integration, security, and contractual obligations with cloud vendors such as Amazon (Alexa) or Google (Home).

 

Are you ready to get started?  Some minor (or major) coding to follow, depending on your background.

The following is credited to Craig Chan, Hitachi Vantara’s Director of SW Strategy, resident man of a thousand hats, and former developer, in his own words.

 

It’s always easier to leverage someone else’s work and that’s just what I did here.  I knew I wanted to use Python because we have sample Python scripts for our products that are easy to use.  So, a quick search on the internet turned up a nice Alexa Python Tutorial – Build a voice experience in 5 minutes or less.  What a great way to get started!  Setting up this dialog was a snap and it did only take 5 minutes.  I spent more time setting up my Amazon developer account.  Remember last time I took you through the dialog with Alexa to manage my storage?  Well here is where it all began.  Let’s look at the Color Picker dialog:

ME

“Alexa, open Color Picker”

ALEXA

“Welcome to the Alexa skills kit sample. Please tell me your favorite color by saying, my favorite color is red.”

ME

“My favorite color is blue.”

 

ALEXA

“I now know your favorite color is blue. You can ask me your favorite color by saying, what’s my favorite color?”

 

ME

“What’s my favorite color?”

 

ALEXA

“Your favorite color is blue. Goodbye.”

 

Simple, but effective.  I created my first Alexa Skill and it didn’t take much effort!  Now how does that all work?  Well, the trick here is that all the code is already written for you.  By selecting the “blueprint” you were basically taking Amazon’s code and using it as a skill.  We have to read through that code to understand how it works, but instead of pasting 207 lines of code here, head over to Amazon’s site and check it out.

Now I want to take that dialog and make it storage specific.  So instead of the above dialog, the new dialog looks like this:

 

ME

“Alexa, open my storage”

 

ALEXA

“Your storage is running properly. How can I help you manage your storage today? You can say, create a volume of 100 gigabytes.”

 

ME

“Create a volume of 20 gigabytes.”

 

ALEXA

“Okay. I can create a volume of 20 gigabytes. I can also format your volume for you. Just say, please format my volume.”

 

ME

“Please format my volume.”

 

ALEXA

“I am formatting your volume of 20 gigabytes. Formatting volume complete. Thank you for using Alexa to manage your storage.”

 

Great! Alexa just created and formatted a volume of 20 GB!  Well, not exactly.  You had that dialog with Alexa, but it didn’t really do anything.  Having the dialog is pretty cool though, and it did hear what capacity you asked for and listened to your request to format it.  What happened here is I took the “myColorPicker” function and just modified the text.  I also wanted to know what variables were being saved, so I changed those as well.  Now instead of saving my favorite color, it was saving my capacity choice.  Take a look at the code I attached here. It’s in Python so it’s pretty easy to read through.

 

As you read through the code you might have noticed something called an “intent”, or if you were paying real close attention, you might have noticed something else called a “slot”.  Intents are defined in the Amazon developer portal where you develop the actual skill that uses the code you put into Lambda.  The Color Picker Skill uses “MyColorIsIntent” and “WhatsMyColorIntent”.  The slot is the “LIST_OF_COLORS” or choices that you have for colors (I added purple to mine).  For my new skill, let’s call it VSPG Storage Creator, I changed the intents to “MyCapacityIsIntent” and “FormatVolumeIntent”.  Then I changed the slot to “LIST_OF_CAPACITIES”.  Now I didn’t want to go wild with capacities so only capacities of 10-100 in increments of 10 were allowed.  And one last thing, some sample utterances.  These are the phrases you are expecting the person talking to Alexa to say. Depending on how flexible you want Alexa to be, you can change this to whatever you want, but for simplicity, I just modified the Color Picker ones to “MyCapacityIsIntent Create a volume of {Capacity} gigabytes” and “FormatVolumeIntent please format my volume”.
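To picture how those pieces fit together, here is roughly what the intent schema for this skill could look like in the classic JSON format the Amazon developer portal used at the time. This is a sketch reconstructed from the intent and slot names above, not a copy of the actual skill configuration.

```json
{
  "intents": [
    {
      "intent": "MyCapacityIsIntent",
      "slots": [
        { "name": "Capacity", "type": "LIST_OF_CAPACITIES" }
      ]
    },
    { "intent": "FormatVolumeIntent" },
    { "intent": "AMAZON.HelpIntent" },
    { "intent": "AMAZON.CancelIntent" },
    { "intent": "AMAZON.StopIntent" }
  ]
}
```

The custom slot type LIST_OF_CAPACITIES would then list the allowed values (10, 20, … 100), and the sample utterances map spoken phrases to intents, with {Capacity} marking where the slot value appears in the sentence.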

 

Okay, that was a lot to read, and probably confusing unless brought into context.  Let’s follow the instructions below to first set up Lambda:

 

 

 

 

Code?! Yes code!  But this code is pretty easy, even if it’s really long.  So to make it easier on you, just copy and paste the code below into the lambda_function.py area, replacing what’s there.

"""
This is a demo VSP-G Storage skill built with the Amazon Alexa Skills Kit.
"""

from __future__ import print_function


# --------------- Helpers that build all of the responses ----------------------

def build_speechlet_response(title, output, reprompt_text, should_end_session):
    return {
        'outputSpeech': {
            'type': 'PlainText',
            'text': output
        },
        'card': {
            'type': 'Simple',
            'title': "SessionSpeechlet - " + title,
            'content': "SessionSpeechlet - " + output
        },
        'reprompt': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': reprompt_text
            }
        },
        'shouldEndSession': should_end_session
    }


def build_response(session_attributes, speechlet_response):
    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,
        'response': speechlet_response
    }


# --------------- Functions that control the skill's behavior ------------------

def get_welcome_response():
    """ If we wanted to initialize the session to have some attributes we could
    add those here
    """
    session_attributes = {}
    card_title = "Welcome"
    speech_output = "Your storage is running properly. " \
                    "How can I help you manage your storage today? " \
                    "You can say, create a volume of 100 gigabytes."
    # If the user either does not reply to the welcome message or says something
    # that is not understood, they will be prompted again with this text.
    reprompt_text = "Sorry, I didn't catch that. " \
                    "How can I help you manage your storage today? " \
                    "You can say, create a volume of 100 gigabytes."
    should_end_session = False
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))


def handle_session_end_request():
    card_title = "Session Ended"
    speech_output = "Thank you for managing your storage with Alexa. " \
                    "Have a nice day! "
    # Setting this to true ends the session and exits the skill.
    should_end_session = True
    return build_response({}, build_speechlet_response(
        card_title, speech_output, None, should_end_session))


def create_desired_capacity_attributes(desired_capacity):
    return {"desiredCapacity": desired_capacity}


def set_capacity_in_session(intent, session):
    """ Sets the capacity in the session and prepares the speech to reply to the
    user.
    """
    card_title = intent['name']
    session_attributes = {}
    should_end_session = False

    if 'Capacity' in intent['slots']:
        desired_capacity = intent['slots']['Capacity']['value']
        session_attributes = create_desired_capacity_attributes(desired_capacity)
        speech_output = "Okay. I can create a volume of " + \
                        desired_capacity + " gigabytes" \
                        ". I can also format your volume for you. " \
                        "Just say, please format my volume."
        reprompt_text = "I can also format your volume for you. " \
                        "Just say, please format my volume."
    else:
        speech_output = "I don't have that capacity available. " \
                        "Please try again."
        reprompt_text = "I don't have that capacity available. " \
                        "Please tell me a capacity number I can use."
    return build_response(session_attributes, build_speechlet_response(
        card_title, speech_output, reprompt_text, should_end_session))


def format_volume_from_session(intent, session):
    session_attributes = {}
    reprompt_text = None

    if session.get('attributes', {}) and "desiredCapacity" in session.get('attributes', {}):
        desired_capacity = session['attributes']['desiredCapacity']
        speech_output = "I am formatting your volume of " + desired_capacity + " gigabytes" \
                        ". Formatting volume complete. Thank you for using Alexa to manage your storage."
        should_end_session = True
    else:
        speech_output = "I don't have any capacity to format. " \
                        "You can say, create a volume of 100 gigabytes."
        should_end_session = False

    # Setting reprompt_text to None signifies that we do not want to reprompt
    # the user. If the user does not respond or says something that is not
    # understood, the session will end.
    return build_response(session_attributes, build_speechlet_response(
        intent['name'], speech_output, reprompt_text, should_end_session))


# --------------- Events ------------------

def on_session_started(session_started_request, session):
    """ Called when the session starts """
    print("on_session_started requestId=" + session_started_request['requestId']
          + ", sessionId=" + session['sessionId'])


def on_launch(launch_request, session):
    """ Called when the user launches the skill without specifying what they
    want
    """
    print("on_launch requestId=" + launch_request['requestId'] +
          ", sessionId=" + session['sessionId'])
    # Dispatch to your skill's launch
    return get_welcome_response()


def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """
    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])

    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']

    # Dispatch to your skill's intent handlers
    if intent_name == "MyCapacityIsIntent":
        return set_capacity_in_session(intent, session)
    elif intent_name == "FormatVolumeIntent":
        return format_volume_from_session(intent, session)
    elif intent_name == "AMAZON.HelpIntent":
        return get_welcome_response()
    elif intent_name == "AMAZON.CancelIntent" or intent_name == "AMAZON.StopIntent":
        return handle_session_end_request()
    else:
        raise ValueError("Invalid intent")


def on_session_ended(session_ended_request, session):
    """ Called when the user ends the session.

    Is not called when the skill returns should_end_session=true
    """
    print("on_session_ended requestId=" + session_ended_request['requestId'] +
          ", sessionId=" + session['sessionId'])
    # add cleanup logic here


# --------------- Main handler ------------------

def lambda_handler(event, context):
    """ Route the incoming request based on type (LaunchRequest, IntentRequest,
    etc.) The JSON body of the request is provided in the event parameter.
    """
    print("event.session.application.applicationId=" +
          event['session']['application']['applicationId'])

    """
    Uncomment this if statement and populate with your skill's application ID to
    prevent someone else from configuring a skill that sends requests to this
    function.
    """
    # if (event['session']['application']['applicationId'] !=
    #         "amzn1.echo-sdk-ams.app.[unique-value-here]"):
    #     raise ValueError("Invalid Application ID")

    if event['session']['new']:
        on_session_started({'requestId': event['request']['requestId']},
                           event['session'])

    if event['request']['type'] == "LaunchRequest":
        return on_launch(event['request'], event['session'])
    elif event['request']['type'] == "IntentRequest":
        return on_intent(event['request'], event['session'])
    elif event['request']['type'] == "SessionEndedRequest":
        return on_session_ended(event['request'], event['session'])

You’ve just coded your very own Alexa skill! As you put that Python script into Lambda, you might have noticed that we created our own names for the intents.  This leads us into configuring the skill to work with our intents.  Intents are things you want to happen.  For us, it’s about creating a volume and formatting that volume.  For these intents, we need to define a set of valid values (capacity amounts) and utterances (phrases that Alexa will understand).  Let’s configure our skill.
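Before wiring up the portal, it helps to see what actually arrives at the Lambda function when someone speaks. Here is a minimal sketch of the IntentRequest JSON that Alexa sends, trimmed to the fields the handlers above actually read; the request and session IDs are made-up placeholders.

```python
# A trimmed-down example of the IntentRequest event Alexa delivers to Lambda.
# The IDs are placeholders; only the structure matters here.
event = {
    "session": {
        "new": False,
        "sessionId": "SessionId.example",
        "application": {"applicationId": "amzn1.ask.skill.example"},
        "attributes": {},
    },
    "request": {
        "type": "IntentRequest",
        "requestId": "EdwRequestId.example",
        "intent": {
            "name": "MyCapacityIsIntent",
            "slots": {"Capacity": {"name": "Capacity", "value": "20"}},
        },
    },
}

# This is the same lookup set_capacity_in_session() performs to find
# the capacity you spoke: slot values always arrive as strings.
intent = event["request"]["intent"]
capacity = intent["slots"]["Capacity"]["value"]
print(capacity)  # prints: 20
```

Seeing the event laid out like this makes it clear why the slot is named "Capacity" in both the code and the skill configuration: the two must match exactly for the value to flow through.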

 

 

And we are done! Go ahead and test your new Alexa Skill and see how you can interact with Alexa.  Try different utterances, and even change the dialog in the code so Alexa says different things back to you.  Also, give it your own invocation name so it becomes your very own unique skill.

 

Stay tuned for Part III of the blog, same time same channel!

 

Forward!!
