It was much easier when I was smarter than my kids.
I could rattle off any number of inaccurate responses to common childhood questions like “Why is the sky blue?” with “Because I said so, now get out of the bathroom,” and so on.
Unfortunately schooling and the Internet and their mother have taken away the vast majority of my enjoyment by providing correct and accurate answers, and even the cognitive means to deduce the answers for themselves. They have turned their new-found wisdom into insolent sarcasm, mostly directed at me. Damn them.
Now I get responses like these:
- When demonstrating a complex Minecraft world to me: “I can explain it to you but I can’t understand it for you”
- When I make a mistake but say “I meant to do that”: “What he lacks in common sense, he makes up for in self-esteem”
- When I explain what I do for a living to a family member: “Don’t fall for it <insert name>, he’s just making up words”
- When someone asks why they are so quiet: “I’m not shy, I’m just holding back my awesomeness so I don’t intimidate you”
Just recently, when I (and by “I”, I mean an external paid professional) installed a new whole-home cable system, which started providing content “recommendations” within the first few days, they were the first to recognize its sophistication. My conclusion that Eve (our Elf on the Shelf) had evaluated our viewing habits during the day and then called the cable company to inform them that our family should be watching more educational programming was summarily dismissed. It seemed like a reasonable assessment of the situation to me.
In unison, I was corrected: “The new feature is actually an implementation of a Recommender System, which is just one example of a Machine Learning algorithm. Obviously. Considering we didn’t fill out an online profile of the types of programming we enjoy, or our watching habits, or even just simple demographics, we should conclude that they are using a Latent Factors or Collaborative Filtering system to learn about our viewing habits. Duh.”
My eyes, equal parts glazed and teary, forced upon them a requirement to explain in further detail.
Out came the whiteboard. Apparently I was about to get schooled. Imagine a series of eye rolls and sighs while I’m given this detailed explanation:
Machine Learning is how computers learn to use data to make (hopefully) good predictions or decisions. In the case of movie recommendations, a machine learning program comes to learn our preferences based on the movies we watch, buy, and rate. The program then uses this accumulated knowledge to make recommendations about content we haven’t yet watched. In fact, this is also how sites like Amazon.com make their “customers like you also bought/read/looked at” recommendations.
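The recommender idea can be sketched in a few lines of Python. Everything here is invented for illustration: the family members, the shows, and the ratings are made-up stand-ins, and real systems like the cable box’s use far more data and fancier math. But the core trick, scoring shows a viewer hasn’t seen by what similar viewers liked, looks roughly like this:

```python
# Toy user-user collaborative filter. All names, shows, and ratings are
# hypothetical, invented purely to illustrate the idea.
from math import sqrt

ratings = {
    "Dad":   {"Nature Doc": 5, "Cartoon": 2, "Cooking Show": 4},
    "Kid 1": {"Nature Doc": 4, "Cartoon": 5, "Game Review": 5},
    "Kid 2": {"Cartoon": 5, "Game Review": 4, "Cooking Show": 1},
}

def similarity(a, b):
    """Cosine similarity over the shows both users have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][s] * ratings[b][s] for s in shared)
    na = sqrt(sum(ratings[a][s] ** 2 for s in shared))
    nb = sqrt(sum(ratings[b][s] ** 2 for s in shared))
    return dot / (na * nb)

def recommend(user):
    """Score each unseen show by similarity-weighted ratings from others."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for show, r in ratings[other].items():
            if show not in ratings[user]:
                scores[show] = scores.get(show, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("Dad"))  # shows Dad hasn't rated, best guess first
```

Notice that nobody filled out a profile: the preferences are inferred entirely from watching (rating) behavior, which is exactly the point the kids were making.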
In a Big Data world, you can think of Machine Learning as the forward-looking cousin of Data Mining. Whereas data mining seeks to discover properties or relationships within a data set, machine learning seeks to use a data set to enable predictions about data that has yet to be evaluated (e.g. movies we haven’t watched, bought, or rated). Machine Learning and Data Mining share the property that both enable monetization, or other non-cash derivation of value (e.g. risk/cost reduction), from data. Remember that “data is your greatest asset” story you keep telling your clients? Much like an oil refinery turns crude oil rich in potential energy into all sorts of useful products, machine learning (a data refinery, perhaps?) turns data rich in potential value into insights.
Machine Learning algorithms are often used because the sorts of problems typically solved by Machine Learning are difficult to solve using conventional programming techniques. Consider the complexity of writing code for identifying a coffee cup or a set of flying birds in a video (Computer Vision), or trying to make sense of written text (Natural Language Processing), or ensuring the safety of passengers in self-driving vehicles. Simply put, statistically or otherwise mathematically based Machine Learning algorithms work better on these kinds of problems without the burden of thousands of lines of “if then else” code!
Let’s examine in limited detail a few Machine Learning concepts:
- Training Data: People tend to learn by practice. Machine Learning algorithms are the same; they learn by training on practice data. We hope that by choosing training data that is sufficiently representative of all the possible data that may be encountered, we will be able to train a program to make useful predictions or decisions about data outside of the training set.
- Regression: Regression defines a class of Machine Learning problems focused on predicting a number of some sort. Examples of regression problems include predicting the price of a house or the temperature in Animal Kingdom tomorrow. If you imagine a two-dimensional plot of training data, the simplest form of regression would plot a line that is, on average, as close as possible to all of the points. Values for new points are predicted by finding their position on the line.
- Classification: Classification, on the other hand, is focused on putting things into groups. Think pro or con, spinney rides or boat rides, etc. Again, try to visualize a two-dimensional plot of training data. Further imagine that the data represents two groups. In this case the simplest form of classification seeks to draw a straight line between the data points so that the groups are separated as correctly as possible. We make predictions about new data simply by putting them on the graph and figuring out which side of the line they’re on.
- Decision Trees: Decision trees are a classification or regression technique with the unique property of being understandable by humans. In some cases it is insufficient for a Machine Learning algorithm to be merely correct; users of the algorithm must also understand why predictions or decisions were made. Consider the process of applying for a loan. In nearly all cases a lending institution will apply an algorithm that is based on (or looks like) a decision tree to decide whether a loan application will be granted. Decision trees work by learning human-readable rules (e.g. if credit score > x) that repeatedly split training data into subsets relative to the outcome in question.
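The regression bullet above, the “line as close as possible to all of the points,” is just ordinary least squares, and it fits in a few lines of Python. The house sizes and prices below are made up for illustration:

```python
# Least-squares line fit, done by hand. The data is invented: house size
# in square feet vs. price in thousands of dollars.
xs = [1000, 1500, 2000, 2500]   # square feet (training data)
ys = [200, 275, 350, 425]       # price in $1000s (training data)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope and intercept of the line minimizing squared vertical distance.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(sq_ft):
    """Predict a price by finding the new point's position on the line."""
    return slope * sq_ft + intercept

print(predict(1750))  # price estimate for a house size not in the training set
```

Predicting for a new house is exactly the “find its position on the line” step from the bullet: plug the size into the line’s equation.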
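For the classification bullet, a perceptron is about the simplest algorithm that literally learns a straight separating line from training data. The ride “features” below (thrill level and splash level for spinney rides vs. boat rides) are invented for illustration:

```python
# A tiny perceptron learning a line that separates two made-up groups:
# spinney rides (label +1) vs. boat rides (label -1), each described by
# hypothetical (thrill, splash) scores.
points = [((8, 2), 1), ((9, 1), 1), ((7, 3), 1),
          ((2, 8), -1), ((1, 9), -1), ((3, 7), -1)]

w = [0.0, 0.0]  # weights defining the line's direction
b = 0.0         # bias shifting the line off the origin

for _ in range(20):                 # a few passes over the training data
    for (x1, x2), label in points:
        # Nudge the line whenever a training point lands on the wrong side.
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:
            w[0] += label * x1
            w[1] += label * x2
            b += label

def side(x1, x2):
    """Classify a new point by which side of the learned line it is on."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print(side(9, 2), side(1, 8))  # a thrilling dry ride vs. a splashy calm one
```

This only works when a straight line can actually separate the groups, which is the hidden assumption in the “simplest form of classification.”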
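To see why decision trees are the human-readable option, here is what a small tree for the loan example might look like once written out as rules. The fields and thresholds are entirely hypothetical, not real lending criteria; the point is that every decision can be read and explained:

```python
# A hand-written stand-in for a learned decision tree. Each "if" is one
# human-readable rule splitting applicants into subsets; the thresholds
# are invented for illustration only.
def approve_loan(credit_score, income, debt_ratio):
    if credit_score > 700:
        if debt_ratio < 0.4:
            return "approved"
        return "denied: too much existing debt"
    if income > 100_000:
        return "approved with higher rate"
    return "denied: credit score too low"

print(approve_loan(credit_score=720, income=80_000, debt_ratio=0.3))
```

A real tree-learning algorithm would pick those split points from training data, but the final model reads just like this, which is why a loan officer (or a regulator) can trace exactly why an application was denied.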
So you see, it’s quite simple; however, Eve couldn’t possibly have done the math on her own.
Paging Mr. Morrow. Mr. Tom Morrow. I have a new definition for you.