Machine Learning Simplified

Lisa Olson
May 26, 2018

It’s crazy to think about how comfortable we’ve gotten as a society interacting with our devices. We’re not fazed by regularly talking to our cellphones or TVs and watching them respond. It’s interesting to think how much high tech the younger generations will grow up with as their sense of ‘normal’. In that sense, when words like ‘Artificial Intelligence’ and ‘Machine Learning’ get thrown around, whether we know it or not, we all have somewhat of an idea of what they look like. But let’s break it down even more.

Artificial Intelligence

So what is machine learning? To talk about machine learning, we first need to talk about artificial intelligence. Artificial intelligence is the overarching concept of developing software that can perform tasks that would normally require human involvement. In other words, it’s the idea of machines having the capacity to behave and make decisions like humans. Examples you’re most likely familiar with are the navigation systems on smartphones, email inboxes, and plagiarism checkers. A navigation app can predict the best route as it gathers information while you’re driving, an inbox can dynamically sort emails into categories based on what it reads, and a plagiarism checker can compare two essays and determine their similarities and differences. These are all things that would normally take human involvement.

Machine Learning

So that’s artificial intelligence, at least at a high level. It’s something that, at this point, you’ve most likely interacted with, seen, and experienced. How does machine learning come into play? What does it mean, and how does it differ from artificial intelligence? Machine learning is the capability of a computer program to improve how it performs a certain task based on past experience. It’s a machine that has the ability to learn. How is it possible that we can go to our social media websites and the computer, not us, can recognize our friends and suggest them to us? Tag suggestions are all done through machine learning. Other examples are the recommendations we see on Netflix, Amazon, Instagram, etc. What do these examples have in common? They take in a massive amount of data and intelligently sort through it to spit out some options. Take Facebook’s tagging feature: it takes in all the faces it knows of, compares them to the data set it has on hand, and spits out some suggested tags for us to choose from. In the case of Netflix or Amazon, it takes in all the data it has about what we like, what we generally click on, and the similarities between TV shows, and spits out shows that follow a specific set of criteria.

We make these types of decisions all the time as humans. We decide which restaurant to go to based on Yelp reviews and a look at the menu, maybe debating prices, thinking about how much we usually spend, and picking something within a middle range. That’s what machine learning does! As a subset of artificial intelligence, it makes ‘smart’ decisions based on the data given to it, similar to those we make as humans.

Hopefully you now have a grasp on what’s happening conceptually at a high level. But what about the ‘how’ question? How does this actually work? I think when discussing the ‘how’, it starts to feel pretty overwhelming and confusing, which is why everything gets tossed into the ‘artificial intelligence’ bucket. But I’ll do my best to break it down into bite-size, comprehensible pieces.

Supervised vs. Unsupervised Learning

You can break machine learning down into two separate categories called ‘Supervised Learning’ and ‘Unsupervised Learning’. In supervised learning, the program is fed labeled data and the desired output to learn from. For example, think of the labels in children’s books when they’re learning their first words. A picture of a fish would have a sentence stating, ‘this is a fish’. The program ‘learns’ this way, by memorizing these labels over hundreds of pictures of fish, and by using that knowledge, it can make predictions about what it’s seeing. In unsupervised learning, you have the opposite: all the data is unlabeled, and the program learns by finding patterns in the data it sees, without any instruction or ‘answers’, so to speak. This would be similar to jumping into a foreign country and trying to learn the language purely by listening and interacting with native speakers, not by sitting in a classroom or being told directly what words mean. A third category worth mentioning is ‘Reinforcement Learning’. This is when the program is ‘graded’ on its attempts. For example, it plays a specific game hundreds or millions of times and records the ‘failures’. It learns by failing and receiving feedback; the ‘trial-and-error’ approach.
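To make the distinction concrete, here’s a minimal sketch in Python of what the two kinds of data look like. The ‘fish’ and ‘bird’ measurements are made up purely for illustration:

```python
# Supervised learning: each example comes paired with a label (the 'answer'),
# like the caption under the picture of the fish.
supervised_data = [
    ((3.0, 1.2), "fish"),   # (hypothetical measurements, label)
    ((2.8, 1.0), "fish"),
    ((0.5, 4.0), "bird"),
]

# Unsupervised learning: the same kind of measurements, but no labels.
# The program has to find structure (groups, patterns) on its own.
unsupervised_data = [(3.0, 1.2), (2.8, 1.0), (0.5, 4.0)]

# The only difference in what the program is given is the label.
features, label = supervised_data[0]
print(features, label)
```

The data itself is the same either way; what changes is whether the program gets to see the answers while it learns.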

Supervised vs. unsupervised are the two major categories for splitting up machine learning. From there, it subdivides even further into different approaches. The most popular approaches for answering the ‘how’ question (how does this actually work?) are k-nearest neighbor, decision tree learning, and deep learning, but we’ll mention a few others to get a more well-grounded idea of how these concepts are actualized and implemented.

Let’s jump into these and see how they work!

k-nearest neighbor

The way this algorithm works, at a very high level, is that it decides how ‘near’ pieces of data are to each other; hence the name, nearest neighbor. Each piece of data is a point in space, and when a new, unlabeled point comes in, the algorithm finds the k labeled points closest to it and classifies the new point by majority vote. So picture a drawing with a new dot in the center surrounded by other dots of varying colors: with k = 3, k-nearest neighbor would circle the 3 dots closest to the new one and, if most of them are red, label the new dot red, since it’s nearest to those most similar to it.
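Here’s a minimal sketch of that idea in Python. The points, colors, and value of k are made up for illustration:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest labeled points.

    `train` is a list of (point, label) pairs; points are (x, y) tuples.
    """
    # Sort training points by Euclidean distance to the query point.
    by_distance = sorted(train, key=lambda pair: math.dist(pair[0], query))
    # Take the labels of the k closest points and pick the most common one.
    labels = [label for _, label in by_distance[:k]]
    return Counter(labels).most_common(1)[0][0]

# Three red points cluster near the origin, three blue near (10, 10).
train = [((0, 0), "red"), ((1, 0), "red"), ((0, 1), "red"),
         ((10, 10), "blue"), ((9, 10), "blue"), ((10, 9), "blue")]

print(knn_predict(train, (1, 1)))   # → red  (nearest neighbors are all red)
print(knn_predict(train, (9, 9)))   # → blue (nearest neighbors are all blue)
```

Notice there’s no real ‘training’ step: the algorithm just keeps the labeled points around and measures distances when asked.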

Decision Tree Learning

Decision trees are somewhat explained by the name itself. They make decisions based on particular conditions, deciding whether to spit out one answer or another based on the data they see. To oversimplify, think of pictures of birds and cows. There’s a set of parameters defining birds and defining cows; for example, cows have 4 legs, birds have 2 legs. Based on the picture it’s seeing, the decision tree decides whether the data is weighted more heavily towards a bird or towards a cow, and it learns from these repeated decisions.
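A tiny hand-built tree makes the structure obvious. In real decision tree learning the splits are chosen automatically from the data; here they’re written by hand just to show what the learned tree ends up looking like:

```python
def classify_animal(legs, has_wings):
    """A tiny hand-built decision tree: split on leg count, then on wings."""
    if legs == 4:
        return "cow"      # the 4-legs branch
    elif has_wings:
        return "bird"     # the 2-legs-with-wings branch
    return "unknown"      # nothing in our tiny tree matches

print(classify_animal(legs=4, has_wings=False))  # → cow
print(classify_animal(legs=2, has_wings=True))   # → bird
```

Each `if` is one decision node; following the branches from the top down to an answer is exactly the ‘tree’ in the name.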

Neural Networks

The artificial neural network concept was set up with the idea of mimicking how the brain functions. Our brain sends chemical signals from one neuron to the next based on what it’s seeing and interacting with. It passes the updated information off, neuron to neuron, until it arrives at its final, conclusive answer. A neural network functions in a similar fashion. The network has an input layer, an in-between (‘hidden’) layer, and a final output layer. Let’s use the example of the cows and the birds again. The input layer provides data from one of the animals needing classification; for example, the number of legs, the appearance of wings, or the relative size or weight. The in-between layer then transforms input into output, classifying the data. To go into a little more detail, each neuron in these layers holds a value between 0 and 1 indicating how strongly it’s activated by the layer before it. The connections between layers are weighted, determining on a scale how much of a particular feature is present, and the result is passed along to the next layer. By adjusting these weights and biases, the network is able to learn.
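Here’s a sketch of a single forward pass through such a network in Python. The inputs are hypothetical (number of legs, whether wings appear), and the weights and biases are picked by hand purely for illustration; in real training, the network would learn them:

```python
import math

def sigmoid(x):
    # Squash any value into the 0-to-1 range described above.
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, hidden_biases, out_weights, out_bias):
    """One pass through a network with a single in-between (hidden) layer."""
    # Each hidden neuron takes a weighted sum of the inputs plus a bias.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_weights, hidden_biases)]
    # The output neuron does the same over the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(out_weights, hidden)) + out_bias)

# Inputs: [number of legs, has wings]. Same hand-picked weights for both animals.
hw, hb, ow, ob = [[-1.0, 2.0], [0.5, 1.0]], [0.0, -1.0], [2.0, 1.0], -2.0
bird_score = forward([2, 1], hw, hb, ow, ob)
cow_score = forward([4, 0], hw, hb, ow, ob)
print(bird_score > cow_score)  # the 'bird' input activates the output more
```

Every value flowing between layers is a number between 0 and 1; the weights decide how strongly each one influences the next layer, which is exactly what training adjusts.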

Deep Learning

If you’ve heard anything about machine learning before, you’ve probably heard about deep learning. Deep learning refers to multiple layers within the neural network. Rather than a single in-between layer, there can be many layers, creating a more complex and dynamic neural network.

Linear and Logistic Regression

Linear regression falls under the supervised learning category, and the way it works is that it predicts outcomes based on continuous features. Picture a graph in your mind with points scattered across it. Linear regression tries to fit a line to the points it sees, capturing either a positive relationship between the independent variable x and the dependent variable y, or a negative relationship. A negative relationship means y decreases as x increases; a positive relationship means y increases as x increases. The line that is formed is called the regression line. Once the line is fitted, the values it estimates can be compared with the actual points to see how good the fit is, and the line can be extended to predict y for new values of x.
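A minimal sketch of fitting a regression line in Python, using the standard least-squares formulas; the scattered points here are made up and lie roughly on a positive-relationship line:

```python
def fit_line(xs, ys):
    """Fit the least-squares regression line through scattered points."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: how y co-varies with x, scaled by the spread of x.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points roughly on y = 2x + 1: a positive relationship (y rises with x).
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]
slope, intercept = fit_line(xs, ys)

# The fitted line can now predict y for an x it has never seen.
predicted = slope * 6 + intercept
print(slope, intercept, predicted)
```

The fitted slope comes out close to 2 and the intercept close to 1, recovering the relationship hidden in the noisy points.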

Logistic regression is also classified as supervised learning. It’s used to predict the likelihood of an event occurring based on previous data. In terms of x and y, y can be either 0 or 1. If the event does not happen, y is given the value of 0; if it does happen, y is given the value of 1. This is known as binomial logistic regression. If y can take on multiple values, it’s called multinomial logistic regression. Unlike linear regression, which tries to predict data by fitting a straight line, logistic regression models the relationship between the variables as a probability, coming up with predictions based on the odds it discovers.
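A sketch of the prediction side of logistic regression in Python. The scenario (probability of passing given hours studied) is hypothetical, and the weight and bias are chosen by hand; in practice they would be learned from previous data:

```python
import math

def predict_probability(x, weight, bias):
    """Logistic regression squashes a weighted score into a probability."""
    # The sigmoid maps any score onto the 0-to-1 range, so the output can
    # be read as the likelihood that the event happens (y = 1).
    return 1 / (1 + math.exp(-(weight * x + bias)))

# Hypothetical: does a student pass (y = 1) given hours studied (x)?
weight, bias = 1.2, -4.0
few_hours = predict_probability(1, weight, bias)
many_hours = predict_probability(6, weight, bias)
print(few_hours, many_hours)  # low probability vs. high probability
```

The output is never a straight-line value like linear regression’s; it’s always a probability between 0 and 1, which is what makes it a fit for yes/no events.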

Human Biases

A final note worth mentioning is that the computer is taught by what the human programs and feeds it. There will always be some sense of bias depending on how the human valued the importance of one thing over another. In the earlier example of the fish, if we only fed the program thousands of pictures of clownfish, it would be severely biased towards clownfish as opposed to other types of fish.

A lot of this stuff is easier to grasp once you watch some videos and see it in action. But I hope this was helpful on a conceptual level to understand a bit about AI, Machine Learning, Deep Learning, and a little of the ‘how’ it all works at a basic level.

Happy coding!
