What is Machine Learning? Explained With Dogs
Machine learning is a prolific buzzword that has only grown in popularity, especially over the past decade. Yet for such a widely touted topic, few have set out to break it down into plain language. In this post, we will be doing just that. So, what exactly is machine learning?
Is machine learning worth it?
Having worked in technology development for the past decade, we regularly come across people who associate machine learning with superior outcomes.
But is this really true?
Yes and no… Machine learning is not as universally useful as many would have you believe, and it is certainly not suited to every application. However, it can be immensely helpful for some specific use cases.
But what are the use cases of machine learning, and how does it work? We will explore this and more in the following sections.
What you will get from this article
- What is machine learning?
- How does it work?
- Real-life use case example
What is machine learning?
At a high level, machine learning is nothing more than the science of making computers learn with minimal human input: setting them up so they can learn from experience and improve over time without being explicitly programmed to do so.
Machine learning, however, is not as complex as the media would have you believe, nor is it even especially new! Machine learning has been around since the 1950s.
How can this be?
Because machine learning is nothing more than a basic algorithm running through a loop, constantly returning to a fork in its logic of “yes” and “no”. Either “yes”, the guess was right, so strengthen the association with the input; or “no”, the guess was wrong, so do not associate that answer with the input.
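To make that loop concrete, here is a toy Python sketch. The features, breeds, and the dictionary standing in for a “model” are all invented for illustration; real machine learning models are far more sophisticated, but the guess-then-check rhythm is the same.

```python
# A toy sketch of the learn-by-feedback loop described above.
# The "model" is just a dictionary mapping input features to the
# answer that feedback confirmed was correct.
examples = [("spots+short fur", "Dalmatian"),
            ("tiny+big ears", "Chihuahua"),
            ("spots+short fur", "Dalmatian")]  # repetition reinforces

model = {}  # learned associations: input features -> breed

for features, correct_answer in examples:
    guess = model.get(features, "unknown")  # best guess so far
    if guess == correct_answer:
        pass  # "yes": the guess was right, keep the association
    else:
        model[features] = correct_answer  # "no": update the association

print(model["spots+short fur"])  # -> Dalmatian
```

Each pass through the loop either confirms what the model already “knows” or corrects it, which is the fork in logic described above.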
How does it know if it is right or wrong?
In crude terms, we need something to provide feedback to the algorithm. This segues nicely into our next section.
How does machine learning work?
Before we dive in, I should clarify that the following is a simplified interpretation of just one method by which a machine learning algorithm may be trained.
Let's use an analogy
Let’s suppose you are crazy about dogs and are fervently studying for an exam on dog breeds. For this exam, you will be presented with a picture of a dog (such as SAPHI’s office mascot seen below) and then asked what the breed is.
Now that we know what the exam will involve and the skills we require to ace it, we can set about making a plan to prepare ourselves. In this case, our plan is pretty straightforward: we expose ourselves to hundreds of pictures of dogs and have someone provide basic feedback on our answers. Slowly but surely, we learn to recognise not just the differences between vastly different breeds such as a Great Dane and a Chihuahua, but also the subtler differences between breeds that look alike.
Sounds too easy, right?
Well, it is a little more complex. Whilst we must learn to recognise differences between breeds, we must also learn to recognise the variations within each breed.
What do I mean by this?
Let me answer this question with another. What makes a Jack Russell a Jack Russell? The variation within this one breed can be significant and even resemble the characteristics of another, leading to confusion.
For instance, some Jack Russells have longer fur, some shorter; some have long legs, some short; many spots or few spots; floppy ears or straight ears. But when it comes down to genetics, they are all the same breed.
Through repetition and consistent feedback, we slowly learn to spot the variations, so we don’t confuse them with similar-looking breeds in the future.
What does this have to do with machine learning?
Bear with me; it will all tie together. Let’s say you are studying for the exam with your friend, Bec. Bec collects 20 different pictures of various breeds and begins quizzing you. These pictures are your ‘inputs’, and the answers you give are your ‘outputs’ – just like in the world of machine learning.
You look at image one (input one) and give your three best guesses (outputs) in order of probability.
You are 60% confident the breed is a Jack Russell. However, a nagging suspicion makes you think it could be a Fox Terrier or possibly even a Scottish Terrier.
1. Jack Russell
2. Fox Terrier
3. Scottish Terrier
After you guess, Bec reveals the answer is, in fact, number 2, the Fox Terrier. With this feedback on hand, you store that image in memory, so when it reappears in the future, you will have a higher probability of picking the correct breed.
Through a process of repeated trial and error, you eventually build up an ever-improving, segmented mental map of each breed and its variations. In other words, you become a super-efficient dog-breed detector.
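If you wanted to sketch that “mental map” in code, it might look something like the following. The visual cues, breeds, and quiz answers are all invented for illustration; the point is simply that each piece of feedback strengthens a link between what you see and the breed it turned out to be.

```python
from collections import Counter, defaultdict

# A toy "mental map": for each visual cue, count how often feedback
# confirmed each breed as the right answer. (Hypothetical cues/breeds.)
mental_map = defaultdict(Counter)

# Bec's quiz: (what you see, the breed she reveals afterwards)
quiz = [("small+wiry coat", "Fox Terrier"),
        ("small+wiry coat", "Jack Russell"),
        ("small+wiry coat", "Fox Terrier"),
        ("small+smooth coat", "Jack Russell")]

for cue, revealed_breed in quiz:
    mental_map[cue][revealed_breed] += 1  # feedback strengthens the link

# Your three best guesses, ranked by how often feedback confirmed each breed
top_three = [breed for breed, _ in mental_map["small+wiry coat"].most_common(3)]
print(top_three)
```

Notice that the ranking falls straight out of the counts: the breed that feedback has confirmed most often becomes the most confident guess, just as in the Jack Russell versus Fox Terrier example above.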
So, is this how machine learning works?
In a roundabout way, yes. Crudely speaking, machine learning learns much as humans do: through repetition and feedback.
Right now, machine learning is being deployed by companies across industries to solve some large-scale technical problems. One such company can be found right here in Australia.
Tiliter is a Sydney-based company that has raised millions in funding for its unique application of machine learning. The company has centred its business on building machine learning algorithms that automatically identify barcode-less products at supermarkets (think fresh fruit and veg).
Tiliter’s application of machine learning follows a similar path to the example above, the difference being that you are replaced by an algorithm and Bec is replaced by customers.
Let's have a look at how such an algorithm could be trained
Suppose a customer at Woolworths wants to buy a Pink Lady apple. As they reach the counter, they find what looks like a barcode scanner, except this particular scanner uses a camera and a specialised machine learning algorithm in training.
The scanner views the apple (input), spits out three possible options (outputs) as to the variety, and ranks them in order of probability. The algorithm’s first option is a Granny Smith, followed by a Royal Gala and, finally, a Pink Lady.
The algorithm outputs these options as images on the screen above the scanner; the customer then selects the correct option, which supplies the algorithm with the feedback it needs. This feedback allows the algorithm to refine its ability to recognise pink lady apples in the future.
Over time and through many exposures, the algorithm develops its own mental map, categorising various produce items and their variations. Eventually, it becomes so adept at recognising produce that it no longer requires any human input.
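As a rough illustration of that ranking-and-feedback idea (the variety names and counts below are invented, and this is not Tiliter’s actual system), the scanner’s loop might be sketched like this:

```python
from collections import Counter

# How often customers have confirmed each variety so far (invented counts)
confirmed = Counter({"Granny Smith": 4, "Royal Gala": 3, "Pink Lady": 1})

def rank_options(counts, n=3):
    """Rank varieties by how often customers confirmed them."""
    return [variety for variety, _ in counts.most_common(n)]

# The scanner shows its three ranked guesses for the apple on the counter
print(rank_options(confirmed))  # Granny Smith first, Pink Lady last

# Customers keep tapping "Pink Lady": feedback the algorithm stores
for _ in range(5):
    confirmed["Pink Lady"] += 1

# After enough corrections, Pink Lady rises to the top of the ranking
print(rank_options(confirmed))
```

The customer’s tap plays exactly the role Bec played in the study analogy: it is the feedback signal that reshapes which guess comes out on top next time.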
Whilst machine learning can add significant value to some activities, it is by no means a panacea. Forced into applications outside its range, it can waste enormous amounts of time, money, and energy. At the end of the day, it is best suited to specific applications such as the Tiliter example.
Thank you for reading!
We hope this article gave you some valuable insight into machine learning and how it really works.
To find out more about the electronic product development process, reach out to our experienced engineers at firstname.lastname@example.org for a quick chat.
Follow our page