Friday 26 May 2023

What is machine learning?

The quality that most sets AI apart from other areas of computer science is its ability to automate tasks through machine learning, which lets computers learn from experience rather than being explicitly programmed for each task. This capability is what many people refer to as AI, but machine learning is actually a subset of artificial intelligence.

Machine learning involves training a system on large amounts of data so that it can learn from its mistakes and recognize patterns, allowing it to make accurate predictions and decisions even on data it has never seen before.

Examples of machine learning include image and speech recognition, fraud detection, and more. One specific example is the image recognition that runs when users upload a photo to Facebook. The social network analyzes the image and recognizes faces, which leads to recommendations to tag different friends. With time and practice, the system hones this skill and learns to make more accurate recommendations.

As mentioned above, machine learning is a subset of AI and is generally split into a few main categories: supervised, unsupervised, and reinforcement learning.

Supervised learning

This is a common technique for teaching AI systems, using many labeled examples that have been categorized by people. These machine-learning systems are fed huge amounts of data that has been annotated to highlight the features of interest -- you're essentially teaching by example.

If you wanted to train a machine-learning model to recognize and differentiate images of circles and squares, you'd start by gathering a large dataset of images of circles and squares in different contexts -- a drawing of a planet for a circle, or a table for a square, for example -- complete with labels for what each shape is.

The algorithm would then learn from this labeled collection of images to distinguish the shapes and their characteristics, such as circles having no corners and squares having four equal sides. After it's trained on the dataset, the system can look at a new image and determine which shape it contains.
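
To make the idea concrete, here is a minimal sketch of supervised learning in Python using scikit-learn. The feature values (a corner count and an edge-length variance) and the labels are invented for illustration; a real image classifier would learn from pixel data rather than hand-picked numbers.

# Toy supervised learning: fit a classifier on human-labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per image: [corner_count, edge_length_variance]
X = [[0, 0.00], [4, 0.10], [0, 0.02], [4, 0.00], [0, 0.05], [4, 0.20]]
y = ["circle", "square", "circle", "square", "circle", "square"]  # human-provided labels

model = DecisionTreeClassifier()
model.fit(X, y)                    # learn from the labeled examples

print(model.predict([[4, 0.05]]))  # -> ['square'] for a new, unseen example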

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorize that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data with similarities that it can group, for example, grouping customers together based on shopping behavior to target them with personalized marketing campaigns. 
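
As a rough illustration, the following sketch (assuming scikit-learn is installed) clusters made-up fruit weights without any labels; the algorithm discovers the two groups on its own.

# Toy unsupervised learning: k-means finds groups without labels.
from sklearn.cluster import KMeans

weights = [[120], [130], [125], [300], [310], [305]]  # fruit weights in grams, no labels

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(weights)   # the algorithm groups similar weights itself

print(groups)  # e.g. [0 0 0 1 1 1] -- lighter fruits vs heavier fruits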

Reinforcement learning

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

Consider training a system to play a video game, where it receives a positive reward when its score goes up and a negative reward when its score is low. The system learns to analyze the game and make moves, guided solely by the rewards it receives, eventually reaching the point where it can play on its own and earn a high score without human intervention.
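
The sketch below captures that trial-and-error loop in plain Python, without a real game. A simple epsilon-greedy agent tries two possible moves, gets a noisy reward for each, and gradually learns which move pays more; the reward values are invented for illustration.

# Toy reinforcement learning: learn the better move by trial and error.
import random

rewards = {"move_a": 1.0, "move_b": 5.0}    # hidden payoffs the agent must discover
estimates = {"move_a": 0.0, "move_b": 0.0}  # the agent's running reward estimates
counts = {"move_a": 0, "move_b": 0}
epsilon = 0.1                               # how often to try a random move (explore)

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(rewards))       # explore
    else:
        action = max(estimates, key=estimates.get)  # exploit the best estimate so far
    reward = rewards[action] + random.gauss(0, 1)   # noisy feedback from the "game"
    counts[action] += 1
    # nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the agent has learned that move_b earns the higher reward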

Reinforcement learning is also used in research, where it can help teach autonomous robots about the optimal way to behave in real-world environments.

Large language models

One of the most renowned types of AI right now is the large language model (LLM). These models use unsupervised machine learning and are trained on massive amounts of text -- articles, books, websites, and more -- to learn how human language works.

During training, LLMs process billions of words and phrases to learn the patterns and relationships between them, which enables the models to generate human-like answers to prompts.

The most popular LLM is GPT-3.5, on which ChatGPT is based, and the largest is GPT-4. Bard uses LaMDA, an LLM developed by Google, which is the second-largest LLM.
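
The proprietary models above can't be run at home, but the basic idea of prompting a pretrained language model can be sketched with the open-source Hugging Face transformers library and the small GPT-2 model (assumptions made for this example, not mentioned in the article above):

# Prompt a small pretrained language model and let it continue the text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Machine learning is", max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])  # the model continues the prompt token by token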


Deep learning

Part of the machine-learning family, deep learning involves training artificial neural networks with three or more layers to perform different tasks. These networks are expanded into sprawling architectures with many deep layers and are trained using massive amounts of data.

Deep-learning models typically have more than three layers and can have hundreds, and they can use supervised learning, unsupervised learning, or a combination of both during training.

Because deep-learning technology can learn to recognize complex patterns in data, it is often used in natural language processing (NLP), speech recognition, and image recognition.
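
For a sense of what "layers" means in code, here is a minimal sketch (assuming PyTorch is installed) of a small network with three hidden layers and a single training step on made-up data:

# Toy deep learning: a small multi-layer network and one training step.
import torch
import torch.nn as nn

model = nn.Sequential(          # three hidden layers make this a "deep" network
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),           # two output classes
)

x = torch.randn(8, 10)                     # a batch of 8 made-up examples
y = torch.randint(0, 2, (8,))              # made-up labels
loss = nn.CrossEntropyLoss()(model(x), y)  # measure how wrong the predictions are

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss.backward()                            # compute gradients layer by layer
optimizer.step()                           # adjust the weights slightly
print(float(loss))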

Courtesy: zdnet.com

