
Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed for another, related task. The idea is to leverage knowledge learned in one domain and apply it to a different but related one, typically when the new task has limited labeled data available.


Here's how transfer learning generally works:


Pre-trained Model:

A model is first trained on a large, generic dataset for a specific task, such as image classification or language modeling. The purpose of this pre-training is to learn general features and patterns that are useful beyond the original task.
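As a minimal illustration of this stage, here is a toy NumPy sketch (not a real pre-training pipeline): a tiny two-layer network, consisting of a "feature extractor" W1 and a task head w2, is trained by gradient descent on a synthetic stand-in for a large, generic dataset. The task, network sizes, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a large, generic dataset (task A):
# 1000 examples with 8 features, targets from a hidden rule.
X_a = rng.normal(size=(1000, 8))
y_a = np.sin(X_a[:, 0]) + 0.5 * X_a[:, 1]

# Two-layer network: feature extractor W1, task head w2.
W1 = rng.normal(scale=0.3, size=(8, 16))
w2 = rng.normal(scale=0.3, size=16)

def features(X, W1):
    """Hidden representation learned during pre-training."""
    return np.maximum(X @ W1, 0.0)  # ReLU

lr = 0.05
for _ in range(300):
    H = features(X_a, W1)
    err = H @ w2 - y_a                          # prediction error on task A
    w2 -= lr * H.T @ err / len(X_a)             # update the task head
    dH = np.outer(err, w2) * (H > 0)            # backprop through ReLU
    W1 -= lr * X_a.T @ dH / len(X_a)            # update the feature extractor

pretrain_mse = np.mean((features(X_a, W1) @ w2 - y_a) ** 2)
```

After training, W1 holds the general-purpose features that later stages reuse; in practice this role is played by the early layers of a large pre-trained network rather than a single matrix.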


Transfer Learning:

Instead of training a model from scratch for the target task, the pre-trained model is used as a starting point. The knowledge and features learned during pre-training are transferred and then adapted to the new task.
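The "starting point" idea can be shown concretely: the new model copies the pre-trained feature extractor rather than starting from random weights, while the old task-specific head is discarded and replaced. The weights below are random stand-ins for values that pre-training would have produced; the shapes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for weights produced by pre-training on task A.
W1_pretrained = rng.normal(scale=0.3, size=(8, 16))  # feature extractor
w2_pretrained = rng.normal(scale=0.3, size=16)       # task-A head

# Transfer: initialize the new model from the pre-trained weights
# instead of from scratch.
W1 = W1_pretrained.copy()            # reuse the general features
w2 = rng.normal(scale=0.1, size=16)  # fresh head for the new task

# Only the feature extractor transfers; the task-A head is discarded.
```

This is the core of transfer learning: the expensive, data-hungry part of training (the feature extractor) is inherited, and only the task-specific part starts over.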


Fine-tuning:

The pre-trained model is further trained or fine-tuned on the new dataset specific to the target task. During fine-tuning, certain layers of the model may be frozen (kept unchanged) to preserve the learned features, while others are updated to adapt to the new task.
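The whole procedure, including freezing, can be sketched end to end in toy NumPy form (a self-contained illustration, not a real pipeline; tasks, sizes, and learning rates are invented): pre-train on a large task A, then fine-tune on a small related task B with the feature extractor frozen so that only the new head is updated.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(X, W1):
    return np.maximum(X @ W1, 0.0)  # ReLU feature extractor

# --- Pre-training on a large, generic task A (condensed) ---
X_a = rng.normal(size=(1000, 8))
y_a = np.sin(X_a[:, 0]) + 0.5 * X_a[:, 1]
W1 = rng.normal(scale=0.3, size=(8, 16))
w2 = rng.normal(scale=0.3, size=16)
for _ in range(300):
    H = features(X_a, W1)
    err = H @ w2 - y_a
    w2 -= 0.05 * H.T @ err / len(X_a)
    W1 -= 0.05 * X_a.T @ (np.outer(err, w2) * (H > 0)) / len(X_a)

# --- Fine-tuning on a small, related task B ---
# Task B depends on the same inputs in a related way; only 40 labels.
X_b = rng.normal(size=(40, 8))
y_b = np.sin(X_b[:, 0]) - 0.5 * X_b[:, 1]

W1_frozen = W1           # frozen layer: preserves the learned features
w2_new = np.zeros(16)    # fresh head, the only trainable part
for _ in range(2000):
    H = features(X_b, W1_frozen)
    err = H @ w2_new - y_b
    w2_new -= 0.02 * H.T @ err / len(X_b)   # update the head only

finetune_mse = np.mean((features(X_b, W1_frozen) @ w2_new - y_b) ** 2)
```

Because W1_frozen never changes during fine-tuning, the general features survive intact, and the small task-B dataset only has to fit the final head; in deep-learning frameworks the same effect is achieved by disabling gradients for the frozen layers.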

Learn more AI terminology

IA, AI, AGI Explained

Weight initialization

A Deep Q-Network (DQN)

Artificial General Intelligence (AGI)

Neural network optimization

Deep neural networks (DNNs)

Random Forest

Decision Tree

Virtual Reality (VR)

Voice Recognition

Quantum-Safe Cryptography

Artificial Narrow Intelligence (ANI)

A Support Vector Machine (SVM)

Deep Neural Network (DNN)

Natural language prompts

Chatbot

Fault Tolerant AI

Meta-Learning

Underfitting

XGBoost
