Generative AI: Artificial intelligence techniques that focus on generating new data instances similar to a given dataset.
Machine Learning (ML): A subset of AI that enables machines to improve at tasks with experience.
Deep Learning: A subset of machine learning that uses neural networks with many layers to learn data representations.
Neural Networks: Computing systems vaguely inspired by the biological neural networks that constitute animal brains.
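A minimal sketch of such a network in PyTorch; the layer sizes, ReLU activations, and ten-class output are illustrative assumptions, not a canonical design. Stacking several layers like this is what makes a network "deep" in the deep learning sense:

    import torch
    import torch.nn as nn

    # A small multilayer perceptron (feedforward neural network).
    model = nn.Sequential(
        nn.Linear(784, 128),  # input layer: e.g. a flattened 28x28 image
        nn.ReLU(),
        nn.Linear(128, 64),   # hidden layer
        nn.ReLU(),
        nn.Linear(64, 10),    # output layer: e.g. 10 class scores
    )

    x = torch.randn(1, 784)  # one random input vector
    print(model(x).shape)    # torch.Size([1, 10])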
Convolutional Neural Networks (CNNs): Deep learning models designed for image data: they take in an input image, learn to assign importance to various aspects or objects within it, and use those learned features to distinguish one image from another.
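A toy CNN sketch in PyTorch, assuming 32x32 RGB inputs and ten output classes purely for illustration:

    import torch
    import torch.nn as nn

    # Convolution layers learn local image features, pooling downsamples,
    # and a final linear layer maps the features to class scores.
    cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 input channels (RGB)
        nn.ReLU(),
        nn.MaxPool2d(2),                             # 32x32 -> 16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),                             # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),
    )

    img = torch.randn(1, 3, 32, 32)  # a batch of one 32x32 RGB image
    print(cnn(img).shape)            # torch.Size([1, 10])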
Recurrent Neural Networks (RNNs): A class of neural networks that are effective for modeling sequence data such as time series or natural language.
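A minimal PyTorch sketch; the sequence length and feature sizes are arbitrary:

    import torch
    import torch.nn as nn

    # An RNN reads a sequence one step at a time, carrying a hidden
    # state that summarizes everything seen so far.
    rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

    seq = torch.randn(1, 20, 8)      # batch of 1, 20 time steps, 8 features
    outputs, h_n = rnn(seq)          # outputs at every step, final hidden state
    print(outputs.shape, h_n.shape)  # torch.Size([1, 20, 16]) torch.Size([1, 1, 16])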
Generative Adversarial Networks (GANs): A class of machine learning frameworks in which two neural networks, a generator and a discriminator, contest with each other: the generator produces synthetic data while the discriminator tries to tell it apart from real data.
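A skeletal sketch of the two players in PyTorch, with layer sizes chosen only for illustration; the training loop that alternates between them is omitted:

    import torch
    import torch.nn as nn

    # Generator: maps random noise to a fake sample.
    G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
    # Discriminator: scores how "real" a sample looks.
    D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

    z = torch.randn(1, 64)  # latent noise vector
    fake = G(z)             # generator produces a sample
    score = D(fake)         # discriminator judges it
    # Training alternates: D learns to tell real from fake,
    # while G learns to fool D.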
Transformer Models: A model architecture introduced in the paper "Attention Is All You Need" that processes sequences using self-attention rather than recurrence; it is the basis for many language models, such as GPT (Generative Pre-trained Transformer).
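A sketch of the scaled dot-product attention at the heart of the architecture; the tensor shapes are illustrative:

    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        # Each position attends to all others, weighted by
        # query-key similarity.
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5
        weights = F.softmax(scores, dim=-1)
        return weights @ v

    q = k = v = torch.randn(1, 5, 32)  # 5 tokens, 32-dim embeddings
    print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 5, 32])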
Autoencoders: A type of neural network used to learn efficient codings of unlabeled data; they work by compressing the input into a latent-space representation and then reconstructing the output from this representation.
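A minimal sketch, assuming 784-dimensional inputs (e.g. flattened 28x28 images) and a 16-dimensional latent space:

    import torch
    import torch.nn as nn

    # Encoder compresses the input into a small latent code;
    # decoder tries to reconstruct the input from that code.
    encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 16))
    decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))

    x = torch.randn(1, 784)
    z = encoder(x)                           # 16-dim latent representation
    x_hat = decoder(z)                       # reconstruction
    loss = nn.functional.mse_loss(x_hat, x)  # reconstruction error to minimize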
Variational Autoencoders (VAEs): A type of autoencoder that describes observations in latent space probabilistically, encoding each input as a distribution (typically a mean and variance) rather than a single point.
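A sketch of the reparameterization trick that makes VAE sampling differentiable; the 16-dimensional latent and the zero-valued statistics are placeholders for what an encoder would actually produce:

    import torch

    # The encoder outputs a mean and log-variance rather than a single
    # point; sampling uses the reparameterization trick so that
    # gradients can flow through the random draw.
    mu = torch.zeros(1, 16)      # latent mean (from the encoder)
    logvar = torch.zeros(1, 16)  # latent log-variance (from the encoder)

    eps = torch.randn_like(mu)              # noise from a standard normal
    z = mu + torch.exp(0.5 * logvar) * eps  # sampled latent vector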
Latent Space: The representation of compressed data in a lower-dimensional space, often used in generative models to generate new instances.
Natural Language Processing (NLP): The branch of AI focused on giving computers the ability to understand text and spoken words in much the same way human beings can.
Text-to-Image Generation: The process of generating images, often photorealistic ones, from textual descriptions using generative AI models.
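A hedged sketch using Hugging Face's diffusers library; the pipeline class is real, but the model id shown is just one example, and the call downloads several gigabytes of pretrained weights:

    from diffusers import StableDiffusionPipeline

    # Load a pretrained text-to-image diffusion model and generate
    # an image from a prompt.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
    image.save("lighthouse.png")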
Style Transfer: The technique of applying the style of one image to the content of another image using convolutional neural networks.
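In this technique, style is commonly captured by Gram matrices of CNN feature maps; a sketch, with illustrative feature shapes:

    import torch

    def gram_matrix(features):
        # The Gram matrix records correlations between feature channels,
        # discarding spatial layout: a common representation of "style".
        b, c, h, w = features.shape
        f = features.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    # Style transfer optimizes an image so its Gram matrices match the
    # style image while its raw features match the content image.
    feats = torch.randn(1, 64, 32, 32)  # feature maps from some CNN layer
    print(gram_matrix(feats).shape)     # torch.Size([1, 64, 64])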
Data Augmentation: Techniques that increase the effective amount of training data by adding slightly modified copies of existing examples or synthetic data derived from them.
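A sketch using torchvision's transforms; the particular augmentations and parameters are arbitrary examples:

    from torchvision import transforms

    # Each pass through this pipeline yields a slightly different
    # version of the same image, effectively enlarging the training set.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(degrees=10),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])
    # Usage: augmented = augment(pil_image)  # pil_image is a PIL.Image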
Overfitting: A modeling error in machine learning in which a function fits a limited set of data points too closely, capturing noise along with signal and impairing the model's ability to generalize to new data; see the sketch after the next entry.
Underfitting: Occurs when a model is too simple to capture the underlying structure of the data, so it performs poorly even on the training set.
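A small numpy sketch contrasting the two failure modes on the same noisy data; the sine curve and polynomial degrees are arbitrary choices:

    import numpy as np

    # Ten noisy samples of a sine curve.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 10)

    # A degree-9 polynomial passes through nearly every point (training
    # error near zero) but swings wildly between them: overfitting.
    overfit = np.polyfit(x, y, deg=9)
    # A constant fit is too simple to capture the curve at all: underfitting.
    underfit = np.polyfit(x, y, deg=0)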
Transfer Learning: The practice of reusing a model pre-trained on one task as the starting point for a new, related task; see the sketch after the next entry.
Fine-tuning: The process of further training a model that was pre-trained on one task so that it performs better on a slightly different task, typically with a lower learning rate and a smaller, task-specific dataset.
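A sketch of both ideas using a torchvision ResNet-18; the five-class head and the weights identifier are illustrative, and loading the weights requires a download:

    import torch.nn as nn
    from torchvision import models

    # Transfer learning: start from a network pretrained on ImageNet.
    model = models.resnet18(weights="IMAGENET1K_V1")

    # Freeze the pretrained backbone and replace only the final layer
    # for a new 5-class task; training just this head reuses the
    # pretrained features as-is.
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 5)

    # Fine-tuning would additionally unfreeze some (or all) backbone
    # layers and train them with a small learning rate.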
Tokenization: The process of splitting text into smaller units (tokens), such as words, subwords, or characters; a standard first step in natural language processing.
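A minimal word-level tokenizer sketch; real NLP systems typically use learned subword schemes such as BPE or WordPiece instead:

    import re

    def tokenize(text):
        # Lowercase, then split into runs of word characters or
        # single punctuation marks.
        return re.findall(r"\w+|[^\w\s]", text.lower())

    print(tokenize("Tokenization splits text into units!"))
    # ['tokenization', 'splits', 'text', 'into', 'units', '!']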