Generative AI Glossary

draft, under development

When You're Just Starting, for Personal Use

Top 5 Things To Start With

You'll Hear These Words, but You Don't Have to Know What They Mean Yet

Boneyard / Junkpile / ALL THE THINGS

We're using this section for drafting purposes.

  1. Generative AI: Artificial intelligence techniques that focus on generating new data instances similar to a given dataset.

  2. Machine Learning (ML): A subset of AI that enables machines to improve at tasks with experience.

  3. Deep Learning: A subset of machine learning that uses neural networks with many layers to learn data representations.

  4. Neural Networks: Computing systems vaguely inspired by the biological neural networks that constitute animal brains.

  5. Convolutional Neural Networks (CNNs): Deep learning architectures that take in an input image, assign importance (via learned convolutional filters) to various aspects or objects in the image, and differentiate one from another.

  6. Recurrent Neural Networks (RNNs): A class of neural networks that are effective for modeling sequence data such as time series or natural language.

  7. Generative Adversarial Networks (GANs): A class of machine learning frameworks in which two neural networks, a generator and a discriminator, contest with each other: the generator produces candidate data while the discriminator tries to distinguish it from real data (a minimal training step is sketched after this list).

  8. Transformer Models: A type of model introduced in the paper "Attention Is All You Need" that processes sequences using self-attention; it is the basis for many language models such as GPT (Generative Pre-trained Transformer).

  9. Autoencoders: A type of neural network used to learn efficient codings of unlabeled data; they work by compressing the input into a latent-space representation and then reconstructing the output from this representation (a minimal example follows this list).

  10. Variational Autoencoders (VAEs): A type of autoencoder that provides a probabilistic manner for describing observations in latent space.

  11. Latent Space: The representation of compressed data in a lower-dimensional space, often used in generative models to generate new instances.

  12. Natural Language Processing (NLP): The branch of AI focused on giving computers the ability to understand text and spoken words in a manner similar to human beings.

  13. Text-to-Image Generation: The process of generating images, often photorealistic ones, from textual descriptions using generative AI models.

  14. Style Transfer: The technique of applying the style of one image to the content of another image using convolutional neural networks.

  15. Data Augmentation: Techniques that increase the amount of training data by adding slightly modified copies of existing data or synthetic data newly created from it (see the NumPy sketch after this list).

  16. Overfitting: A modeling error in machine learning in which a function fits a limited set of data points too closely, harming the model's ability to generalize to new data (demonstrated numerically after this list).

  17. Underfitting: Occurs when a model is too simple to capture the underlying structure of the data.

  18. Transfer Learning: The practice of reusing a model pre-trained on one task as the starting point for a new, related task (sketched, together with fine-tuning, after this list).

  19. Fine-tuning: The process of adjusting a model that has been pre-trained on one task so that it performs better on a slightly different task.

  20. Tokenization: The process of converting text into smaller units, such as words or subwords, that can be mapped to numeric IDs; widely used in natural language processing (see the final sketch below).
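
The entries above that reference sketches are illustrated below. First, the adversarial setup behind GANs (entry 7). This is a minimal sketch, not a production recipe: it assumes PyTorch is installed, uses tiny fully connected networks, and invents a toy "real" dataset (a shifted Gaussian) purely for illustration.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to candidate (fake) samples.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
# Discriminator: scores samples as real (close to 1) or fake (close to 0).
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

real = torch.randn(32, 2) + 3.0   # toy "real" data: a shifted Gaussian
noise = torch.randn(32, 16)

# Discriminator step: learn to separate real samples from generated ones.
opt_d.zero_grad()
d_loss = (bce(D(real), torch.ones(32, 1))
          + bce(D(G(noise).detach()), torch.zeros(32, 1)))
d_loss.backward()
opt_d.step()

# Generator step: learn to produce samples the discriminator labels as real.
opt_g.zero_grad()
g_loss = bce(D(G(noise)), torch.ones(32, 1))
g_loss.backward()
opt_g.step()
```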
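Entries 9 and 11 (autoencoders and latent space) become concrete in a few lines of PyTorch. This sketch assumes flattened 28x28 inputs and an arbitrary 32-dimensional latent space; both sizes are illustrative choices, not requirements.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny autoencoder: compress 784-dim inputs (e.g. flattened 28x28 images)
# into a 32-dim latent space, then reconstruct the input from that code.
class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),   # output = the latent-space representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)          # dummy batch standing in for real data
loss = F.mse_loss(model(x), x)   # reconstruction error drives training
loss.backward()
print(model.encoder(x).shape)    # torch.Size([16, 32]): points in latent space
```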
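Entry 15 (data augmentation) often amounts to simple array transformations. The sketch below uses NumPy on a random array standing in for a real 32x32 RGB image; real pipelines would typically use a library such as torchvision, but the idea is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))   # stand-in for a real 32x32 RGB image

# Each augmentation yields a slightly modified copy of the original.
flipped = image[:, ::-1, :]                                          # horizontal flip
noisy = np.clip(image + rng.normal(0, 0.05, image.shape), 0.0, 1.0)  # additive noise
shifted = np.pad(image, ((2, 2), (2, 2), (0, 0)))[4:36, 0:32, :]     # translate via pad-and-crop

augmented_batch = np.stack([image, flipped, noisy, shifted])
print(augmented_batch.shape)   # (4, 32, 32, 3): one original plus three modified copies
```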
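Entries 16 and 17 (overfitting and underfitting) are easy to demonstrate numerically. The classic example below fits polynomials of increasing degree to ten noisy samples of a sine wave; the specific degrees and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)   # ten noisy training samples
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)
x_test = np.linspace(0, 1, 100)   # the clean underlying curve
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# degree 1 is too simple to capture the curve at all (underfitting);
# degree 9 can pass through every noisy training point, driving train MSE
# toward zero while typically scoring worse than degree 3 on the test set
# (overfitting).
```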
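Entries 18 and 19 (transfer learning and fine-tuning) commonly appear together. The sketch below assumes torchvision (0.13 or later, for the weights API) and reuses its pre-trained ResNet-18; the 10-class head and the choice to freeze every other layer are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from a network pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor so its weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Fine-tuning: replace the final layer with a fresh head for a new
# 10-class task; only this head will be trained.
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```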
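Finally, entry 20 (tokenization) needs no libraries for a first intuition. The regular expression and toy vocabulary below are deliberate simplifications; production systems use subword tokenizers such as BPE, which this sketch does not implement.

```python
import re

def simple_tokenize(text: str) -> list[str]:
    # Lowercase, then pull out runs of letters/digits and single punctuation marks.
    return re.findall(r"[a-z0-9]+|[^\sa-z0-9]", text.lower())

tokens = simple_tokenize("Tokenization converts text into smaller units!")
print(tokens)
# ['tokenization', 'converts', 'text', 'into', 'smaller', 'units', '!']

# Models consume integer IDs, so a vocabulary maps each token to one.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[tok] for tok in tokens]
print(ids)   # [5, 1, 4, 2, 3, 6, 0]
```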