Last updated on January 18, 2024
The table below includes some of the most influential models, ordered by release date (newest first).
| Model | Release | Description |
| --- | --- | --- |
| GPT-4 | March 2023 | Successor to GPT-3, built on a similar architecture but with improvements. |
| GPT-3 | May 2020 | 175 billion parameters, known for its versatility and capability. |
| Turing-NLG | February 2020 | 17 billion parameters, aimed at natural language understanding and generation. |
| GPT-2 | February 2019 | Initially withheld from full public release due to concerns over potential misuse. |
| Transformer-XL | January 2019 | Extended the Transformer model to handle longer sequences of text. |
| BERT | October 2018 | Designed to understand the context of words in search queries. |
| GPT-1 | June 2018 | First Generative Pre-trained Transformer, with 117 million parameters. |
| ELMo | February 2018 | Deep contextualized word representations, allowing for rich word meanings. |
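Several of the models above are openly available and can be tried in a few lines. The sketch below is a minimal illustration, assuming the Hugging Face `transformers` library is installed and using its hosted `gpt2` and `bert-base-uncased` checkpoints (details not specified on this page):

```python
# Minimal sketch: trying two models from the table via Hugging Face transformers.
# Assumes: pip install transformers torch
from transformers import pipeline

# GPT-2: autoregressive text generation (the model initially withheld in 2019).
generator = pipeline("text-generation", model="gpt2")
result = generator("The Transformer architecture", max_new_tokens=20)
print(result[0]["generated_text"])

# BERT: masked-language modeling — predict a blanked-out token in context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("Language models learn the [MASK] of words."):
    print(candidate["token_str"], round(candidate["score"], 3))
```

The contrast in the two pipelines mirrors the table: GPT-style models generate text left to right, while BERT-style models read bidirectional context to fill in or understand words.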