
AI glossary

David Mirón 04/07/2023

    Artificial intelligence is a field that has revolutionized the way people interact with technology and approach problems. Over time, this has generated a wide range of new terms and concepts.

    Throughout this article we will make sure you understand the terms used both in this technology and in the media. Note also that many of these terms evolve rapidly as the technology matures, which means the accepted terminology can change within a span of six months.

    As we’ve discussed, AI is an ever-changing space, so it’s essential that we put conversation and understanding first on this journey, so that our workers, customers, and readers understand the benefits, challenges, and potential of this moment.

    OpenAI: OpenAI is an AI research organization focused on developing an Artificial General Intelligence (AGI) that benefits everyone.

    OpenAI API: The OpenAI API is a service provided by OpenAI that allows developers to access and use its AI models, such as ChatGPT, for various applications.

    GPT (Generative Pre-trained Transformer): GPT refers to a series of AI models developed by OpenAI. They were designed to perform natural language processing tasks and are capable of generating coherent and contextually relevant text.

    Embedding: It is a dense vector representation of a token, word, or other item, learned so that items with similar meanings end up close together in vector space. Embeddings are the form in which text enters a neural network.
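
A minimal sketch (toy random vectors, not a learned model) of what an embedding lookup does:

```python
import numpy as np

# Toy embedding table: 5-token vocabulary, 3-dimensional vectors.
# (Random values; a real model learns these during training.)
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(5, 3))

def embed(token_ids):
    """Map a sequence of token ids to their vectors."""
    return embedding_table[token_ids]

vectors = embed([0, 3, 3, 1])
print(vectors.shape)  # (4, 3): one 3-d vector per input token
```

Repeated token ids map to the same vector, which is what lets the model treat repeated words consistently.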

    Prompt: It is the input text given to a language model to generate a response or complete a specific task.

    Zero-shot Learning: It is a machine learning approach in which a model can make predictions, or complete tasks, without being explicitly trained on the data for that task.

    Few-shot Learning: It is a machine learning approach in which a model can quickly adapt to new tasks, learning from a small number of labeled examples.
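
A few-shot prompt can be assembled with plain string formatting; the sentiment task and labels below are illustrative choices, not part of any particular API:

```python
# Build a few-shot prompt by prepending labeled examples to the query.
# (Sentiment labeling here is an illustrative task; any format works.)
examples = [
    ("I loved this movie", "positive"),
    ("Terrible, a waste of time", "negative"),
]

def few_shot_prompt(query):
    shots = "\n".join(f"Review: {text}\nSentiment: {label}"
                      for text, label in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

prompt = few_shot_prompt("Surprisingly good")
print(prompt)
```

With zero-shot learning, the `examples` list would simply be empty and the model would rely on its pretraining alone.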

    Reinforcement Learning from Human Feedback (RLHF): This method combines reinforcement learning with human feedback, allowing AI models to learn and adapt to human preferences and values.

    Deep Learning: Refers to a subfield of machine learning that uses artificial neural networks to model complex patterns, and make predictions, or decisions based on input data.

    Unsupervised learning: Unsupervised learning is an approach to machine learning in which a model learns patterns and structures within the input data, without explicit output labels. (Usually after clustering or dimensionality reduction.)
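
As a sketch of unsupervised learning, a tiny k-means run discovers the two groups in the data below without ever seeing a label (toy 1-D data, k=2):

```python
import numpy as np

# Two well-separated groups of points; no labels are provided.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(5.0, 0.1, 20)])

centroids = np.array([data.min(), data.max()])  # simple initialization
for _ in range(10):
    # Assign each point to its nearest centroid, then recompute centers.
    assign = np.abs(data[:, None] - centroids[None, :]).argmin(axis=1)
    centroids = np.array([data[assign == k].mean() for k in range(2)])

print(centroids)  # two cluster centers, near 0 and near 5
```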

    Supervised Learning: It occurs when a model is trained on a dataset containing input-output pairs. The trained model can then predict outputs for new inputs.
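
A minimal supervised example: fitting a line to labeled input-output pairs with least squares (toy data, with the assumed relationship y = 2x + 1):

```python
import numpy as np

# Supervised learning in miniature: fit y ≈ w*x + b from labeled pairs.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])           # labels follow y = 2x + 1

A = np.hstack([X, np.ones((len(X), 1))])     # add a bias column
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

prediction = w * 4.0 + b                     # predict for an unseen input
print(prediction)  # ≈ 9.0
```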

    BERT (Bidirectional Encoder Representations from Transformers): BERT is a pretrained transformer-based model, developed by Google, for natural language understanding tasks. It can be fine-tuned for specific applications.


    ChatGPT: It is a conversational AI model developed by OpenAI. It is based on the GPT architecture and has been designed to generate human-like responses in text-based conversations.

    Stable Diffusion: Stable Diffusion is a text-to-image generative model based on latent diffusion. It produces images from text prompts by iteratively denoising random noise.

    Prompt Engineering: It refers to the process of designing effective prompts to elicit the desired responses from language models, improving their utility and reliability.

    InstructGPT: It is an AI model developed by OpenAI, trained to follow instructions given in prompts. That allows it to generate more precise and task-specific answers.

    Artificial General Intelligence (AGI): AGI refers to a hypothetical AI capable of performing any intellectual task that a human being can. Such an AI would demonstrate human-like cognitive abilities across many domains.

    LaMDA: It is Google's conversational AI model, designed to participate in open-domain conversations and capable of understanding and generating responses on a wide variety of topics.

    Attention Mechanism: Attention mechanisms in neural networks allow models to weigh the importance of different input elements against each other, improving their ability to process context.
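
A sketch of the standard scaled dot-product attention computation in NumPy (random toy matrices, not a trained model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of V's rows, with weights
    given by how strongly each query matches each key."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one context-mixed vector per query
```

The attention weights form a probability distribution over positions, which is exactly the "weighing the importance of different input elements" described above.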

    Diffusion Models: These are generative models that learn to produce data, such as images, by gradually reversing a process that adds noise: generation starts from pure noise and denoises it step by step.

    LLM (Large Language Models): They are AI models trained on large amounts of text data, capable of understanding and generating human-like text.

    Pretraining: It is the initial phase of training a deep learning model on a large dataset (often without supervision), before the model is adapted to specific tasks.

    Alignment Problem: The alignment problem refers to the challenge of designing AI systems that understand and act on human intentions, values, and goals, rather than being optimized for unintended goals.

    Natural Language Processing (NLP): NLP is a field of AI that allows computers to understand, interpret, and generate human language.

    Artificial Neural Network: An artificial neural network is a computational model inspired by the structure and function of the human brain. It consists of interconnected nodes, called neurons, that process and transmit information.

    Backpropagation: It is a widely used optimization algorithm in neural networks that minimizes the error between the predicted outputs and the actual outputs by adjusting the model weights.

    Fine tuning: It is the process of adapting a previously trained model for a specific task. Labeled data that is relevant to that task is used, thus refining its performance.

    Transformer: It is a deep learning architecture for sequence-to-sequence tasks. It is widely known for its self-attention mechanism, which captures long-range dependencies in data.

    Token: A token is a unit of text, such as a word or subword, that serves as input to a language model.
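
A toy tokenizer sketch; real models use learned subword vocabularies (such as BPE), so this whitespace split is only illustrative:

```python
# Toy tokenizer: split on whitespace, then map each piece to an id.
# Real models use learned subword vocabularies, so a rare word may
# become several tokens rather than one.
vocab = {}

def tokenize(text):
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)   # assign the next free id
        ids.append(vocab[word])
    return ids

ids = tokenize("the cat sat on the mat")
print(ids)  # [0, 1, 2, 3, 0, 4]: the repeated word "the" reuses id 0
```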

    Context Window: It is the maximum number of tokens that a language model can process in a single step, which determines how much context it can capture from the input.
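
A sketch of what hitting the context window means in practice; the window size and helper below are illustrative, not any specific model's values:

```python
# A context window of N tokens means only the last N tokens fit;
# older tokens must be dropped or summarized.
CONTEXT_WINDOW = 8   # illustrative size; real models use thousands

def truncate_to_window(token_ids, window=CONTEXT_WINDOW):
    """Keep only the most recent tokens that fit in the window."""
    return token_ids[-window:]

history = list(range(20))          # 20 tokens of conversation so far
visible = truncate_to_window(history)
print(visible)  # [12, 13, 14, 15, 16, 17, 18, 19]
```

Everything before the window (tokens 0 through 11 here) is simply invisible to the model, which is why long conversations can "forget" their beginning.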

     

    David Mirón

    Passionate about technology since I was 11. I still feel the same excitement today with Solidity as I did 26 years ago running Turbo C.

