FT AI Glossary | Financial Times



Algorithm: A set of rules that a computer follows to complete a task. A computer takes input, for example from a dataset, runs tests or calculations on it, and produces output. Algorithms can be used in this way to identify patterns in data and make predictions.
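A minimal sketch in Python (an illustration added here, not part of the FT glossary) of what that looks like in practice: the algorithm below takes a small dataset as input, runs a calculation on it, and produces an output in the form of a prediction. The data values are invented.

def predict_next(readings):
    """Predict the next value by extending the average step between readings."""
    steps = [b - a for a, b in zip(readings, readings[1:])]
    average_step = sum(steps) / len(steps)
    return readings[-1] + average_step

monthly_sales = [100, 104, 109, 115, 120]    # input dataset
print(predict_next(monthly_sales))           # output: the predicted next value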

Algorithmic bias: Decision-making errors and unfair outcomes that can arise from the way an algorithm processes data, or from imperfections or biases in the underlying data itself. Bias can cause an algorithm to inadvertently privilege or disadvantage one group of users over another. Examples include customers being treated differently because of systemic prejudices regarding race, gender, sexuality, disability or ethnicity.
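A hedged Python sketch (an illustration, not from the glossary) of one simple way such bias can be surfaced: compare an algorithm's approval rate across two groups. The applicant data and the 0.5 score threshold are invented.

applicants = [
    {"group": "A", "score": 0.72}, {"group": "A", "score": 0.55},
    {"group": "A", "score": 0.48}, {"group": "B", "score": 0.51},
    {"group": "B", "score": 0.33}, {"group": "B", "score": 0.40},
]

def approval_rate(group):
    """Share of a group's applicants whose model score clears the threshold."""
    members = [a for a in applicants if a["group"] == group]
    approved = [a for a in members if a["score"] >= 0.5]
    return len(approved) / len(members)

print("Group A approval rate:", approval_rate("A"))
print("Group B approval rate:", approval_rate("B"))
# A large gap between the two rates can signal bias in the model or its training data.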

Alignment: The field of research responsible for ensuring that artificial general intelligence (so-called god-like AI) systems have goals that align with human values. For example, alignment researchers have helped train AI models not to answer questions about how to self-harm and not to use biased language.

Artificial general intelligence (AGI): A computer system capable of generating new scientific knowledge and performing any task a human can perform. This would enable the creation of hyperintelligent computers that can learn and develop autonomously, understand their environment without the need for supervision, and transform the world around them (see artificial intelligence, god-like AI and superintelligence).

Artificial intelligence (AI): The science of enabling machines to perform tasks that previously required human intelligence, such as reasoning, making decisions, distinguishing between words and images, learning from mistakes, predicting outcomes and solving problems. It involves using computers to recreate or perform such actions, often with greater speed and accuracy than previously achieved (see machine learning).

Big data: Very large datasets that can be analyzed computationally to uncover patterns, trends and relationships. Businesses may use big data analytics to identify common human behaviors, transactions and interactions.

Chatbot: A software application that can answer text questions and mimic human conversation by analyzing the text and predicting the desired answer. Chatbots have mainly been used as virtual assistants on customer-service websites, but generative AI has enabled chatbots that can produce writing in a variety of formats (see generative AI and generative pre-trained transformer).

ChatGPT: A natural language processing chatbot built on AI technology developed by OpenAI. ChatGPT is based on a language model that its developer fine-tunes using feedback from users. In response to text-based questions and “prompts” describing the type of output desired, the chatbot can compose articles and essays, write emails, tell creative stories and generate programming code.

Compute: The computational power that an AI system needs to process data, train machine learning models, make predictions and perform other tasks. It is measured in floating-point operations per second (FLOPS).

Computer vision: The field of study that uses computers to obtain useful information from digital images and videos. Applications include object recognition, facial recognition, medical imaging, navigation and video surveillance. It uses machine learning models that can analyze and distinguish between different images based on their attributes.
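A toy Python sketch (an illustration added here, not from the glossary; NumPy is an assumed dependency) of the underlying idea: an image is just an array of pixel values, and even a crude attribute such as average brightness can be used to tell two images apart. Real computer vision systems learn far richer attributes with machine learning.

import numpy as np

dark_image = np.full((4, 4), 30)      # mostly dark pixels (0 = black, 255 = white)
bright_image = np.full((4, 4), 220)   # mostly bright pixels

def classify(image):
    """Label an image by a single simple attribute: its average brightness."""
    return "bright" if image.mean() > 128 else "dark"

print(classify(dark_image), classify(bright_image))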

DALL-E: A deep learning model developed by OpenAI that can generate digital images from text-based natural language descriptions, called prompts, entered by a user.

Data science: Research that involves processing large amounts of data to identify patterns, trends and outliers, and to provide insight into real-world problems.

Deepfake: A synthetic voice, video or image that can convincingly represent a real person, or create a realistic impression of someone who has never existed. Created by machine learning algorithms, deepfakes can make it appear that real people are saying and doing exactly what their creators want them to. Deepfakes raise concerns about their ability to enable financial fraud or spread political misinformation (see generative adversarial network).

Deep learning (DL): A subset of machine learning that can be used to solve complex problems such as speech recognition and image classification. Unlike machine learning that requires human input to understand and learn from the data, DL ingests unstructured data such as text, music and video in its raw form and learns on its own to distinguish between different categories of data. DL models run on software called neural networks, which are modeled on the human brain.

Floating-point operations per second (FLOPS): A unit of measurement used to express the computing power of a supercomputer.
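A rough Python sketch (added here for illustration; NumPy is an assumed dependency) of how FLOPS can be estimated on an ordinary machine: time a matrix multiplication and divide the approximate number of floating-point operations by the elapsed time.

import time
import numpy as np

n = 1024
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
_ = a @ b                              # roughly 2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"Approximately {flops:.2e} floating-point operations per second")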

Generative AI: A subset of machine learning models that can generate media such as text, images and music. Generative AI is trained on vast amounts of raw data, for example the text of millions of web pages and books, learns the patterns within it, and, when asked a question in written language, generates the most likely correct response (see machine learning).
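A toy Python sketch (an illustration, not from the glossary) of that idea of learning patterns from text and generating the most likely continuation. Real generative AI models are vastly larger; the training sentence below is invented.

from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def generate(start, length=6):
    """Repeatedly append the most likely next word, given the last word so far."""
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))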

Generative adversarial network (GAN): A machine learning technique that can generate data that is difficult to distinguish from the data it is trained on, such as realistic “deepfake” images. A GAN consists of two competing elements: a generator and a discriminator. The generator creates fake data, and the discriminator compares it with the actual “training” data and gives feedback on where it detects differences. Over time the generator learns to create more realistic data, until eventually the discriminator can no longer distinguish what is real from what is fake.
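A compact Python sketch (assuming PyTorch as a dependency; an illustration added here, not code from the article) of the generator/discriminator loop described above. The generator learns to imitate samples drawn from a normal distribution centered at 4, using only the discriminator's feedback.

import torch
import torch.nn as nn

# Generator: turns random noise into a single number.
# Discriminator: scores how "real" a number looks (1 = real, 0 = fake).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0              # "training" data the GAN imitates
    fake = generator(torch.randn(64, 8))         # the generator's fake data

    # Discriminator step: learn to label real data 1 and generated data 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: use the discriminator's feedback to make fakes look real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean (4).
print(generator(torch.randn(1000, 8)).mean().item())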

Generative pre-trained transformer (GPT): One of a large family of language models developed by OpenAI since 2018 and used to power the ChatGPT chatbot.

God-like AI: A general term for artificial general intelligence.

Hallucinations: Flaws in generative AI models that can lead chatbots to state false facts or “invent” reality. Examples of hallucinations include making up fake book quotations and answering “elephant” when asked which mammal lays the largest eggs.

Human in the Loop (HITL): A system composed of humans and AI components. Humans can intervene by training, tuning, and testing the system’s algorithms to produce more useful results.

Large language model (LLM): A machine learning algorithm that can recognize, summarize, translate, predict and generate text.

Machine learning (ML): An application of AI that enables computer programs to learn and adapt automatically from new data without being explicitly programmed. ML programs improve over time through training, learning from past experiences and mistakes and discovering patterns in new data.
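A minimal Python sketch (an added illustration, not from the glossary) of a program improving through training: a one-parameter model repeatedly adjusts itself to reduce its mistakes on invented example data whose true relationship is y = 3 * x.

data = [(1, 3), (2, 6), (3, 9), (4, 12)]      # (input, correct output) pairs

weight = 0.0                                   # the model's single learned parameter
learning_rate = 0.01

for epoch in range(200):
    for x, y in data:
        prediction = weight * x
        error = prediction - y                 # how wrong the model currently is
        weight -= learning_rate * error * x    # adjust the parameter to reduce the error

print(round(weight, 3))                        # approaches 3.0 as training progresses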

Multi-agent system: A computer system containing multiple interacting software programs known as “agents”. Agents often actively assist and collaborate with humans to complete tasks. The most common everyday examples are virtual assistants on smartphones and personal computers such as Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana.

Natural language processing: The branch of AI that uses computer algorithms to analyze or synthesize human speech or text. The algorithm looks for linguistic patterns in how sentences and paragraphs are constructed, and how words, contexts and structures work together to create meaning. It is used to develop customer service chatbots, speech recognition, and automatic translation.
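A small Python sketch (added for illustration) of one elementary natural language processing step: turning sentences into word counts so an algorithm can compare their content numerically, as a customer-service chatbot might when matching a question to an answer. The sentences are invented.

import re
from collections import Counter

def bag_of_words(sentence):
    """Represent a sentence as a count of its lower-cased words."""
    return Counter(re.findall(r"[a-z']+", sentence.lower()))

def overlap(a, b):
    """Count how many word occurrences two sentences share."""
    return sum(min(a[word], b[word]) for word in a)

question = bag_of_words("How do I reset my password?")
faq_1 = bag_of_words("Steps to reset a forgotten password.")
faq_2 = bag_of_words("Our opening hours and store locations.")

print(overlap(question, faq_1))   # higher overlap: more likely the relevant answer
print(overlap(question, faq_2))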

Neural network: A computer system for processing data inspired by the way neurons interact in the human brain. Data enters the system, neurons communicate to make sense of it, and the system creates the output. The more frequently the system processes the data, the better it will be able to identify differences within the dataset (for example, distinguishing between images).
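A bare-bones Python sketch (added for illustration; NumPy is an assumed dependency) of data flowing through a tiny neural network: the input is combined by a layer of hidden “neurons” and then by an output neuron. The weights are random, so the output is meaningless until the network is trained.

import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, 0.2, 0.9])                 # input data (three features)

hidden_weights = rng.standard_normal((3, 4))  # connections: input -> 4 hidden neurons
output_weights = rng.standard_normal((4, 1))  # connections: hidden -> 1 output neuron

hidden = np.tanh(x @ hidden_weights)          # each hidden neuron combines its inputs
output = hidden @ output_weights              # the output neuron combines hidden activity
print(output)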

Open Source: Software and data that can be edited and shared freely to allow researchers to collaborate, review and replicate their findings, and share new developments with the wider developer community.

Singularity: A hypothetical point in time at which artificial general intelligence surpasses human intelligence, accelerating technological progress and automating any knowledge-based task.

Superintelligence: An AI system that is self-aware and possesses a higher level of intelligence than humans.

Supervised learning: A form of machine learning that uses labeled data to train algorithms to classify data and accurately predict outcomes. The inputs are labeled so the model can measure how well it recognizes or distinguishes between them and learns over time.
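A short Python sketch (added for illustration; scikit-learn is an assumed dependency) of supervised learning: human-supplied labels on a tiny invented dataset of [hours studied, hours slept] train a classifier to predict pass (1) or fail (0) for new inputs.

from sklearn.linear_model import LogisticRegression

X_train = [[8, 7], [7, 8], [2, 4], [1, 6], [6, 5], [3, 3]]
y_train = [1, 1, 0, 0, 1, 0]                  # labels supplied by humans

model = LogisticRegression()
model.fit(X_train, y_train)                   # the model learns from the labeled data

print(model.predict([[5, 6], [1, 2]]))        # predicted labels for unseen inputs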

Turing test: A test of a machine’s ability to exhibit human-like intelligence. It was first devised by the mathematician and computing pioneer Alan Turing in his 1950 paper “Computing Machinery and Intelligence”, in which he called it the “imitation game”. In the test, a human evaluator poses questions to another human and to a machine through a computer keyboard and monitor. If the evaluator cannot tell from the written answers which is the human and which is the machine, the machine has passed the Turing test.

Unsupervised learning: A type of machine learning in which algorithms analyze and cluster unlabeled data sets by looking for hidden patterns in the data. No human intervention is required for training or modification.
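A short Python sketch (added for illustration; scikit-learn is an assumed dependency) of unsupervised learning: k-means groups unlabeled points into clusters purely from structure hidden in the data, with no human-supplied labels.

from sklearn.cluster import KMeans

points = [[1, 2], [1, 1], [2, 2], [9, 9], [10, 8], [9, 10]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(points)           # cluster assignment for each point

print(labels)                                 # e.g. [0 0 0 1 1 1], or with the labels swapped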

Definitions taken from a Financial Times article and the Alan Turing Institute’s Data Science and AI Glossary.


