Unpacking ChatGPT
ChatGPT is a specific product built on a class of technology known as large language models (LLMs). LLMs are an application area of machine learning (ML), which is itself central to modern AI. Like all ML algorithms, ChatGPT examines large amounts of data, finds “patterns” in that data – i.e. regularities that occur with sufficiently high probability – and uses those patterns to make predictions, such as which word is most likely to follow the ones before it, Puranam explained.
In school you might have sat a test in which you were shown a series of shapes – triangle, circle, star, triangle – and asked to predict what comes next. That kind of pattern recognition, he said, is essentially what the machine does when it learns.
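The pattern-based next-word prediction described above can be illustrated with a toy sketch. This is not how ChatGPT actually works internally – a real LLM uses a neural network over billions of parameters – but a simple bigram model shows the core idea of counting regularities in past text and using them to predict the next word:

```python
from collections import Counter, defaultdict

def train_bigram(words):
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" – it follows "the" most often
```

The sketch makes the “pattern with sufficiently high probability” idea concrete: the model has no understanding of cats or mats, only frequency statistics over what people wrote before.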
The term “GPT” stands for Generative Pre-trained Transformer. It is “generative” because it generates text it predicts will be helpful in response to a user’s question or instruction. It is “pre-trained” on a large text corpus using a neural network architecture called the Transformer.
In a nutshell, LLMs such as ChatGPT are complex ML algorithms that find patterns in very large amounts of text generated by people in the past and use them to predict what a specific user will find helpful, based on their input. Their complexity is striking: ChatGPT has an estimated 175 billion parameters, while GPT-4, its more advanced successor, is estimated to have 170 trillion.
According to Evgeniou, to understand the potential of LLMs such as ChatGPT, it is important to recognize that they are foundation models, not necessarily products. The same underlying model can power a variety of downstream applications, so what we are seeing now is just the tip of the iceberg.
The foundation for countless applications
ChatGPT is most commonly used for synthesizing or summarizing text, translating text into code in programming languages such as R and Python, and search. In a business context, Puranam cited example applications including copywriting marketing materials, drafting customer correspondence, summarizing large legal documents, creating operational checklists and producing financial summaries.
ChatGPT’s ability to generate text from different perspectives can broaden one’s outlook and enhance creativity beyond what a single person might imagine, said Evgeniou. For example, it can generate short texts such as a company’s mission statement from different vantage points – that of a European, an American, a Chinese person, a 10-year-old or an 80-year-old.
AI is already being used in business to enhance creativity and commercial success – Coca-Cola, for example, used AI effectively in a recent marketing campaign. But Puranam emphasized that creativity is not limited to the creative industries. The technology can augment human creativity by generating alternatives such as business plans and business models. Ultimately, however, a human must assess the quality of the generated content.
In more advanced applications, Olsen said, innovation is typically driven by basic and corporate research. The more AI can support these processes, the faster true innovation can be achieved – much as AI in biomedical research has cut the time needed to discover drugs and predict protein folding to a fraction of what humans alone would require.
Because ChatGPT is a foundation model, it can serve as the basis for many applications. Evgeniou said that as AI augments human intelligence, it surfaces needs we were previously unaware of, creating new companies, products, markets and jobs at a much faster pace.
What does ChatGPT mean for your business?
ChatGPT opens up new possibilities, but it requires sound processes that enable humans and AI to work together effectively. One of the most important lessons in technology adoption, Evgeniou said, is that organizational change is needed to implement a technology successfully and unlock its value.
In addition, trust is a necessary component of technology adoption – but trust is a double-edged sword. Too much trust in the technology can lead to overconfidence in decision-making and to narrative fallacies, where people construct stories around the narratives an LLM generates. In high-risk applications, this can endanger users’ safety.
As Evgeniou pointed out, trust is also tied to questions of liability. If professionals such as doctors, lawyers and architects make mistakes because they prioritized an AI’s output over their own judgment, are they liable?
From a consumer trust and safety perspective, the exponential growth of content enabled by technologies such as ChatGPT has made content moderation – a key issue for online trust and safety – both more important and more difficult for online platforms. Attention is also turning to the role of AI in creating information filters and bubbles.
The families of victims of the terrorist attacks in Paris have sued Google over the role its AI recommendation algorithms allegedly played in facilitating terrorism, bringing the first challenge to Section 230 of the Communications Decency Act before the United States Supreme Court. The case raises alarm bells about the potential dangers of recommendation algorithms and exposes other online platforms that use AI to litigation risk, Evgeniou said.
Talent development is another consideration. Puranam warned that overreliance on LLMs can cause our skills, especially creative and critical thinking, to atrophy. Companies should avoid the short-sighted view of automating entry-level work simply because technology makes it possible. “In some professions, you can’t be a partner without having been an associate, and you can’t be a full professor without having been a research assistant,” he said. Automation without due consideration of talent development can therefore disrupt an organization’s talent pipeline.
Evgeniou suggested that companies develop guidelines for using AI safely, identifying who should use AI, and when and how. “AI adoption requires a human in the driver’s seat to monitor AI behavior,” he said.
Is Society Ready?
While it is understandable that some worry about being replaced by ChatGPT, technology has not caused lasting mass unemployment in 150 years, said Olsen. Since AI is not expected to lead to mass unemployment in the next five to ten years either, the more important concern is how it will affect income distribution.
New technologies can have two effects: a productivity effect and a substitution effect. As economist Robert Solow observed, productivity gains tend to show up in the productivity statistics only over time. The substitution effect, meanwhile, varies with an individual’s skill level.
In the 1850s, technological innovations with a low-skill bias replaced skilled shoemakers with unskilled workers mass-producing shoes in factories. By contrast, the skill-biased technologies that enabled factory automation from the 1980s to the 2010s favored those with college degrees over less-skilled factory workers. Which groups will benefit from LLMs remains unknown.
At a more fundamental level, the question is whether LLMs are truly unbiased and inclusive. Understanding how they learn makes it clear why they are inherently biased. ML algorithms such as ChatGPT build knowledge through unsupervised learning (that is, observing conversations and text), supervised learning, and reinforcement learning, in which experts “train” the model based on user feedback.
This means that ChatGPT “learns” from the people who train and use it, and the machine adopts their values, views and prejudices about politics, society and the world at large. ChatGPT can therefore be a democratizing force, but it can also be a centralizing one, depending on the experts who train it, Puranam said.
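The point that a model adopts the views dominant in its training data can be shown with a deliberately tiny, hypothetical sketch. A statistical model has no opinions of its own; it simply reproduces whichever associations appear most often in what it was fed:

```python
from collections import Counter

# Toy illustration (not ChatGPT's actual training pipeline): a model's
# "view" is just the statistics of its training data. If one opinion
# dominates the corpus, the model reproduces that opinion.
opinions = ["coffee is great", "coffee is great", "coffee is awful"]

# Tally the word that ends each statement about coffee.
verdicts = Counter(line.split()[-1] for line in opinions)
print(verdicts.most_common(1))  # prints [('great', 2)] – the majority view wins
```

Swap in a corpus with different leanings and the “learned” verdict changes accordingly – which is exactly why the values of those who produce the training data, and of the experts who curate it, end up inside the model.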
Additionally, the risk of misinformation is heightened as content spreads rapidly and can be weaponized to threaten democracy and institutions – it could even be used to influence election campaigns, said Evgeniou. Puranam also warned that people whose social lives exist only through online channels are at higher risk, as they may be unable to distinguish truth from falsehood. Olsen agreed that ChatGPT can perpetuate the views of individuals already siloed in online information bubbles.
Panelists were cautiously optimistic and agreed that appropriate controls and regulations are necessary to ensure the ethical and responsible use of technologies such as ChatGPT.
Learning to work together
In practice, regulation has always lagged behind technological innovation. The European Union’s Digital Services Act came into force in the second half of 2022, yet its safeguards for online safety were almost immediately behind the curve: the law targets online platforms such as Facebook and Google but does not cover ChatGPT, even though ChatGPT aggregates online content.
Similarly, foundation models can be used downstream in high-risk products yet fall through the cracks of AI regulation. As big tech companies continue to develop new foundation models, downstream offerings may proliferate. If the underlying models remain unregulated, they could become a single point of failure with massive cascading effects.
But regulating emerging and evolving technologies across different geographies presents challenges. Because AI algorithms adopt the values embedded in their data, different regions may develop different AI cultures, which adds to the complexity of regulation, Evgeniou said. Even identical regulations produce different implementations and outcomes in different parts of the world, because not only the legal systems but also the underlying values differ.
Despite the challenges, a combination of actions by data scientists, businesses and regulators can make the technology more trustworthy and secure. Transparency and trust often go hand in hand, and businesses benefit from being transparent in their interactions with customers – for example, by notifying them when content is generated by ChatGPT or when they are interacting with a machine rather than a human.
One ongoing effort to align AI more closely with human values is the field of reinforcement learning from human feedback (RLHF), Evgeniou said. Incorporating human feedback can improve the quality of AI output as judged against human values. However, according to Evgeniou, we are only beginning to solve the AI value-alignment problem.
AI has proven it can beat humans at chess, but this is not the case in every field; more often, AI can be used to complement humans. To do that well, we need to better understand the opportunities and limits of combining the two. As LLMs continue to evolve, all the panelists identified the human-machine ensemble – using AI to improve the quality of human thinking, and identifying the conditions needed to achieve this – as a promising field.
