GPT-4: How to use the AI chatbot that goes beyond ChatGPT



When ChatGPT launched, people were in awe of its natural language capabilities as an AI chatbot, originally powered by the GPT-3.5 large language model. But the long-awaited GPT-4 model pushed the boundaries of what we thought was possible with AI, with some calling it an early glimpse of AGI (artificial general intelligence).

What is GPT-4?

GPT-4 is the latest language model developed by OpenAI, capable of generating text that closely resembles human writing. It extends the technology behind ChatGPT, which was previously based on GPT-3.5 but has since been upgraded. GPT stands for Generative Pre-trained Transformer, a deep learning architecture that uses artificial neural networks to write like a human.

According to OpenAI, this next-generation language model outperforms ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating creative projects and collaborating with users. Examples of these include music, screenwriting, technical writing, and even “learning the user's writing style.”

GPT-4 Developer Livestream

Longer context also comes into play here: GPT-4 can now process up to 128,000 tokens of text from users. You can even send GPT-4 a web link and ask it to interact with the text on that page. OpenAI says this will help with longer-form content and “augmented conversations.”
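
That 128,000-token figure is a budget you can check against your own documents before pasting them in. As a rough, hypothetical sketch (not from the article), here is how you might count tokens with OpenAI's open-source tiktoken tokenizer; the file name is a placeholder, and token counts for the newest models may differ slightly.

```python
# Rough sketch: check whether a document fits in a 128,000-token context window.
# Assumes `pip install tiktoken`; "long_article.txt" is a placeholder file name.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")  # tokenizer for the GPT-4 family

with open("long_article.txt", encoding="utf-8") as f:
    text = f.read()

num_tokens = len(encoding.encode(text))
print(f"{num_tokens} tokens -- "
      f"{'fits within' if num_tokens <= 128_000 else 'exceeds'} a 128k context window")
```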

GPT-4 can now also receive images as the basis for a conversation. In one example provided on the GPT-4 website, the chatbot is given an image of some baking ingredients and asked what it could make with them. It's not currently known whether it can use video as well.


Finally, OpenAI also says that GPT-4 is significantly safer to use than previous generations: in its own internal tests, the model was reportedly 40% more likely to produce factual responses, while being 82% less likely to “respond to requests for disallowed content.”

To make these advancements, OpenAI trained GPT-4 with human feedback, claiming to have “engaged over 50 experts for early feedback in areas like AI safety and security.”

In the first few weeks after its release, users posted some of the most amazing things they had created with GPT-4, from inventing new languages to detailing plans for escaping into the real world to building complex app animations from scratch. One user even managed to get GPT-4 to create a working version of Pong in just 60 seconds, using a combination of HTML and JavaScript.

How to use GPT-4


GPT-4 is available to all users at every subscription level OpenAI offers. Free-tier users get limited access to the full GPT-4 model (approximately 80 chats within a three-hour window) before being switched to the smaller, less capable GPT-4o mini until the cooldown timer resets. To get expanded access to GPT-4 and the ability to generate images with DALL-E, upgrade to ChatGPT Plus. To upgrade to the $20 monthly subscription, just click “Upgrade to Plus” in the ChatGPT sidebar, enter your credit card details, and you'll be able to switch between GPT-4 and older versions of the LLM.

If you don't want to pay, there are a few other ways to experience the power of GPT-4. First, you can try it as part of Microsoft's Bing Chat. Microsoft has revealed that Bing Chat runs on GPT-4, and it's completely free to use. Bing Chat clearly lacks some GPT-4 features and combines the model with Microsoft's own technology, but you still get access to the enhanced LLM (large language model) and the advanced intelligence that comes with it. Note that while Bing Chat is free, it is limited to 15 chats per session and 150 sessions per day.

Several other applications currently use GPT-4 as well, such as the question-and-answer site Quora.

When was GPT-4 released?


GPT-4 was officially launched on March 14, 2023, as previously confirmed by Microsoft, and was first made available to users through ChatGPT Plus subscriptions and Microsoft Copilot. GPT-4 is also available as an API “for developers to build applications and services.” Companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy. The first public demo of GPT-4 was streamed live on YouTube, showcasing the new features.

What is GPT-4o mini?

GPT-4o mini is the latest addition to OpenAI's GPT-4 model line. It is a streamlined version of the larger GPT-4o model, suited to simple but high-volume tasks that benefit more from fast inference than from the full power of the larger model.

GPT-4o mini was released in July 2024 and replaced GPT-3.5 as the default model users interact with in ChatGPT once they hit the three-hour query limit for GPT-4o. According to data from Artificial Analysis, 4o mini significantly outperforms similarly sized small models such as Google's Gemini 1.5 Flash and Anthropic's Claude 3 Haiku on the MMLU reasoning benchmark.

Is GPT-4 better than GPT-3.5?

The free version of ChatGPT was originally based on the GPT-3.5 model, but as of July 2024 it runs on GPT-4o mini. This streamlined version of the larger GPT-4o model is far superior to GPT-3.5 Turbo: it can understand and respond to more inputs, has more safeguards in place, gives more concise answers, is faster, and costs more than 60% less to operate.

GPT-4 API

As mentioned above, GPT-4 is available as an API to developers who have made at least one payment to OpenAI in the past. The latest version of GPT-4 can be used by developers through the API, alongside the legacy GPT-3.5 models. At the time of the GPT-4o mini release, OpenAI noted that GPT-3.5 will remain available to developers but will eventually be taken offline, although the company has not set a timeline for when that will happen.
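
If you do have a developer account, calling GPT-4 through the API takes only a few lines. The following is a minimal sketch, assuming the official openai Python package (v1 or later) is installed and an API key is stored in the OPENAI_API_KEY environment variable; the prompt text is just an example.

```python
# Minimal sketch of a GPT-4 call via OpenAI's Chat Completions API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",  # swap in "gpt-4o" or "gpt-4o-mini" for cheaper, faster calls
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in two sentences what GPT-4 adds over GPT-3.5."},
    ],
)

print(response.choices[0].message.content)
```

API usage is billed per token rather than through a flat subscription, so shorter prompts and the smaller models keep costs down.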

While the API is primarily intended for developers building new apps, it has also caused some confusion for consumers. Plex, for example, lets you integrate ChatGPT into its Plexamp music player, but doing so requires a ChatGPT API key, which is purchased separately from ChatGPT Plus. In other words, if you want API access, you'll need to sign up for a developer account.

Is GPT-4 getting worse?

As much as GPT-4 impressed people when it was first released, some users noticed a decline in the quality of its answers over the following months. This was noticed by key figures in the developer community and even posted directly to OpenAI's forums. However, it's all anecdotal, and an OpenAI executive denied the hypothesis on Twitter. According to OpenAI, it's all in our heads.

No, they didn't make GPT-4 dumber. Quite the opposite: the new version is smarter than the previous one.

Current hypothesis: With heavy use, you start to notice issues you didn't notice before.

— Peter Welinder (@npew) July 13, 2023

Then research emerged suggesting that answer quality really was deteriorating with subsequent updates to the model: comparing the March and June versions of GPT-4, researchers found that its accuracy on one benchmark task dropped from 97.6% to 2.4%.

While it's not conclusive evidence, it does seem like what users are noticing is more than just imagination.

Where is GPT-4's visual input?

One of GPT-4's most anticipated features is visual input, which lets ChatGPT Plus interact with images as well as text, making the model truly multimodal. Uploading an image for GPT-4 to analyze and work with is as easy as uploading a document: just click the paperclip icon on the left side of the context window, select your image source, and attach the image to your prompt.
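
The same image-understanding capability is also exposed through the API. Here is a minimal, hypothetical sketch using the Chat Completions endpoint with a vision-capable model; the image URL is a placeholder, and it again assumes the openai Python package and an API key are set up.

```python
# Minimal sketch: send an image plus a question to a vision-capable GPT-4 model.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY in the environment;
# the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # a vision-capable model in the GPT-4 family
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I bake with these ingredients?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/baking-ingredients.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```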

What are the limitations of GPT-4?

While discussing GPT-4's new features, OpenAI also noted the limitations of the new language model. Like previous versions of GPT, OpenAI said the latest model still suffers from “social bias, hallucinations, and adversarial prompts.”

In other words, GPT-4 isn't perfect. It still gets answers wrong, and the internet is filled with examples that show its limitations. But OpenAI says these are all problems it's working to solve, and that GPT-4 is generally “less creative” with its answers, making it less likely to fabricate facts.

Another major limitation is that the GPT-4 model was trained on internet data only up to December 2023 (the cutoff for GPT-4o and 4o mini is October 2023). However, since GPT-4 can perform web searches rather than relying solely on its pre-trained data set, it can easily find and cite more recent facts from the internet.

GPT-4o is, of course, the newest release, and GPT-5 has yet to arrive.
