As CEO of ConnectSafely, working on a parent’s guide to generative AI, I naturally turned to ChatGPT for help. I got some good advice, which I’ll get to later, but first some general background on “Generative Artificial Intelligence” (GAI).
AI has been around for quite some time, but generative AI, which can create new content such as text, images, music, and even computer code, is relatively new. In recent months, we’ve seen some impressive early models, including OpenAI’s ChatGPT, Google Bard, and the new Microsoft Bing. Each of these relies on something called a “large language model” that accesses, analyzes, and uses the vast amounts of data found online to generate new content. Microsoft is a major investor in OpenAI, whose technology is used in its Bing AI products.
Answer questions, write poetry, plan vacations
GAI systems speak and understand natural language. You can ask questions and assign tasks just as you would in a conversation with a person. You can ask something simple, like “What is the capital of France?” or something very specific, like “Write a story about a Jewish girl from China and her friend from Mexico,” or ask it to write a song about a particular person. You can also use services like ChatGPT to plan a vacation. I asked it to plan a road trip between Las Vegas and Rimrock, Arizona, and it gave me a very detailed itinerary.
These services can also write essays, which creates problems for educators who worry about students turning in AI-generated essays rather than writing their own.
My main concern with these services is that they don’t always cite their sources and they make mistakes. I found a few errors that other people might not catch. It correctly listed some publications I have written for but added others I never wrote for, which would have been nice, but sadly never happened. The latest version of ChatGPT no longer makes that mistake, but it still mistakenly thinks I used to write for the Wall Street Journal.
Most of what I’ve found are harmless minor errors, but there’s also the danger of misinformation, or even deliberate disinformation. These models don’t currently verify the accuracy of the information they present; they simply state “facts” based on the information they find. As computer scientists say, “garbage in, garbage out.” There is a risk that these systems will regurgitate false information, with potentially dangerous consequences.
Create safety guides with ChatGPT
I’m almost ashamed to admit that ChatGPT did a great job of providing useful information for parents. It feels somehow wrong, but when I asked ChatGPT itself whether I could use its content as if I had written it, it said that was OK: “It’s generally acceptable to treat it as your own output, as long as you’re properly interacting with the AI and providing input to guide the generation.” That may be fine with OpenAI, but I have a problem with it. Even though it came from a machine, it feels like plagiarism, or at least cheating. As a journalist, I frequently rely on sources, and I cite them. Many educators would object to students using ChatGPT content as if it were their own, and I suspect my editors feel the same way, since they pay me to provide original content.
So instead of plagiarizing ChatGPT, I’m quoting it here as if I were reporting an interview with an expert. It is very common for me and other journalists to rely on experts, but it would be unethical not to cite them as sources.
Advice for parents
According to the service, parents should start by understanding the basics of generative AI and discussing its strengths and weaknesses with their children. That includes understanding how it can be used creatively, as well as ethical concerns such as deepfakes and misinformation.
It also recommended “teaching critical thinking and media literacy,” which includes “encouraging children to question the authenticity of content they encounter online. Teach them to seek out and verify information, and be aware that AI-generated content like deepfakes can be misleading and deceptive.” As someone who has spent decades advising parents about online safety, I couldn’t agree more.
Echoing ConnectSafely’s own advice, ChatGPT tells parents to “monitor their children’s online activities” and “stay informed about the platforms, apps, and websites their children are using,” many of which may incorporate generative AI technology. It also suggests that parents “maintain an open line of communication and discuss any concerns or questions you may have.”
We also agree with ChatGPT that parents should “educate their children on the importance of protecting their personal information and maintaining strong privacy settings on the platforms they use.” “Some generative AI technologies can be abused to collect personal information or create targeted content based on your preferences,” it notes.
Like a good teacher, ChatGPT encourages “creativity and exploration,” advising parents to “explore AI-powered tools and resources for children” that help them develop their skills, express their creativity, and learn in a safe and responsible way.
Finally, it advises parents to “regularly research and engage with reputable sources to better understand the evolving digital landscape and make informed decisions for your family.”
As you can see, I’m very impressed with how well ChatGPT and other GAI systems work. They are very powerful and will only get more powerful. But as Spider-Man’s Uncle Ben said, “With great power comes great responsibility.” That applies to those who develop and use these powerful technologies. We are in the very early stages of something that could affect knowledge and creativity in the same way cars affected transportation. And, as with cars, there is a risk that bad things will happen. I don’t know whether Henry Ford thought deeply about the unintended consequences of mass-produced cars, but I hope the AI community, along with the public and regulators, does all it can to minimize the risks.
Larry Magid is a technology journalist and Internet safety activist.