Generative AI Needs Guardrails As Enterprises Add It To Software Development

AI For Business



AI’s ability to create software is steadily improving. Platforms like GitHub’s Copilot, AWS’ CodeWhisperer, and Tabnine are trained on open source code to generate software and empower developers through natural language interfaces.

Analyst firm Forrester calls such solutions TuringBots: AI-powered software that can help plan, design, build, test, and deploy application code. However, the exponential growth of interest in generative AI has raised the question of what impact this technology might have on the software creation process.

Companies looking to harness the power of AI in software creation need clear guardrails to keep their applications secure and their processes running smoothly. Adoption has already begun, and even in the experimental phase, CIOs should create policies to shape how these tools are added to the development lifecycle.

“I don’t think shutting these tools down is the right policy,” Mike Gualtieri, Forrester’s vice president and chief analyst, said at a panel discussion last month.

Instead, executives should stay up-to-date with the latest developments in the vendor landscape, understand what works within the current ecosystem, and make deployment decisions based on that, Gualtieri said.

Testing is the key

AI tools can generate code and offer suggestion after suggestion, even for the simplest of prompts. However, there should be a layer of protection between the machine-written code and the production environment.

As GitHub reported in February, its Copilot tool generated an average of 46% of the code in files where developers used it, up from 27% in June.

Diego Lo Giudice, vice president and principal analyst at Forrester, says it’s important to remember that AI creates code based on what humans have built before.

“Is all source code out there completely secure and free of vulnerabilities? No, it’s not,” said Lo Giudice. “You still need to go through the steps of security checks and running security scanning tools.”
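That scanning step can be made concrete with even a trivial automated check. The sketch below is a hypothetical illustration, not any vendor's tooling: it uses Python's standard `ast` module to flag a couple of obviously risky calls (`eval`, `exec`) in machine-generated code before it moves toward production. Real security scanning tools perform far deeper analysis than this.

```python
import ast

# Calls treated as red flags in this toy example; real SAST tools
# check for far more (injection, secrets, unsafe deserialization, ...).
RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list:
    """Return a list describing risky calls found in the given source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: {node.func.id}()")
    return findings

# Example: scan a snippet as if it were AI-generated.
generated = "def run(cmd):\n    return eval(cmd)\n"
print(flag_risky_calls(generated))  # a non-empty result would block the merge
```

In a real pipeline this kind of gate would sit in CI, failing the build whenever the scanner reports findings, so no machine-written code reaches production unchecked.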

Human involvement is key to how generative AI will shape the software development lifecycle.

“You can never blame ChatGPT and you can never blame a TuringBot,” Gualtieri said. “You are still responsible.”

Most organizations are still in the experimental stages of their generative AI efforts. And despite the data privacy risks and other unknowns associated with the emerging technology, executives believe the benefits outweigh the risks.

“Each company may have a different approach, but they need to start working with this technology and quickly understand how it can make developers, development teams, and IT as a whole much more efficient,” Lo Giudice said.


