Companies urged to set AI policies


PETALING JAYA: Companies need to put in place effective policies on the use of the artificial intelligence-powered tool ChatGPT in the workplace as its use among employees increases, HR and IT experts say.

These policies should cover issues such as confidentiality, regulation and quality, they said.

Describing ChatGPT as the latest disruptive technology to follow the Internet and smartphones, Dr Selvakumar Manickam, a cybersecurity researcher and associate professor at Universiti Sains Malaysia, said it could become a powerful tool for businesses in communication, marketing and planning.

“Therefore, policies on the effective use of ChatGPT should be encouraged in enterprises.

“Of course, these policies should also cover issues such as confidentiality, regulation and quality,” he said.

However, cybercriminals can also leverage ChatGPT to carry out new forms of phishing attacks, using it to create emails and messages that can bypass email security scanners, Selvakumar added.

This is because ChatGPT can accelerate the learning process for aspiring hackers.

“In corporate use, employees may inadvertently feed data or information into ChatGPT, which may be incorporated into ChatGPT’s knowledge corpus, exposing sensitive corporate data to other users,” he said.

Legal practitioner Chia Swee Yik said there is always a need for some form of IT or Internet policy to act as a control.

“This could be done by amending existing policies, introducing new policies, or issuing statements informing employees.

“I think employees are already using ChatGPT to help them do their job, especially in content creation and generation roles.

“So this certainly carries risks for employers, such as confidentiality, accuracy of information and copyright, just to name a few,” Chia said, adding that such policies needed to be introduced to govern its use and make the employer’s expectations clear.

On disciplining employees who misuse ChatGPT, he said it would depend on the policy, but disciplinary action should be consistent with the employer’s disciplinary policy and commensurate with the severity of the violation.

Malaysian Employers Federation (MEF) President Datuk Dr Syed Hussain Syed Husman said stakeholders need to know about ChatGPT and its features.

As such, it is premature to create guidelines or policies until stakeholders have a more detailed understanding, he said.

“Yes, if it’s going to go mainstream, then, like all things, we need to put policies in place for governance. This is the right thing to do.

“Like all new technologies, systems and modes of communication, we need to understand their benefits and limit their negative impacts,” he said, adding that stakeholders have not raised the issue so far.

ChatGPT guidelines will vary depending on the type of job or industry in which it is used, said Datuk Koong Lin Loong, treasurer-general of the Associated Chinese Chambers of Commerce and Industry of Malaysia (ACCCIM).

This is because the nature of the work may be technical or have particular requirements, and the variables will differ, so policies need to be adapted accordingly, he added.

“For example, many people use Google, but there is no concrete guide for it. The same goes for social media.

“You can have a penknife that’s used to open letters and boxes, but it’s dangerous if you use it incorrectly. So how you use it is important,” Koong said.

ChatGPT belongs to the generative pre-trained transformer (GPT) family of large language models developed by the US company OpenAI.
