Using AI, school leaders weigh public safety and privacy rights



Mass shootings. Bullying that can’t be stopped, online or in real life. As the country’s social problems continue to permeate our schools, the tech sector is offering proactive solutions aimed at curbing self-harm and violence against others. Today, using artificial intelligence (AI), school district leadership teams can be alerted to threats and respond in real time.

But at what cost?

In this moment of ever-advancing technological progress, with AI able to comb public social media in an instant and security cameras everywhere, the human element is more essential than ever, experts say. Front and center are public safety and individual rights.

“In the security world, there is always a balance between keeping people safe and keeping things private,” said Derek Peterson, CEO of Ronkonkoma-based Soter Technologies. “You are always walking that tightrope.”

Robert Mesas. Courtesy of Central Business Systems

The “goal” of most surveillance programs is “to raise awareness of self-harm and violence,” said Robert Mesas, vice president of Central Business Systems, a full-service IT consulting firm based in Melville. Mesas’ client base includes school districts, and those clients want options when it comes to public safety.

They want a solution when 74 people have already died or been injured in US schools this year alone, including the fatal shooting of three children and three adults in Nashville last week. And 2022 saw 46 school shootings, the highest number since 1999, according to the Washington Post’s tracker.

“School officials, as well as the government, are under pressure from everyone to do something, and I don’t think anyone really knows what to do,” Michael Nizich told LIBN. Nizich is director of the Entrepreneurship and Technology Innovation Center and an adjunct associate professor of computer science at New York Institute of Technology. In this light, though, he speaks as a father and a citizen who, like everyone interviewed for this article, wants his children to be safe.

When it comes to mass shootings, nearly half of shooters leaked their plans ahead of time, according to the Violence Project, a nonprofit research center. And social media can provide that platform, especially as its use becomes more and more common.

Mesas said Central Business Systems is piloting an AI product that monitors public online and social media activity 24/7 on platforms like Instagram and Facebook to identify potential threats. The company offers two versions. One sends alerts directly to school districts, which evaluate them with their own resources. With the other, Central Business Systems’ partners evaluate flagged content and notify the school where appropriate, and the school’s own team of experts follows prescribed protocols.

Michael Nizich. Courtesy of New York Institute of Technology

“We call it web scraping…we do it all the time, and it’s legal,” Nizich said of combing public posts online. “Somebody’s data is public because they ticked off their user agreement and made it public. They’re basically making it public for the whole world to access.”
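
To make the mechanics concrete, here is a minimal Python sketch of the kind of public-page scraping and keyword matching Nizich describes. The URL, function names, and watch-list terms are all hypothetical, and this illustrates only the general technique, not any vendor’s actual system.

```python
# Illustrative sketch only -- not any vendor's actual product.
# The URL and watch-list terms below are hypothetical examples.
import requests

WATCH_TERMS = ["hurt myself", "bring a gun"]  # hypothetical examples


def fetch_public_page(url: str) -> str:
    """Download a publicly accessible page -- basic web scraping."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.text


def find_watch_terms(text: str) -> list:
    """Return any watch-list terms that appear in the public text."""
    lowered = text.lower()
    return [term for term in WATCH_TERMS if term in lowered]


if __name__ == "__main__":
    page = fetch_public_page("https://example.com/public-profile")  # hypothetical URL
    hits = find_watch_terms(page)
    if hits:
        print("Flagged for human review:", hits)
```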

Nizich said data is “the new oil.” When oil was discovered in the 1800s, “we had to create a use for it so we could sell it.” Today, with AI algorithms and fast, efficient processing power, massive amounts of data are used for decision-making in industries from finance to insurance, and the future holds limitless possibilities.

Still, the act of combing through social media “could raise potential concerns,” said Paul Trapani, president of the Plainview-based Long Island Software & Technology Network, which fosters the region’s technology community. Issues may include defining “school boundaries” and “students’ individual rights,” Trapani said.

Paul Trapani. Photo by Justin Poldino

According to Trapani, web scraping can also have unintended consequences. Take hiring: an employer might deploy AI to search a candidate’s social media and surface everything that person has ever tweeted. “AI can find it in a second,” and perhaps even store it in a database, he said. “It stays there forever.”

Or, Nizich said, “you have a ‘nice kid’ who tells bad jokes, and the system warns schools that this could be a threat.” Meanwhile, at that same school, there may be a genuinely dangerous student who is smart enough not to post bad jokes at all.

Additionally, these companies work under contract and are not impartial arbiters, so they may feel pressure to deliver results to their clients, Nizich said.

Overall, he said, there could be “too many negatives to make such a program positive.”

According to Mesas, Central Business Systems does not want to issue false alerts. The product is customized with keywords to flag relevant posts, he said, though there may be times when a flagged student was simply posting song lyrics.
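
As a rough illustration of why keyword matching alone can misfire in exactly the way Mesas describes, the hypothetical sketch below never raises an alarm on its own; every match is routed to the district’s human review team, with a note when the text resembles known song lyrics. The keyword and lyric lists are invented for the example.

```python
# Hypothetical sketch: keyword flagging that always defers to human review.
from typing import Optional

KEYWORDS = {"gun", "shoot", "hurt"}    # invented district-chosen terms
KNOWN_LYRICS = {"i shot the sheriff"}  # invented lyric snippets


def queue_for_review(post: str) -> Optional[dict]:
    """Flag a matching post for the review team; never auto-alert."""
    lowered = post.lower()
    matched = KEYWORDS & set(lowered.split())
    if not matched:
        return None  # nothing to review
    looks_like_lyrics = any(s in lowered for s in KNOWN_LYRICS)
    return {
        "post": post,
        "matched_terms": sorted(matched),
        "note": "possible song lyrics" if looks_like_lyrics else "needs human review",
    }


print(queue_for_review("i shot the sheriff"))  # flagged, noted as possible lyrics
```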

That’s why it’s important that a multifaceted team evaluates alerts. The team could consist of superintendents, counselors, psychologists, and other people who know the students well and can assess the situation, Mesas said.

“This is very important,” Mesas said, adding that school districts and their boards should meet to discuss and develop policies.

“Parents have the right to opt out of services so that their children are not monitored,” Mesas said.

Soter’s Juno AI product, meanwhile, uses cloud-based sentiment and behavioral analytics to help school officials gain insight into the emotional climate of their buildings. The program can detect, for example, that a fight is starting on the third floor and alert school officials who may be on the ground floor. That way, officials can react in real time rather than reviewing camera recordings after the fact.

Soter says faces can be quickly redacted before any video is shared, in line with federal and district privacy guidelines.

Derek Peterson. Photo by Judy Walker

The company also says it has the ability to study facial expressions to “identify kids who are having a bad day” and warn schools “before they do anything that harms the school,” Peterson said.

The company’s FlySense vaping solution, meanwhile, monitors air quality and abnormal sounds, including escalating voices, without actually recording conversations. It can detect both vaping and potential bullying, alerting school personnel in real time if, say, a fight breaks out in a bathroom.
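
For a sense of how such a sensor might trigger alerts without recording conversations, here is a toy threshold sketch. The threshold values and function names are made up and do not reflect FlySense’s actual detection logic; only numeric readings are used, so no audio content is stored.

```python
# Toy sketch of threshold-based alerting; the thresholds are invented and
# do not reflect FlySense's actual detection logic. No audio is stored.
PARTICULATE_THRESHOLD = 50.0   # hypothetical air-quality reading (vaping)
LOUDNESS_THRESHOLD_DB = 85.0   # hypothetical sound level (escalating voices)


def check_readings(particulate: float, loudness_db: float) -> list:
    """Return real-time alerts based only on numeric sensor readings."""
    alerts = []
    if particulate > PARTICULATE_THRESHOLD:
        alerts.append("possible vaping detected")
    if loudness_db > LOUDNESS_THRESHOLD_DB:
        alerts.append("possible altercation: escalating voices")
    return alerts


print(check_readings(particulate=62.0, loudness_db=90.0))
```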

But Peterson said technology is “not a silver bullet to keep people safe.” Noting the importance of the human touch, he said the company works with schools to create anti-vaping awareness campaigns to help make that happen.

School districts already monitor online activity on school-issued laptops and on school Wi-Fi, Mesas said. But what happens when a student leaves school property and potentially harmful behavior continues?

That, some say, is where monitoring can make a difference.

“In 2015, we actually saved a child from suicide,” Peterson said.

At the time, the company was monitoring social media for a school and caught a student who was about to set herself on fire.

“There was a video of herself on fire and our tool picked it all up,” he said. “That’s how we entered this market.”

One thing is clear: AI is not going away.

“We are just getting started,” said Trapani.

Soter, for example, has products in every state and in thousands of schools across 22 countries, with a major influx in Australia, Peterson said. The company plans to announce a major Canadian expansion in the coming months.

Meanwhile, the debate over privacy and safety rages on. Mesas poses the question: If it can save even one life, is it worth doing?

That’s why, he said, “school boards and parents need to dig deep. It’s a process.”

[email protected]
