According to a report by the Israel National Council for the Child, between March and October 2020, when Covid-19 lockdowns and social distancing were in force, the number of incidents reported to the National Center for the Protection of Children on the Internet increased by 63% compared to the same period in 2019. The most common complaints concerned suicide threats, bullying, shaming, grooming and harassment of minors online, and sexual predatory offenses. This dismal reality is not limited to Israel: according to the National Center for Missing and Exploited Children (NCMEC), in 2020 over 21 million reports were received pertaining to some form of online sexual abuse of children, 28% more than in 2019. The lion's share of the reports concerned pictures or videos (more than 65 million items in total).
In a world where access to the Internet is practically unlimited, the task of supervising and monitoring the content to which children are exposed online must fit the new reality. Israeli start-up L1ght developed an innovative technology that identifies harmful content and issues warnings in real time, as soon as children are exposed to it. "The solutions that were created in the 2000s to deal with this problem are no longer suitable today," explains Avner Sakal, CEO of L1ght. The smart algorithms that the company has developed combine Artificial Intelligence and Machine Learning in order to identify and monitor toxicity online, including bullying, toxic discourse and pedophilic content.
“When L1ght was established, our goal was mainly to create technologies that would protect children from online dangers, but we quickly understood that they could protect everybody: children, adults, men, women, minorities and other communities. The API we developed, L1ghtning, is unique and makes it possible to analyze a large range of toxic phenomena, from textual to visual,” says Sakal.
One of L1ght’s most complex and important missions is to identify Child Sexual Abuse Material (CSAM) on the Internet. The company does this by using a unique algorithmic infrastructure that identifies such material both in images and in videos. This infrastructure can improve and adjust itself to various forms of content. The company’s engineers developed the technology to identify harmful material in video files, making it possible to detect specific harmful segments and even specific frames. This is a veritable game changer for certain content companies.
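L1ght's actual detection system is proprietary, but the idea of flagging specific segments rather than whole videos can be illustrated with a minimal sketch. Everything below is hypothetical: it assumes a per-frame classifier has already produced a harm score per frame, and simply merges consecutive flagged frames into time segments a moderator can jump to.

```python
# Hypothetical sketch, not L1ght's implementation: given per-frame harm
# scores from some upstream classifier, merge runs of flagged frames
# into (start_sec, end_sec) segments for a human moderator.

def flag_segments(frame_scores, threshold=0.8, fps=30):
    """Return a list of (start_sec, end_sec) segments where consecutive
    frames score at or above `threshold`."""
    segments = []
    start = None  # index of the first frame in the current flagged run
    for i, score in enumerate(frame_scores):
        if score >= threshold and start is None:
            start = i                       # a flagged run begins
        elif score < threshold and start is not None:
            segments.append((start / fps, i / fps))  # run ends
            start = None
    if start is not None:                   # run extends to the last frame
        segments.append((start / fps, len(frame_scores) / fps))
    return segments

# Ten frame scores from an assumed classifier, at 1 frame per second:
scores = [0.1, 0.2, 0.9, 0.95, 0.85, 0.1, 0.2, 0.9, 0.3, 0.1]
print(flag_segments(scores, threshold=0.8, fps=1))
# → [(2.0, 5.0), (7.0, 8.0)]
```

This is what lets a moderator watch "a few seconds of a movie instead of the entire work", as described below: only the returned segments need human review.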
“A significant share of the large content providers use human moderation services to filter content,” explains Ron Porat, one of the company’s founders and its CTO. “Our technology enables moderators to watch a few seconds of a movie instead of watching the entire work. Moreover, it significantly reduces the psychological and emotional impact of extensive exposure to this type of content.”
“In addition to identifying complex toxic human behaviors, L1ght is currently developing another technology that will analyze behaviors occurring in real time and deduce from them possible future behaviors,” adds Alon Gur, a senior developer at L1ght. “This will allow us to identify toxic comments and help stop a discourse of hatred before it develops.”
Proud to take part
Imbar Cohen is a data analyst at L1ght. She chose this challenging position, which entails reviewing and annotating problematic material so that the company’s algorithms will learn to identify it. Each workday, a large variety of content passes through her hands – from malicious texts to sexually explicit visual content. “I chose to join L1ght because I felt it was very important for me to find a job where I could make a real contribution to people, and I found that purpose here,” she says. “My role has a clear and direct impact on what is happening online. I could have worked in other high-tech sectors, but the most important thing for me, and also the most satisfying, is to be able to have a positive effect on people’s lives and to literally save children.”
Noga Mindlin, who is also a data analyst at L1ght, adds: “My job is definitely not trivial. I compare it to infrastructure work: the Internet was developed over time, but they forgot to install a ‘sewage system’ for it and, as a result, disease is being spread and people are getting hurt and even dying from it. L1ght has brought advanced drainage technology to the Internet; it is cleaning out the filth and protecting users. I am truly proud to be a part of this.”
Yaakov Schwartzman feels the same way. As an ultra-Orthodox Jew, he stands out in L1ght’s start-up environment. But when you learn that he has eight children, and that he and his wife have raised more than ten foster children, you can understand why he considers his work to be nothing less than a mission to protect children online. “My work is to integrate our technologies into large platforms, thereby pushing our sophisticated algorithms to the edge – to their peak ability to detect problems and dangers – and in this way enable our clients to create safer environments,” he says.
As part of his job, Yaakov sees the actual impact of the company’s product when he works with American law enforcement agencies and adapts L1ght’s technological solutions to meet their needs. “In some cases, we worked with the authorities very closely. The information that we analyzed helped them arrest pedophiles and even led to 120,000 pedophiles being banned from WhatsApp. It is very rare that a job enables you to carry out a real tikkun olam, and that you can see it happen in front of your eyes.”
Growing market need
In the United States, children between the ages of 8 and 12 are exposed to screens 4–6 hours a day, and adolescents spend up to 9 hours a day in front of a computer, television, game console or cell phone.
“Everyone spends more time in front of screens and the result is an explosion of problematic material,” Sakal explains. “When you look at the growth forecasts for gaming platforms, which are often used for posting toxic content, it is obvious that the key to the problem is pricing. The whole industry is now coping with increasing issues of safety and trust, as well as with a lack of tools. We see this with moderation companies that have started working with us lately. It makes us especially proud that we developed the best algorithms in the world for detecting CSAM and other malicious content. We still haven’t solved every issue, but we are constantly making progress.”
What prompts companies to seek out your services?
“Just as organizations once didn’t understand the necessity of a firewall, and now it is obvious that you can’t turn on a server without one, today content platforms understand that they must filter toxic content. When a client who understands this necessity approaches us, we help them define the level of toxicity they can tolerate. In this way, we understand what we must remove and in which ways, and then we configure our systems so that they will solve the client’s problem while striking the right balance between quality, precision and cost effectiveness. Our work consists of all these aspects, but thanks to our high-quality team and our professionalism, we are succeeding and are creating the global impact that we set as our goal when we founded the company.”
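The idea of a client-defined tolerance level can be sketched in a few lines. The category names, thresholds and function below are illustrative assumptions, not L1ght's real API: each client sets the toxicity score it can tolerate per category, and any item scoring above that threshold is flagged for removal.

```python
# Hypothetical sketch of per-client toxicity thresholds; the categories,
# scores and names are invented for illustration, not L1ght's product.

CLIENT_TOLERANCE = {
    "bullying": 0.6,        # this client tolerates mild rough talk
    "hate_speech": 0.4,
    "sexual_content": 0.1,  # near-zero tolerance
}

def moderate(item_scores, tolerance=CLIENT_TOLERANCE):
    """Return ('remove', category) for the first category whose score
    exceeds the client's tolerance, else ('allow', None)."""
    for category, score in item_scores.items():
        # Unknown categories default to a threshold of 1.0 (never removed).
        if score > tolerance.get(category, 1.0):
            return ("remove", category)
    return ("allow", None)

print(moderate({"bullying": 0.3, "hate_speech": 0.2}))  # → ('allow', None)
print(moderate({"sexual_content": 0.5}))                # → ('remove', 'sexual_content')
```

Tuning these thresholds per client is the "right balance between quality, precision and cost effectiveness" the quote describes: a stricter threshold removes more content but sends more borderline items to costly human review.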