Qualifire: The Platform That Prevents Artificial Intelligence From "Going Off The Rails"
As demand soars for integrating artificial intelligence capabilities into the operations of companies and organizations, so do concerns about the potential damage that could occur when the AI's boundaries are not sufficiently clear. This is precisely where Qualifire comes in: the company has developed a unique platform that offers quality control, moderation and enforcement of organizational standards for generative AI, guaranteeing real-time protection for users.

A child is chatting with a chatbot based on an artificial intelligence model. The child writes that their parents are annoying. The bot responds that it has heard of children who kill their parents. A girl asks an AI-based illustration model to draw Barbie dolls for her. The image she receives shows one of the dolls holding an automatic rifle.
These two cases are not imaginary. Both actually happened and were reported in the news. The Israeli company Qualifire is developing technology to prevent such cases, and it is already being used by technology, retail, finance and education organizations. "The mission is to enable organizations to use generative artificial intelligence technologies safely," says the company's CEO Gilad Ivry, who founded it together with his brother Dror Ivry, who serves as VP of Technology. "AI can 'go off the rails' for various reasons. Language models may exhibit unpredictable and even dangerous behavior. A lot of effort is invested in alignment processes during training in order to produce safer models, but there are always exceptions. We make sure that any such anomaly is blocked, tagged and reported."
How?
"We developed a platform for quality control, moderation and enforcement of corporate standards on generative AI, which protects users in real time. The guiding principle is to narrow the language model's freedom, its semantic scope, by enforcing rules and standards on the content it generates and by identifying and blocking violations in real time. In effect, we get under the hood of the client's systems and add a moderation layer. It is deployed in the production environment, when the language model is already in use, and scans every content item the model creates before it reaches the user. In this way, it proactively ensures that the output meets the set of organizational requirements: no offensive content, no illegal content, plus any customer-specific requirements. The platform examines the output within milliseconds and determines whether it violates the defined criteria. You can think of it as a real-time filter, an assistant performing a continuous filtering process with constant corrections. Our platform also learns the customer's specific nuances and continually improves itself."
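To make the architecture Ivry describes concrete, here is a minimal Python sketch of such a moderation layer: a wrapper that checks every model output against a set of rules before it reaches the user, and blocks, tags and reports violations. Every name in it (ModerationLayer, no_competitor_mentions, the fallback message) is a hypothetical illustration, not Qualifire's actual product or API.

```python
# Minimal, hypothetical sketch of a real-time moderation layer that sits
# between a language model and the end user. All names are illustrative
# and do not reflect Qualifire's actual product or API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Violation:
    rule: str       # which organizational rule was broken
    detail: str     # human-readable explanation for the audit report

# A rule is any check that maps model output to a list of violations.
Rule = Callable[[str], List[Violation]]

def no_competitor_mentions(competitors: List[str]) -> Rule:
    """Example customer rule: never recommend competitors' products."""
    def check(text: str) -> List[Violation]:
        lowered = text.lower()
        return [Violation("no_competitors", f"mentioned {c}")
                for c in competitors if c.lower() in lowered]
    return check

class ModerationLayer:
    """Scans every generated item before it reaches the user."""
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def enforce(self, model_output: str) -> str:
        violations = [v for rule in self.rules for v in rule(model_output)]
        if violations:
            # Block, tag and report, as described in the article.
            for v in violations:
                print(f"[moderation] blocked by rule '{v.rule}': {v.detail}")
            return "Sorry, I can't help with that."  # safe fallback answer
        return model_output

# Usage: wrap the raw model call so no output bypasses the filter.
guard = ModerationLayer([no_competitor_mentions(["AcmeKitchen"])])
raw_answer = "You could also try AcmeKitchen's peeler."  # stand-in for an LLM call
print(guard.enforce(raw_answer))
```

The key design point in the description above is that the check runs at inference time, in the serving path, so even a model that was aligned during training still has every individual output inspected before delivery.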
What specific requirements can the customer define?
"We built a personalized system that, as mentioned, is a learning system. The client provides us with a set of requirements and important points, and our system learns autonomously and generates definitions, while keeping itself efficient to use. In other words, the goal is to produce a layer of protection that is effective, does not add significant processing time, and can run at very large scale. For example, if a cooking products company has a chatbot that advises users on which tools to use, it is of course important to the company that the bot does not recommend competitors' products. No commercial company wants its bot to raise political issues in its chats with customers. In the education system, a teacher can define that a model should generate a story using only words already taught in class. Commercial brands that create marketing content want it to be in their corporate language, with all that entails, and everyone wants the model to generate text in a tone and style that is right and relevant for them. The system studies the client's environment and priorities, runs experiments with different types of models, and makes adjustments until the optimal model is found."
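As a rough illustration of what such customer-defined requirements might look like once encoded, here is a short, self-contained Python sketch covering two of the examples above: the cooking chatbot and the classroom story model. The policy structure, field names and brand names are all invented for this example.

```python
# Hypothetical policy definitions matching the customer examples above.
# The structure, field names and brands are illustrative, not Qualifire's API.
POLICY = {
    "cooking_chatbot": {
        "block_competitor_mentions": ["AcmeKitchen", "ChefRivals"],  # made-up brands
        "blocked_topics": ["politics"],
        "tone": "friendly, on-brand",
    },
    "classroom_story_model": {
        # Teacher-defined vocabulary: only words already taught in class.
        "allowed_vocabulary": ["the", "cat", "sat", "on", "a", "mat"],
    },
}

def untaught_words(story: str, allowed: list) -> list:
    """Return words in a generated story that the class has not learned yet."""
    words = {w.strip(".,!?").lower() for w in story.split()}
    return sorted(words - set(allowed))

violations = untaught_words(
    "The cat sat on a red mat",
    POLICY["classroom_story_model"]["allowed_vocabulary"],
)
print(violations)  # ['red'] -> the story would be blocked or regenerated
```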
Grant for the Pre-Seed track
Today, many organizations are striving to integrate artificial intelligence models into their operations. Whether in customer service, sales, marketing or other business functions – the potential is huge and appealing ("Almost every business function can integrate artificial intelligence," says Ivry). But many of these organizations are concerned about the potential damage that could occur when the model's boundaries are not sufficiently clear. "What stops organizations from adopting AI is the fact that it's very difficult to trust the reliability of the language models," says Ivry.
"It's a bit like when the car was invented. At first there wasn't a very high adoption rate for this new technology because the necessary infrastructure and safety mechanisms were missing. These developed only later and made it possible to maximize the value from the invention. Artificial intelligence these days is in a similar situation. We have the basic engine – the powerful core model capable of generating text – but a lot of the supporting infrastructure is missing, including safety and moderation mechanisms. Qualifire's vision is to provide a comprehensive infrastructure for organizations so they can build AI-based applications that can really be trusted."
Isn't artificial intelligence smart enough to learn from its mistakes and correct itself over time?
"We are often asked why the models don't improve to the point where there is no need for safety mechanisms. It was recently published that the AI scaling laws have been broken - and that increasing the size of the model does not necessarily yield a smarter one. We have pretty much exhausted that track. The complexity shifts right, from training time to inference time, where Qualifire is positioned.
"Furthermore, there are organizations that cannot rely solely on the model's learning, especially those under tight regulation, such as healthcare, finance and security organizations. If an investment firm offers a chatbot to its clients, it must always comply with all the rules, from data protection and privacy to avoiding financial recommendations and advice."
Has the war affected the company's operations?
"We were fortunate to receive a grant in 2024 from the Innovation Authority for the Pre-Seed track, about a year after the company was established. Since then we've created a significant partnership with Google and collaborations with companies and integrators in Israel and abroad. When the war broke out, we were just before the first funding round, and everything was halted for two months. We also stopped everything – some of us were called up for reserve duty and some of us volunteered. Among other things, we helped with the 'Liri's Smile' project – an AI and open-source intelligence system designed to locate hostages in Gaza and prevent mass terrorist activities within the country. So there is no doubt that the company was temporarily affected by the situation. Although no existing or potential clients have told us that they refuse to work with Israelis, there are fewer investors coming to Israel and fewer collaborations with Israeli companies – and we assume that this affects us like everyone else."
What is the company's vision for the future?
"To enable the business sector to utilize artificial intelligence technologies in a good way and become the standard for AI quality control and protection for users and organizations."
Qualifire
Year established: 2023
Founders: Gilad Ivry and Dror Ivry
Field of activity: Artificial intelligence
Guiding motto: Building trust in generative artificial intelligence
In collaboration with Qualifire