Noa wakes up, dons an earpiece and greets a new day. When she opens the fridge, she doesn’t need to check what to add to her shopping list. The smart device scans the fridge and updates the list for her.
At night, when Noa goes shopping, the computer reminds her what to buy and recommends the cornflakes with the fewest calories or without gluten. On her way home, when she sees a guy she knows but can’t remember from where, a little voice whispers that they took a theater course together and notes that he is single according to his Facebook page.
At day’s end, when she meets a friend over coffee, she won’t have to remember her friend prefers soy milk and an almond croissant because her device will remind her.
Smart devices today are used as an extension of the brain. We no longer need to retain large amounts of information, only to know how to access it at the right moment. Smartphones allow access to vast troves of information and enrich our lives. Imagine the possibilities when devices not only "guess" answers when we ask them questions but also understand exactly where we are, what we are doing and, above all, what is going on around us, helping without our actively turning to them.
Initial examples of such personal assistants, combining portable computers and applications, already exist, such as Google Glass. However, before current technology can function this way, it has to clear some hurdles, primarily the need to teach the computer to see at least as efficiently as humans do. In other words, computers have to be able to analyze reality deeply, automatically and autonomously. Even discerning objects trivial to the human eye, like cats and dogs or different types of tables, is a challenge for an algorithm. The difficulty rises further when a machine needs not only to identify an object but also to understand the place and situation it is in, and to add further layers of insight.
A group established last year at Intel Haifa is tackling exactly this problem. The team of some 60 workers includes 40 in Israel. Another 15 are being added, and another 30-40 will be recruited next year. The team's constant growth attests to its importance within the semiconductor giant, says team leader Ofri Wexler.
Wexler told TheMarker that the mechanical eye has certain advantages over its human counterpart. "A person takes time to distinguish between hundreds of items on a store or library shelf," says Wexler. "A computer can see all the labels and read them simultaneously." Wexler points to his group's prototype, which can read the labels of 300 items simultaneously and select the one the user is seeking, deducing the choice from the user's particular traits and preferences.
An advanced camera can also widen the human field of vision. “This ability is critical, for example, in driving,” says Wexler. “A driver can be distracted and his field of vision is imperfect. A camera affixed to a car can warn the driver about potholes or warn him to slow down because a pedestrian he didn’t notice is approaching.”
Such solutions exist, like Israel’s Mobileye for cars. In the next stage, explains Wexler, cameras will “look” not only at the road but also at the driver and recommend rest if he starts tiring.
"There are endless uses for a computer able to analyze and understand a picture it takes," he says. "Tons of cameras are installed globally, and no one is checking the pictures they take. The data from these cameras are sent to the cloud, saved there and then analyzed. It's an expensive, problematic process. We want to put a small 'brain' in all these cameras to discard the 99% of the frames that are uninteresting and send to the cloud for further analysis only the few that contain something important worth looking at."
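The "small brain" Wexler describes can be illustrated with a toy sketch. This is not Intel's implementation; it simply keeps a frame only when it differs enough from the last frame that was forwarded, which is the simplest form of discarding uninteresting footage. Frames are modeled as flat lists of grayscale pixel values, and the `threshold` is an arbitrary illustrative number.

```python
# Illustrative sketch only (not Intel's system): an in-camera filter that
# forwards a frame to the cloud only when its content has changed enough.
from typing import List


def frame_difference(prev: List[int], curr: List[int]) -> float:
    """Mean absolute pixel difference between two same-size grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)


def filter_frames(frames: List[List[int]], threshold: float = 10.0) -> List[int]:
    """Return the indices of frames worth sending onward.

    The first frame is always kept; each later frame is kept only if it
    differs from the last *kept* frame by at least `threshold`.
    """
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if frame_difference(frames[kept[-1]], frames[i]) >= threshold:
            kept.append(i)
    return kept


# Four tiny "frames" of 4 pixels each; only the real scene change survives.
frames = [[0, 0, 0, 0], [1, 0, 0, 1], [50, 50, 50, 50], [50, 51, 50, 50]]
print(filter_frames(frames))  # → [0, 2]
```

A real camera would of course use a learned model rather than raw pixel differences to decide what is "interesting," but the pipeline shape is the same: a cheap local test discards the bulk of the stream before anything leaves the device.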
How do you teach a machine to see independently, without programming it mechanically to distinguish between objects? Intel employs a technique called Deep Learning, part of a bigger branch of science known as Machine Learning, which is concerned with giving computer systems the ability to learn autonomously.
"We train our machine," explains Wexler. "We take millions of pictures from the Internet and tell the computer: Here are 1,000 pictures representing a certain object, like a chair. In this way we build a model that the machine eventually learns. Some training processes are done with human supervision and some without. At the end of the day, there's no person teaching the machine; rather, the machine learns by itself, and that's how it knows, for example, to read signs or to identify pedestrians."
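The idea of learning a model from labeled examples, rather than hand-writing rules, can be shown with a deliberately tiny sketch. This is a nearest-centroid classifier, far simpler than the deep networks Wexler describes, and the 2-D "image features" below are invented for illustration; the point is only that the rule for telling a chair from a table is fit from examples, not programmed.

```python
# Toy illustration of supervised learning (not Intel's method): fit one
# centroid per label from labeled feature vectors, then classify by
# nearest centroid. No chair/table rule is ever written by hand.
from typing import Dict, List, Tuple


def train(examples: List[Tuple[List[float], str]]) -> Dict[str, List[float]]:
    """Learn the mean feature vector (centroid) of each label."""
    sums: Dict[str, List[float]] = {}
    counts: Dict[str, int] = {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        for i, v in enumerate(features):
            sums[label][i] += v
        counts[label] += 1
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}


def predict(model: Dict[str, List[float]], features: List[float]) -> str:
    """Assign the label whose centroid is nearest in squared Euclidean distance."""
    def dist(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))


# Hypothetical training set: chair examples cluster low, tables cluster high.
training = [([1.0, 1.2], "chair"), ([0.8, 1.0], "chair"),
            ([4.0, 3.8], "table"), ([4.2, 4.0], "table")]
model = train(training)
print(predict(model, [0.9, 1.1]))  # → chair
```

Deep learning replaces these hand-picked two numbers per image with features the network discovers on its own from raw pixels, which is exactly the shift Singer and Ronen describe below.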
Gadi Singer, general manager of Intel Israel Development Center, says this work method is dramatically different from previously standard techniques in software development. "Until now the computer world was programmed – people wrote programs that told computers exactly what to do," says Singer. "Teaching a machine introduces a completely new dimension. You program the machine's ability to learn through examples and trials, and you don't program the results. It's also the way people learn."
Ronny Ronen, director of Microarchitecture Research at Intel Israel Development Center, adds that such a method works well mainly when analysis of visual information is required. "Therefore, the subject of machine vision is at the forefront," he says. Ronen says his dream is an application that will identify and name flowers in photographs, which seems trivial but does not yet exist. Previous methods, he says, tried to define certain characteristics in advance – for instance, requiring a computer to find eyes, a nose and a mouth in order to identify a face. In Deep Learning, the machine must decipher the image without such predefined characteristics, so that it understands on its own what it sees rather than relying on rules or conditions a programmer defined. The greatest challenge for a machine, he says, is extracting the relevant shreds of information from huge amounts of data.
Conversing with computers
Machine vision is a developing research field in which academic researchers are making breakthroughs, so companies in the field, which employ workers with advanced degrees in Israel and abroad, are cooperating with academia. "A new algorithm is found every month, and there are scientific breakthroughs. It's a dynamic field," says Wexler.
Intel Israel has collaborated with the Technion and Hebrew University since 2012, investing $15 million. The company recently held a conference on the field in Israel, with participants representing global companies such as Google, Facebook and Baidu.
It’s no coincidence Intel picked Israel for its center. Israel has long been at the forefront of machine vision, thanks to its military industry. These companies for years have needed to develop advanced optics for drones and robots so they could independently identify their surroundings and enemy objects. Likewise, companies like Rafael and Israel Aircraft Industries collaborate with Israeli academia.
For Intel, entering these new areas is an opportunity to reestablish its former status as a trailblazer. Intel is identified today more than anything with the microprocessors installed in desktop and laptop computers. In the last decade it missed the opportunity to become a leading player in processors for the mobile devices that conquered the consumer world, losing out to Qualcomm. So Intel sees great importance in developing tools that will restore its status and make it a significant player in the next wave of consumer computing – wearable computing that relies on information obtained from various sensors, including cameras.
Yet companies developing applications for a world in which a machine is required to understand reality are not waiting for Intel’s developments. Mobileye and other firms, mainly those developing smartphone apps, developed their systems based on existing technology. They rely on existing hardware and offer programming improvements. Intel intends to improve machines’ basic ability to make complex analyses of their surroundings and implant these abilities within the chips.
Thus, the work done by the Israeli team that Wexler leads represents not only an evolution of computing ability but also a dramatic conceptual change in computing: a transition from an age in which man learned to speak the machine's language to an age in which the machine learns to speak human language.
"Sixty years ago the first computers worked in machine language," says Singer. "My lecturer at the Technion said one day we would all speak in zeros and ones," he recalls. "In the coming years, we'll see progress in perceptual computing and a transition to a language more natural for humans. When we look at the future, we see computers watching, listening, and speaking human language."
Until this vision is fulfilled, we’ll have to continue picking our own cornflakes in the supermarket and recognizing acquaintances on the street. People have done it for many years; we can continue a few more.