The confluence of sports and technology raises novel legal and ethical questions. Consider the data used to evaluate athletes’ performance and conditioning: genetic testing is an advanced but intrusive way to determine health risks athletes may face, including risks of which they themselves are unaware.
This raises the question of whether it is permissible to administer genetic tests to determine the best training regime for a particular professional athlete. What would happen should the team or another employer fire them as a result of discovering a potentially fatal health issue?
Dov Greenbaum, director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Interdisciplinary Center in Herzliya, says the institute has been cooperating with a laboratory at Stanford University, which turned to it for help with legal questions. He says the university seeks personal genetic information about the school’s hundreds of student athletes that could affect their medical risks as well as their training and conditioning. U.S. law prohibits employers from discriminating against employees on the basis of genetic information, but college athletes are not generally recognized as employees and thus do not enjoy these protections.
“We ran a study, and our answer to Stanford was that while student athletes are not protected as employees are, there are many ethical and legal problems in this area that have no simple solution,” says Greenbaum. “Therefore, we advise them to run the [genetic] tests only with regard to specific issues that are directly connected to athletic concerns, and not to test whether these athletes will develop Parkinson’s disease or find out that their fathers are not in fact their fathers.” Furthermore, the athletes need to give full permission and need to be fully aware of the meaning of the test, and they cannot be charged for it if they aren’t interested.
Another issue is the use of face-recognition technology on spectators at sporting events. “Even if spectators want to remain anonymous and for no one to know where they are and at what time, the technology identifies them as being present in the soccer stadium, and they will begin to receive announcements related to the game and for the purchase of related merchandise,” says Greenbaum. Do I, as a spectator, have a right to anonymity? Should data related to my location be available? Greenbaum says that most of these questions are not simple and have yet to be answered.
Working the system
The Meitar Institute doesn’t only focus on sports. One of the hottest areas that Greenbaum is involved in is autonomous driving. Many believe that self-driving vehicles will revolutionize daily life, with implications for consumer behavior and public health, among other areas. Demand for cars will drop, as will the number of fatal road accidents and, consequently, the number of donor organs available for transplant.
Greenbaum is writing a book, scheduled for publication next year, about the ethical questions for society and the law that can be expected with the widespread introduction of autonomous vehicles.
He describes a study carried out by Meitar in 2016, together with the Transportation and Road Safety Ministry, that considered how traffic laws will have to be modified to accommodate the introduction of self-driving cars.
“Would the driver of an autonomous vehicle still get a ticket for driving while holding a cellphone? If the car hits someone, who is responsible, the one who programmed the car or its owners? The conclusion is that many laws will need to change. Israel can be an excellent laboratory for the integration of autonomous cars, because it has no borders, unlike the United States, for instance, in which every state has different laws, and it would be absolutely possible to control which models of car will be introduced and when,” Greenbaum says.
Many ethical questions arise, one of which relates to the so-called trolley problem, introduced by the British philosopher Philippa Foot in 1967. In its original form, a runaway trolley is moving toward a group of five workers on the track. You control a switch that would divert the trolley to a track with one person on it. In both cases, the people on the tracks are unable to move. You can do nothing and allow the trolley to kill the five people on the main track, or pull the lever and divert the trolley to the track with one person. Which is the most ethical option?
In the same way, says Greenbaum, the self-driving car must be programmed to make decisions in the case of an unavoidable accident. Should it shift trajectory to prevent hitting five people at the cost of hitting a single person, thus saving four lives but intentionally taking one? And what should it decide if the potential victim is a child?
MIT Media Lab, for example, created a platform called Moral Machine to gather opinions on how autonomous vehicles should decide in such situations. The eventual goal is to use this data as a basis for studying decision-making in machines.
The widespread use of autonomous cars will affect other issues as well, Greenbaum explains. The need for building new roads may fall as the result of an anticipated decline in congestion. Driving instructors will become obsolete, as will parking lots: Some cars will return home after each trip.
Tax revenue will also be affected. In the era of autonomous vehicles, couples will need only one car, reducing a major source of sales tax. Governments will also issue fewer fines for parking and driving violations, and collect less revenue from them.
Organ donation, too, will be affected. “Autonomous cars will be electric, for the sake of better control of the vehicle. So not only will there be less air pollution, but many fewer people are forecast to die in road accidents, and therefore there will be significantly fewer donated organs. So, what do we do? Do we approve organ purchases? This new situation will also demand more enforcement against pedestrians: knowing that the car will stop for them, they will jaywalk more frequently, creating a mess that will offset the benefits. Just as drivers cautioned by Waze answer ‘I am not driving,’ people will find ways to work the system. We will need to change the law and be aggressive with regard to pedestrians who exploit the situation,” Greenbaum says.
Greenbaum holds a number of degrees, including a bachelor’s in biology and economics from New York’s Yeshiva University, a law degree from the University of California, Berkeley, and a doctorate in bioinformatics from Yale University’s genetics department.
He grew up in Canada and immigrated to Israel eight years ago. In Israel, he worked for several years at firms specializing in patent law. Four years ago he connected with businessman Zvi Meitar, who died in 2015, and began to direct the institute, which conducts studies in areas including virtual reality, financial technology, robotics, the internet, artificial intelligence and automated vehicles, and publishes its findings in academic journals.
One of the most interesting studies, conducted in 2017 together with IBM, was in the field of artificial intelligence and law. The context of the study was a famous ruling in the United States regarding the use of music by Prince. In 2007, an American named Stephanie Lenz got into trouble with Universal Music Corp. after she uploaded a 29-second video to YouTube in which two small children are running around a kitchen to Prince’s “Let’s Go Crazy.” After the company demanded that the video be removed, YouTube did so. But Lenz teamed up with the Electronic Frontier Foundation, an advocacy organization for freedom of expression on the internet, and won a court ruling that her use of the music fell under the category of fair use.
Lenz demanded damages from Universal for its unjustified demand to remove the content. A federal appeals court ruled in 2015 that Lenz was entitled to seek damages and that holders of creative rights who demand that YouTube remove work uploaded by a third party must consider in advance, in good faith, whether the use is permitted under the fair-use exemption. In June of this year, Universal informed Lenz, whose child in the video is now 12, that they had reached a settlement.
Each fair-use question is decided on its own facts, which raises the question of whether it is possible to quantify fair use and predict in which instances it will be found one way or the other. The Meitar Institute study with IBM tried to reach a conclusion by surveying all of the legal rulings in fair-use cases, so that if someone demands that YouTube remove a video, the site would have an algorithm to clarify whether it must comply. “We say that there is not enough big data on law. There are maybe 200 or 300 rulings in all of the United States that deal with this subject, and this is not enough for artificial intelligence,” says Greenbaum.
At the institute they considered a broader question relating to artificial intelligence: Is it possible to “dispense” with lawyers by replacing them with computers? Greenbaum directed me to the website www.willrobotstakemyjob.com, which looks at the chance of occupations disappearing. According to the website, the chance that lawyers will disappear is exceedingly low: 3.5%. Taxi drivers and chauffeurs, on the other hand, have an 89% chance of becoming obsolete. In the case of journalists (writers), the chance is 11%. The most secure occupations are members of the clergy, with a negligible chance of 0.81% of disappearing, and psychologists, with a chance of only 0.423%.
Who’s responsible for the drone?
Smartphones create an enormous amount of data that reveals many details about us and can serve as the basis for policy and design changes. Take, for example, location data on users of public transportation: it can tell us whether there are enough buses for the people waiting and whether there is a need to open new routes.
Greenbaum says the institute has considered these issues for the Transportation Ministry, responding to a request to address the legality of such technology. The researchers raised questions of privacy and noted that if data is collected only from smartphones, those who don’t have them are not represented. “The answer was that it isn’t sufficiently clear whether it is actually legal, and thus it is not likely to occur.”
Drones also raise legal questions. “We are interested in how to develop a law that will distinguish between people buying and playing with a drone on the one hand, and companies using drones commercially on the other. Drones make deliveries, and therefore there are also questions regarding who is responsible, how drones are insured, whether their airspace can be restricted, to what altitude they can be flown, and so on. The current law is intended for airplanes, and thus there is a broad field of law that needs to be tailored to drones.”