Sometimes the rational choice on the road, even the good choice, means breaking the law. But if driverless cars are released to the public, something that feels closer every year, would they be able to make that same decision? It’s an interesting ethical question.

For example, a driver could see an obstruction in the road, such as tread from another vehicle’s ruptured tire or a branch from a fallen tree. If the road is clear, the driver can swerve around the obstruction and avoid a crash. Human drivers do this all the time, even when it means crossing the center line.

However, a driverless car may be programmed never to cross that line, especially in a no-passing zone, to avoid collisions with oncoming traffic. Faced with the same obstruction, could the car override its programming and steer around the object, or would it slam on the brakes and stop in the middle of the road, waiting for the way to be cleared?

Strictly following the law means the car would, in fact, stop. If the obstruction were thrust suddenly into its path, that could mean a very abrupt stop, and trailing cars could run into the back of the driverless vehicle. While the safer option may have been simply to swerve around the object, the car’s programming could force a choice that leads to injury or death.
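To make the dilemma concrete, here is a minimal sketch of the kind of rigid rule described above. It is purely illustrative: the Obstacle type, the choose_maneuver function, and the rule set are hypothetical and do not reflect any real vehicle’s software.

```python
# Hypothetical sketch of a strictly law-abiding decision rule.
# All names here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Obstacle:
    in_lane: bool          # obstruction blocks the car's own lane
    oncoming_clear: bool   # the opposite lane is visibly empty

def choose_maneuver(obstacle: Obstacle, no_passing_zone: bool) -> str:
    """Pick a response for a car that never breaks a traffic law."""
    if not obstacle.in_lane:
        return "continue"
    # A human driver might swerve across the center line when the
    # oncoming lane is clear; this rule set forbids that outright.
    if no_passing_zone:
        return "emergency_brake"   # stop in-lane, risking a rear-end crash
    if obstacle.oncoming_clear:
        return "swerve_around"     # a legal pass where permitted
    return "emergency_brake"

# The tire-debris scenario: obstruction ahead, inside a no-passing zone.
print(choose_maneuver(Obstacle(in_lane=True, oncoming_clear=True),
                      no_passing_zone=True))
# -> "emergency_brake", even though swerving might have been safer
```

Under these rules, the car brakes hard in its own lane even when the oncoming lane is visibly empty, which is exactly the trade-off at issue.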

Now, it’s fair to note that the trailing car could still be at fault for following too closely to stop in time, but that doesn’t change the ethical question of whether causing an avoidable accident is ever a good outcome, regardless of fault.

If these automated vehicles do end up on Wisconsin roads, people must understand all of the legal ramifications.

Source: The Atlantic, “The Ethics of Autonomous Cars,” Patrick Lin, accessed Nov. 6, 2015