BMW 3-Series (E90 E92) Forum > Landmark patent decision may be the death knell of self-driving automobiles
05-08-2020, 01:18 PM | #1
Major
Rep: 14,095 | Posts: 1,336
Drives: Porsche 993
Join Date: Mar 2020
Location: Dog Lake, South Frontenac, Ontario, Canada
Landmark patent decision may be the death knell of self-driving automobiles
Motor Mouth: Let the war between man and machine begin
by DAVID BOOTH | 3 HOURS AGO
https://driving.ca/features/feature-...-machine-begin

If it were not for the coronavirus and the boredom it has imbued, I would have missed it. You probably did. It was, as they used to say in the newspaper biz, "buried" way in the back of the news section, rating, in the most generous of journals and websites, barely three paragraphs: "EPO publishes grounds for its decision to refuse two patent applications naming a machine as inventor" is hardly the stuff of viral tweets, now is it?

Yet, in a decision that I suspect will someday be landmark — and in this I make no claim to prescience, only paranoia — the U.S. patent office recently rejected two patents by artificial intelligence. Yes, those of you already parsing the grammar of that last sentence, you're reading that right: I did specifically say "by" and not "for." On April 27, the U.S. government officially rejected two patents in which an artificial intelligence system called Dabus was listed as the inventor. Though physicist Stephen Thaler, the creator of Dabus, said he had nothing to do with the inventions — namely, a food container easy for robots to grasp and some sort of unique warning light — the U.S. Patent and Trademark Office ruled that only "natural persons" can be recognized as inventors. It mirrors a similar rejection of Dabus' claims by the U.K.'s Intellectual Property Office and, according to the BBC, follows such a "surge in AI-driven filings" to the European Patent Office that "the World Intellectual Property Organisation (WIPO) has started a consultation on this issue and is due to continue the discussion at a session in mid-May, with the outcome expected to influence future IP policy."

"Good God, Dave," I hear you caterwauling. "You're killing us. What the hell does robot-friendly Tupperware have to do with cars?"

Well, getting quickly to the point: a few years ago, the French justice system held its annual #nuitdudroit — quite literally "the night of the law" — which allows lawyers and laypeople alike to tackle the important legal questions of the day. In 2018, the Paris Court of Appeal's public forum debated the future case — as in, the year 2041 — of a car accident that killed 50 people and seriously injured 200 more. And hopefully you've sussed where this is going by now; the car in question was autonomous, and the legal point being debated was whether the artificial intelligence system driving the car was responsible for all that mayhem, and/or whether the people who designed it should be charged.

Understand that this debate — and it was truly something to behold in its sincerity — created no legal precedent, its conclusions having no standing in any court of law. But the French take their democracy a little more seriously than most, and the mock trial in question was presided over by none other than Valery Turcey, the real president of the Paris Court of Appeal. Real lawyers argued the prosecution and defense positions, jurors actually adjudicated, and testimony was given.

Now, it doesn't matter what the court's decision was — for what it's worth, it ruled that the artificial intelligence was guilty of vehicular manslaughter and sentenced it to rehabilitation, literally "une rééducation algorithmique." Nor does it matter that all the victims were fictitious, the jurors unsworn, and the decision non-binding. What really matters is that serious legal minds are already trying to wrap their heads around who will be responsible for the automobiles of the future.

More importantly — and this is why the U.S. Patent office's decision is, I'm sure, sending chills up inventors' spines — engineers realize machine learning is absolutely essential to fully autonomous automobiles, yet no one, not engineers nor their managers, wants to take responsibility for AI's actions. Let that sink in for a moment. Artificial intelligence is seemingly essential for the self-driving car of the future, but who in their right mind is going to accept responsibility for an algorithm that can morph into something they didn't invent?

Automakers are already struggling with that dilemma of responsibility. For one thing, no programmer can possibly imagine every scenario an autonomous automobile might someday face. Non-machine-learning robots can only act on what we humans teach them, and it would take a Keith Richards-sized hit of lysergic acid to imagine a broom-wielding woman in an electric wheelchair chasing a duck across an intersection. Yet, according to Google, that is exactly what one of its self-driving cars faced in Austin, Tex. Indeed, instances of self-driving cars balking at situations they have not been programmed for are unsurprisingly common and, despite our best efforts, will continue ad infinitum. We will never be able to claim a car is totally autonomous — i.e., that it can be driven anywhere under any circumstance — unless it can react to new situations without outside input.

On the other hand, no one seems to want to take responsibility for a car they no longer own. The question of insurance is difficult enough to reconcile, but what of the moral hazards that come with programming a car to make split-second life-and-death decisions? Such situations may seem theoretical, their odds astronomical, but someday, somewhere, a robotic car will have to make a decision of equivalencies: given no safe alternative, whom do I kill — the owner of the car I am controlling, or the three children in the back of the school bus? The grandmother to the left, or the two toddlers to the right? Rhetorical you may think these questions are, but MIT took this classic "Trolley Problem" seriously enough to conduct an online survey to determine where our moral driving compasses point. And I can assure you that automakers — an exception might have to be made for Tesla — are indeed struggling with these very questions.

So here's the kicker. If our future self-driving cars are to operate only with the algorithms we code into their ECUs, then those decisions will have to be premeditated. That is, someone will have to program them with their moral compass: whose life to save, and whose not to. If, on the other hand, an AI'ed automobile can "learn" to make those decisions on its own, those programmers — and the car companies they work for — are off the hook. Now do you get why engineers might be eager to have patents granted to artificial intelligence?
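The premeditation the article describes, a programmer writing the moral rule down in advance, can be made concrete with a toy sketch. Everything below (the function name, the harm weights, the scenarios) is invented purely for illustration; it is not any automaker's actual logic:

```python
# Hypothetical sketch of a "premeditated" decision rule for the article's
# trolley-style dilemma. The weights ARE the moral decision: someone had to
# write them down in advance, which is exactly the point being made.

OCCUPANT_WEIGHT = 1.0    # invented weight: harm to the car's own occupants
PEDESTRIAN_WEIGHT = 1.0  # invented weight: harm to people outside the car

def choose_path(paths):
    """Pick the path with the lowest hard-coded harm score.

    Each path is a dict like:
        {"label": ..., "occupants_at_risk": n, "pedestrians_at_risk": m}
    """
    def harm(path):
        return (OCCUPANT_WEIGHT * path["occupants_at_risk"]
                + PEDESTRIAN_WEIGHT * path["pedestrians_at_risk"])
    return min(paths, key=harm)

# The article's example: swerve (risking the owner) vs. stay the course
# (risking the three children behind the school bus).
options = [
    {"label": "swerve", "occupants_at_risk": 1, "pedestrians_at_risk": 0},
    {"label": "stay",   "occupants_at_risk": 0, "pedestrians_at_risk": 3},
]
print(choose_path(options)["label"])  # -> "swerve" under these weights
```

Note that changing either weight changes who the car "chooses" to endanger, which is why the article argues no programmer or automaker wants to own these numbers; a machine-learning system that tunes them itself leaves no human author to hold responsible.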
05-08-2020, 01:20 PM | #2
Major
Rep: 14,095 | Posts: 1,336
Drives: Porsche 993
Join Date: Mar 2020
Location: Dog Lake, South Frontenac, Ontario, Canada
I, for one, am not in any big hurry to share the road with self-driving cars. Frankly, I don't see the rush to get them on the road, and I think this is good news.