Last month it was reported that one of Google’s self-driving cars was involved in an accident with a bus. The crash occurred when the parked Google car decided that there was enough room to pull out in front of the bus and that the bus would slow or yield. I thought the whole point of self-driving cars was that they were supposed to be safer than humans. But what happens to the right of way? In this case the bus clearly had the right of way, yet the machine decided to rely on the good nature of the bus driver. This simply will not do.

My bank doesn't rely on my good nature that I’ll pay back a million pound loan. It wants to see a credit history, a business plan and a guarantee that it’s going to get its moolah back. So why has a car been allowed to rely on goodwill? The fact that Google has amended its programming so that its cars trust drivers of large vehicles less is not the answer at all. If self-driving cars won't yield when they should, how are they any different to a human driver? In a world where we increasingly mistrust other people, why has a computerised car been allowed to think the opposite? These cars need to yield when they should in order to be safe. Google, it just doesn't make sense.