#1
Although I can understand the feelings behind some of this behavior, and I am as skeptical as anyone of the tech monopolies' goals and effects, being a Luddite is probably not the best way to handle the problem. Having worked in technology for the past 25 years, I know there has always been a need to stay ahead of the crushing wave you ride, to usurp the wave, to become the thing about to crush you. Typically, that means staying ahead of technology, using it to automate yourself out of your current role and into a new one. That is not an option for most people.
How can we help? It seems obvious that driverless cars can be a major enhancement over humans driving cars themselves, and I don't doubt that automation will ultimately be safer than human drivers, but how can we position technology to enhance our lives while still allowing people to have a decent life? Do we require that all automation have a human handler? Can we make that handler role worthwhile to the person asked to perform it? Will smashing the figurative looms resolve this problem? Can we provide education in a broader way to actually help those who might be displaced? Do we need a guaranteed income? Do we need to rein in the plans of the tech giants to monopolize segments? To me, it seems we need to develop a government that cares about human welfare, education, and security.
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html?comments#permid=29957464
#2
Technological advances will simply occur, and the problem is less with the technology than with the employment structure we have, one guided by corporations, with little input from the stakeholders affected by the decisions made. For all the conservative swipes at European capitalism, the stakeholder model works - here, shareholders care only about their asset values, completely divorced from the decisions being made and from human welfare.
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html?comments#permid=29957479
#3
Not to minimize the concerns of those complaining about being in an experiment they did not choose, essentially being guinea pigs, but we are subject to this on a daily basis; we have simply become inured to the risks. With the current administration's loosening of environmental standards, we are exposed to microparticles, mercury, and increased levels of other noxious and harmful substances, destroying both the planet and our health. The services people receive in their cars and homes via IoT and automation open them up to hacking risks. Doctors routinely experiment on patients, administering placebos and, in the worst cases, prescribing medications for off-label uses. The list goes on and on; we are experimented on regularly.
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html?comments#permid=29958642
#4
@david - I think you are right, but the technology cannot usurp human activity overnight. At first, the cars would be used for straightforward driving in clear traffic, but over time, both the infrastructure and the cars' technology will improve to the point that they can usurp most, if not all, human driving. It is a matter of time, but it does point to the need for some limits on its use until it can be proven safe, or at least much safer than human drivers.
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html?comments#permid=29957436:29958734
#5
@CTMD - If we needed to fall back from AI - I assume you are thinking of some kind of doomsday scenario - what makes you think driving will even be a possibility?
As for anticipation, as more driving is automated, it will also become more regular and predictable, since we won't have to deal with the irregularity and craziness of human drivers.
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html?comments#permid=29957593:29959131
#6
I was researching how self-driving cars are programmed - with a set of logical rules, a general set of heuristics, or via data mining, although the last seems a stretch at the moment. That said, I started wondering about some possibly comic results.
As an example: two cars, designed by different companies and not sharing the same set of rules, simultaneously approach a merge. Even though each could follow a rule like "left yields to right," I wonder about a race condition where car #1 requires something from car #2, car #2 requires something from car #1, and they get stuck in an endless loop of waiting.
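That mutual-wait scenario is the classic circular-wait condition for deadlock. A minimal sketch, with invented car names and a made-up "waits for" relation purely for illustration, showing how the cycle can be detected and how a tie-break rule dissolves it:

```python
# Hypothetical sketch: each entry says "this car is waiting for that car
# to act first." A cycle in this relation means nobody ever moves.
def has_deadlock(waits_for):
    """Return True if the wait-for relation contains a cycle."""
    def visit(car, seen):
        if car in seen:          # we came back to a car already on the path
            return True
        nxt = waits_for.get(car)
        if nxt is None:          # this car waits on no one; chain ends
            return False
        return visit(nxt, seen | {car})
    return any(visit(car, set()) for car in waits_for)

# Car #1 needs something from car #2, and car #2 from car #1:
merge_scenario = {"car1": "car2", "car2": "car1"}
print(has_deadlock(merge_scenario))  # True - endless mutual waiting

# A shared tie-break rule ("left yields to right") breaks the symmetry:
with_rule = {"car1": "car2"}  # car2 proceeds, car1 simply waits its turn
print(has_deadlock(with_rule))  # False
```

This is why a common convention matters: if both vendors' cars apply the same yield rule, the wait-for graph stays acyclic and the loop never forms.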
Another result, maybe not so comic, is the different programmatic 'natures' of the companies' cars. Say an Uber car was programmed more aggressively than a Waymo car; if they approached the same intersection, the Uber car might decide, "it is mine, move," and 'kill' the Waymo vehicle. At some point, standards will need to be set, unless the industry gets there first.
Anyway...
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html?comments#permid=29959367