Tesla says Autopilot was active during fatal crash in Mountain View

(Image credit: Don McCullough)

Last week's fatal Tesla Model X crash in Mountain View, California, occurred while the vehicle had Autopilot engaged, Tesla said in a Friday blog post. "Our hearts are with the family and friends who have been affected by this tragedy," the company said. The crash claimed the life of an Apple engineer, Walter Huang, according to the Bay Area's ABC 7 News.

The vehicle ran into a concrete lane divider at high speed. The crash and a subsequent fire fully destroyed the front of the vehicle.

"The reason this crash was so severe is because the crash attenuator, a highway safety barrier which is designed to reduce the impact into a concrete lane divider, had been crushed in a prior accident without being replaced," according to Tesla. "We have never seen this level of damage to a Model X in any other crash."



Apple details tools to help developers comply with new EU data regulations


Apple on Friday unveiled a set of developer tools designed to keep app makers in line with the European Union's upcoming General Data Protection Regulation, a set of rules that grants users more control over their digital histories.

Ted Chiang on the similarities between “civilization-destroying AIs and Silicon Valley tech companies”


Ted Chiang is most widely known for writing Story of Your Life, an award-winning short story that became the basis for Arrival. In this essay for Buzzfeed, Chiang argues that we should worry less about machines becoming superintelligent and more about the machines we’ve already built that lack remorse and insight yet have the capability to destroy the world: “we just call them corporations”.

Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

As you might expect from Chiang, this piece is full of cracking writing. I had to stop myself from just excerpting the whole thing here, ultimately deciding that would go against the spirit of the piece. So just this one bit:

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse.

Ok, just one more:

The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

You should really just read the whole thing. It’s not long and Chiang’s point is quietly but powerfully persuasive.

Tags: artificial intelligence, economics, Ted Chiang