Will Knight:

The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

Later in the article:

Tom Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was.

I get the desire to know how this stuff works, and it seems important and, at the same time, not important at all. If it works, it works, and who cares how? If it saves your life because it found cancer early enough to treat, do you care how it got there?

Likewise: do you really care why Siri thinks you would like a restaurant? Or why Netflix thinks you would like another show? Not really.

Pull this thread a little more: when you do a massive calculation on a calculator, do you know how it works? Sure, there are mathematical rules, but how do you know the answer is right? Surely someone, somewhere knows it is right? Right? Do you know that for sure?

Can of fucking worms.

Posted by Ben Brooks