‘Artificial Intelligence and What Computers Still Don’t Understand’

[Gary Marcus writing about the failure of artificial intelligence][1]:

> In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess.

This is an interesting post. Marcus and Levesque are focusing on the fact that most AI systems are designed to game a particular test, rather than trying to actually achieve intelligence.

Sounding smart, instead of *being* smart.

I’m not sure the problem is the researchers as much as the funders. If the “awards” for AI are a particular test that is easy to game, then the funders want those awards for their mantles — thus you get AI that games those tests, instead of getting actual AI. That’s annoying, but I don’t see how it changes unless you find someone who is more Apple-like with their spending on R&D (i.e., not giving two shits about what others think, and instead trying to make the best system they can).

With Siri, Apple is trying that, but Siri isn’t so much designed to be AI as it is designed to be a verbal interface to your iOS device right now. In that context Apple is doing OK, but in the context of AI, Siri is piss-poor.

For instance, I just asked iOS 7 Siri: “What does my schedule look like for this week?”

What’s the expected answer? What’s the desired answer?

I expected Siri to list out the appointments I had in my calendar, but what I really desire is to know if this is a busy week or not. Siri can’t answer that, because “she” wasn’t designed to answer that query.

Siri told me I had eight appointments, but is that a lot or a little? I don’t know, and apparently Siri doesn’t care to know either. The better solution to that query would be for Siri to look at:

– Current appointments as compared to historical weeks.
– Email count in my inbox (unread, and emails that sound actionable).
– Tasks in my to-do app of choice.

If you compile all that information, then you stand a chance at spitting out: “You have quite a few more appointments scheduled this week — including one on Thursday that keeps you out of the office for the day. Luckily, your inbox and to-do list are fairly light compared to last week.”
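The comparison itself is simple once the data is in hand. Here’s a minimal sketch in Python of that kind of relative summary, assuming the appointment, email, and task counts have already been pulled from the calendar, inbox, and to-do app (the names and thresholds are illustrative, not any real Siri API):

```python
from dataclasses import dataclass


@dataclass
class WeekSnapshot:
    """Counts pulled from calendar, inbox, and to-do app for one week."""
    appointments: int
    unread_emails: int
    open_tasks: int


def busyness_summary(this_week: WeekSnapshot, historical_avg: WeekSnapshot) -> str:
    """Describe this week's load relative to a historical average,
    rather than just reporting raw counts."""
    if this_week.appointments > historical_avg.appointments:
        schedule = "more appointments than usual"
    elif this_week.appointments < historical_avg.appointments:
        schedule = "fewer appointments than usual"
    else:
        schedule = "a typical number of appointments"

    # The inbox and to-do list only matter relative to the baseline too.
    light_elsewhere = (this_week.unread_emails <= historical_avg.unread_emails
                       and this_week.open_tasks <= historical_avg.open_tasks)
    workload = ("your inbox and to-do list are fairly light"
                if light_elsewhere
                else "your inbox and to-do list are heavier than usual")

    return f"This week you have {schedule}, and {workload}."


# Eight appointments against a baseline of five: busy week, light inbox.
print(busyness_summary(WeekSnapshot(8, 5, 3), WeekSnapshot(5, 10, 6)))
```

The point isn’t the code — it’s that the raw count (“eight appointments”) is meaningless without the baseline to compare it against.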

That’s helpful, accurate, and *meaningful* information. I know what last week was like, so if Siri compares tasks and emails to last week, and appointments to history — that’s great information and easy for me to understand. Highlighting things that keep me out of the office all day is equally great, because I *really* need to know those things and likely would stress out if I had forgotten.

To me, that’s AI: the prediction of what my *desired* answer is, and the useful summary of the historical data that most humans would internalize. There are a lot of companies out there that want to build this, but I don’t trust them. They want my data running through their servers — with Siri, this could potentially all be done on the device, with anonymous meta-data sent out for quick analysis.

[1]: http://www.newyorker.com/online/blogs/elements/2013/08/why-cant-my-computer-understand-me.html
