Hacker News

For one, to do things differently, I'd start without trying to define 'intelligence'. Nobody can define what a game is, and still we learn, understand, and say plenty of interesting things about games. Nobody can define 'water' in a way that captures all the mental images people have when they hear that word, and still we can say lots of relevant things about water. Trying to capture something like 'intelligence' with words is a foolish endeavour.


I agree with you in general (you can't get wet from the word "water"). In my experience, though, if you're building something fuzzy, it usually turns out to be a complete waste of money with no results. Human beings seem to need a reasonably narrow scope to produce something useful. So while the team doesn't need to waste time on formally defining "intelligence" in a way that completely captures all its aspects, they do need to define, to some reasonable degree of precision, what they're building.

Without a scope definition they can't budget their funds, their time, or their human resources. Of course, if they try to define what they're building, the definition will be intimately linked to the definition of "intelligence" (assuming they claim they're building an intelligent machine). Then someone will come along and propose a counterexample demonstrating that the machine likely isn't intelligent at all, and cannot perform well on some problem where humans do spectacularly, thereby shifting the team's scope and definition. And so, they'll be back to square one.


According to this talk, intelligence is a measure of how good our ability to predict is: http://www.ted.com/talks/lang/eng/jeff_hawkins_on_how_brain_...


On how dictionaries circularly define words using other words, and how humans might learn by progressively expanding analogies, starting with simple 'axioms' of bodily sensations like up vs. down (more on other pages): http://members.cox.net/deleyd/politics/cogsci5.htm
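The circularity point is easy to demonstrate concretely. Here is a small sketch (my own toy example, not from the linked page) that models a dictionary as a graph where each word points to the words in its definition, then checks whether following definitions ever "bottoms out" in an undefined axiom (a raw sensation like "up") or just loops forever:

```python
# Toy dictionary: each word maps to the words used to define it.
# "up" has no definition -- it stands in for a raw bodily sensation,
# the kind of grounded 'axiom' the linked page describes.
toy_dictionary = {
    "big":   ["large"],
    "large": ["big"],      # deliberately circular pair
    "up":    [],           # grounded: not defined in terms of other words
    "rise":  ["move", "up"],
    "move":  ["rise"],     # another circle
}

def grounds_out(word, seen=None):
    """Return True if every definitional path from `word` reaches a
    grounded (undefined) term, False if some path is circular."""
    seen = set() if seen is None else seen
    if word in seen:
        return False       # we've looped back: circular definition
    defining_words = toy_dictionary.get(word, [])
    if not defining_words:
        return True        # an undefined 'axiom' grounds the chain
    return all(grounds_out(w, seen | {word}) for w in defining_words)

print(grounds_out("up"))    # True: already grounded
print(grounds_out("big"))   # False: big -> large -> big
print(grounds_out("rise"))  # False: rise -> move -> rise
```

Real dictionaries behave like the circular cases: every defining word is itself defined in words, so without grounded axioms the recursion never terminates.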


Wow, these three comments have just mirrored my own recent thoughts to the greatest extent I can remember. I present to you the Infinite Curiosity Loop:

http://funnylogic.com/times/txt/2009-11-infinite-curiosity-l...

And for what it's worth, the definition of intelligence has been on my mind a lot.


If you'd like to explore this subject in a structured way, there's a large library of philosophical writings on it :). I think Hilary Putnam's essay "Brains in a vat"[1] may be a nice starting point, from which you can explore both earlier and later work.

[1] http://evans-experientialism.freewebspace.com/putnam05.htm


The letter 'g' is a completely arbitrary shape. Similarly, an entire word like 'everything' is just an arbitrary shape (though subdivided into constituent organized blobs of arbitrary shape). Letters and words seem to be symbols/code that stand for something else, and that something else might be raw sensory neuron patterns or something like them (imagining an apple with your eyes shut might be a trick to trigger that raw data without needing external stimulus, such as seeing a real apple). Even the "Sesame Street" concept of up/down might require some sort of raw inner-ear balance and sight sense to experience.



