Hold a Conversation
Alan M. Turing, one of the founders of computer science, made a bold prediction in 1950: Machines would one day speak so fluently that we wouldn't be able to tell them apart from humans. Alas, robots (even Siri) haven't lived up to Turing's expectations -- yet. That's because speech recognition is very different from natural language processing -- what our brains do to extract meaning from words and sentences during a conversation.
Initially, scientists thought it would be as simple as plugging the rules of grammar into a machine's memory banks. But hard-coding a grammatical primer for any given language has turned out to be impossible. Even writing rules for the meanings of individual words has proved daunting. Need an example? Think "new" and "knew," or "bank" (a place to put money) and "bank" (the side of a river). It turns out humans make sense of these linguistic idiosyncrasies by relying on mental capabilities developed over many, many years of evolution, and scientists haven't been able to break down these capabilities into discrete, identifiable rules.
As a result, many robots today base their language processing on statistics. Scientists feed them a huge collection of text, known as a corpus, and then let their computers break the longer text into chunks to find out which words often come together and in what order. This allows the robot to "learn" a language based on statistical analysis. For example, to a robot, the word "bat" accompanied by the word "fly" or "wing" refers to the flying mammal, whereas "bat" accompanied by "ball" or "glove" refers to the piece of sporting equipment.
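The co-occurrence idea above can be sketched in a few lines of code. This is a toy illustration, not how any production system works: the four-sentence corpus and the two hand-picked "sense" word lists are invented for the example, and real systems train on millions of sentences.

```python
from collections import Counter

# A toy corpus -- real systems are trained on millions of sentences.
corpus = [
    "the bat spread its wings and began to fly at dusk",
    "a bat can fly using echolocation to hunt insects",
    "he swung the bat and the ball sailed over the glove",
    "she dropped the glove and picked up the bat and ball",
]

# Count how often each word appears in the same sentence as "bat".
cooccurrence = Counter()
for sentence in corpus:
    words = sentence.split()
    if "bat" in words:
        cooccurrence.update(w for w in words if w != "bat")

# Hypothetical sense inventory: context words that signal each meaning.
senses = {"animal": {"fly", "wing", "wings"}, "baseball": {"ball", "glove"}}

def guess_sense(context_words):
    """Pick the sense whose signal words co-occur with 'bat' most often."""
    scores = {
        sense: sum(cooccurrence[w] for w in context_words if w in signals)
        for sense, signals in senses.items()
    }
    return max(scores, key=scores.get)

print(guess_sense({"fly", "wings"}))   # animal
print(guess_sense({"ball", "glove"}))  # baseball
```

Counting alone gets surprisingly far: with no grammar rules at all, the statistics of which words keep company with which let the program guess the right meaning of "bat" from its neighbors.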