Chris Stoltz reports in with this:
Hey Ben:
A friend of mine, Mark, took part in an experiment at the University of Calgary while he was an undergrad. He was asked to listen to a CD of a computer-generated, monosyllabic language while sketching pictures that appeared on a computer screen. (Mark said it sounded like slow, simple Chinese: monosyllabic, with maybe 2-3 very exaggerated tones.) After half an hour, once Mark had copied six or seven pictures in crude sketches, the experimenter took the sketches, threw them in the trash, and said, “OK, now we are going to play you some of the computer language back. I want you to simply tell me if what you hear sounds right or wrong.”
Mark then heard about 30 snippets of the computer language and said “right” or “wrong” to each. Mark said he didn’t think at all; things just “felt” right or wrong. When he finished, the experimenter told him that he was about 95% correct. The experimenter then told him that these results were consistent across all genders, ages, etc., and that the effect fell off with time (e.g. without reinforcement, people who did the test phase the next day scored around 50%, the same as chance).
The experimenter said that he was working on proving aspects of unconscious language acquisition, and his hypothesis was that some features of language could be acquired even without knowing meaning. This was proven sixty years ago with phonological properties (e.g. even an English baby knows that a sound like “pf” is not English), but this guy was trying to see if people could pick up grammar stuff unconsciously.
Chris
