In this section, Hofstadter talks in detail about how Jumbo works. In the previous section, he described the process of building different "gloms" to form a word. This time around, he talks about how those gloms actually get built and how they are judged.
Hofstadter uses the terms "happiness" and "temperature" to gauge the likelihood that these gloms are building toward a viable answer. I find this a very interesting idea and would be curious to find out what kind of algorithm Hofstadter uses to determine the "happiness" of a glom. While these ideas are easy to understand in the abstract, they seem like they would be extremely complex to actually code into a working gauge.
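Just to make my own curiosity concrete, here is a tiny sketch of what such a gauge *might* look like. To be clear, this is not Hofstadter's actual algorithm: the bigram table, the glom_happiness function, and the temperature formula are all stand-ins I made up, with the only real idea being that low happiness should translate into high temperature.

```python
# Toy sketch, NOT Hofstadter's code: rate a glom's "happiness" by how
# English-like its letter pairs look, then turn the average happiness of
# all the gloms into a single "temperature" for the whole structure.

# Tiny hand-picked table of letter-pair plausibility (higher = more word-like).
# These values are invented purely for illustration.
BIGRAM_SCORE = {
    "pa": 3, "an": 3, "ng": 2, "gl": 3, "lo": 3, "os": 3,
    "ss": 3, "po": 1, "on": 2, "la": 3, "as": 2,
}

def glom_happiness(glom: str) -> float:
    """Average plausibility of the letter pairs inside one glom."""
    pairs = [glom[i:i + 2] for i in range(len(glom) - 1)]
    if not pairs:
        return 0.0
    return sum(BIGRAM_SCORE.get(p, 0) for p in pairs) / len(pairs)

def temperature(gloms: list[str]) -> float:
    """Low happiness -> high temperature (structure is a candidate for undoing)."""
    avg = sum(glom_happiness(g) for g in gloms) / len(gloms)
    return round(10 - 3 * avg, 2)   # arbitrary scaling, just for illustration
```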
Hofstadter also talks about how a glom with low happiness gets "undone": the program steps backwards through its choices, looking for different branches it might have taken. This too seems like a simple idea in principle but a complicated one in implementation. I would be very interested to see exactly how Jumbo decides that "pang-loss", "pong-lass", and "pan-gloss" all have different "temperatures" (I take a guess at that below). Overall, this whole project sounds like one that is simple to explain in theory but very difficult to convert from theory to code; probably more so than any other program I have seen.
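Continuing the made-up sketch from above, here is how those three chunkings could end up with different temperatures, and how a structure that is "too hot" might get its least happy glom undone. Again, the threshold and the undoing rule are my own guesses, not how Jumbo actually works.

```python
# Comparing the three chunkings with the toy gauge defined above:
for gloms in (["pang", "loss"], ["pong", "lass"], ["pan", "gloss"]):
    print(gloms, "temperature:", temperature(gloms))
# With my invented scores, pan-gloss comes out coolest and pong-lass hottest.

# A crude version of "undoing": if the structure is too hot, break the
# least happy glom back into single letters so they can regroup differently.
def undo_weakest(gloms: list[str], threshold: float = 3.0) -> list[str]:
    if temperature(gloms) <= threshold:
        return gloms                      # happy enough, leave it alone
    weakest = min(gloms, key=glom_happiness)
    rest = [g for g in gloms if g != weakest]
    return rest + list(weakest)           # free the letters for regrouping

print(undo_weakest(["pong", "lass"]))     # "pong" gets broken back into letters
```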