Summary and Conclusions
This section summarizes the conclusions reached at the LMC1 Workshop, which took place at Ai's research facility on 15-17 March 2002.
Although Ai ceased to operate as a commercial organization after LMC1 was announced and before the submission deadline, we managed not only to keep the Challenge alive and bring it to a full conclusion, but also to hold the LMC1 Workshop and to transform the Learning Machine Challenge from a one-time event into an annual one. This achievement is particularly remarkable because it rests entirely on voluntary work. Moreover, we recruited two of the top performers (Curtis Huttenhower and Shenghuo Zhu) to join the steering committee; they agreed to exclude themselves from competing in LMC2. The following is a summary of the conclusions derived from LMC1 and the corresponding modifications to be implemented in LMC2.

Scope

LMC1 was announced as an attempt to solve a very general problem: to perform well in a number of rule-based systems (“games”) whose rules are not disclosed. The charter of the competition was also very general: “To promote original research in Artificial Intelligence”.

Nevertheless, Ai’s motivation in announcing the LMC was in fact more focused and restricted: to identify and promote machine learning algorithms that could perform well in language acquisition tasks. The rationale behind the extremely broad scope of LMC1, despite Ai’s more focused interest, was this: restricting the competition to language acquisition problems would naturally have attracted people and organizations who specialize in language-related research, whereas we felt that much might be gained by importing general learning algorithms, possibly developed for problems very different from language acquisition, and testing them in a language scenario.

This strategy proved frustrating and misleading to some of the participants, who ignored (perhaps with good reason) the bias towards language games and focused on non-language problems such as Roshambo and Tron. The clearest example of such frustration was the use of “examples” in some of the contest games: games that provided the correct response for turn n at the beginning of the input for turn n+1.

The LMC1 workshop discussed this issue thoroughly, so that LMC2 will be much clearer and more explicit about its precise motivation and scope. It was agreed that LMC2 will be restricted to single-player language games only, while retaining the basic LMC infrastructure so that future competitions can focus on other scenarios, such as 2-player games. This conclusion requires the creation of an additional level in the LMC protocol, providing the flexibility to employ a specialized protocol on top of the general, lower-level LMC1 protocol; a minimal sketch of the layering appears below.
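To make the layering concrete, here is a minimal Python sketch of the idea: a general base level that merely moves turn payloads, with a specialized language-game level defined on top of it. Every class and method name here is our own illustration and is not part of the actual LMC specification.

class BaseProtocol:
    """Lower level: the general LMC1-style turn transport."""

    def send(self, payload: bytes) -> None:
        raise NotImplementedError

    def receive(self) -> bytes:
        raise NotImplementedError

class InMemoryTransport(BaseProtocol):
    """Toy loopback transport standing in for the real engine/player link."""

    def __init__(self) -> None:
        self._queue = []

    def send(self, payload: bytes) -> None:
        self._queue.append(payload)

    def receive(self) -> bytes:
        return self._queue.pop(0)

class LanguageGameLayer:
    """Specialized level for single-player language games (the LMC2 case).

    A future competition (e.g. 2-player games) could define a different
    layer over the same base protocol without touching the lower level.
    """

    def __init__(self, transport: BaseProtocol) -> None:
        self._transport = transport

    def send_symbol(self, symbol: str) -> None:
        self._transport.send(symbol.encode("utf-8"))

    def receive_symbol(self) -> str:
        return self._transport.receive().decode("utf-8")

# Loopback usage: the player talks to the game only through the
# specialized layer; the base transport never changes.
layer = LanguageGameLayer(InMemoryTransport())
layer.send_symbol("a")
assert layer.receive_symbol() == "a"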

Algorithms

LMC1 was attended by approximately 90 participants. We were delighted to have players from a variety of machine learning disciplines, ranging from neural networks and associative learning to evolutionary and genetic algorithms.

The LMC1 workshop dedicated one full day to presentations and discussions about the algorithms employed by the participants. We found that most of the presented players shared two main characteristics:

(1) The algorithms were all based on some variation of a decision tree, constructed over the course of the game, which accumulated information about the flow of the game and about the various successful and failed paths.

(2) Most of the algorithms employed a number of heuristics that competed for the “right” to provide the response. In other words, instead of using a single method, the player employed several different algorithms, with some sort of top-level decision mechanism grading the various algorithms and picking the one to use in each particular turn. A sketch combining both characteristics follows below.
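The following Python sketch is a toy combination of both characteristics: a tree-like record of the game history, and two heuristics competing for each turn under a top-level arbiter that grades them by past success. It is not any contestant's actual code; the heuristics, scoring rule, and data structures are assumptions chosen purely to illustrate the pattern.

import random
from collections import defaultdict

class HistoryTree:
    """Records contexts seen so far and what followed them."""

    def __init__(self):
        # context (tuple of recent symbols) -> counts of the next symbol;
        # a flattened stand-in for the decision trees described above.
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, context, symbol):
        self.counts[tuple(context)][symbol] += 1

    def predict(self, context, alphabet):
        seen = self.counts.get(tuple(context))
        if not seen:
            return random.choice(alphabet)
        return max(seen, key=seen.get)

class Player:
    """Several heuristics compete; the best-scoring one answers each turn."""

    def __init__(self, alphabet, context_len=3):
        self.alphabet = alphabet
        self.context_len = context_len
        self.tree = HistoryTree()
        self.history = []
        self.heuristics = {
            "tree": lambda: self.tree.predict(
                self.history[-self.context_len:], self.alphabet),
            "repeat": lambda: (self.history[-1] if self.history
                               else random.choice(self.alphabet)),
        }
        # Top-level arbiter state: one running score per heuristic.
        self.scores = {name: 0 for name in self.heuristics}
        self.last_guesses = {}

    def next_move(self):
        # Every heuristic offers a response; the arbiter returns the
        # response of the heuristic with the best record so far.
        self.last_guesses = {name: h() for name, h in self.heuristics.items()}
        best = max(self.scores, key=self.scores.get)
        return self.last_guesses[best]

    def observe(self, symbol):
        # Grade each heuristic on whether it would have been right,
        # then grow the history record with the new observation.
        for name, guess in self.last_guesses.items():
            if guess == symbol:
                self.scores[name] += 1
        self.tree.record(self.history[-self.context_len:], symbol)
        self.history.append(symbol)

# One round: the player guesses, then learns the actual symbol.
player = Player(alphabet=["a", "b"])
guess = player.next_move()
player.observe("a")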

We actually put the second observed principle to use: the engine running as HAL’s brain on the Ai site was upgraded to employ a number of different heuristics, each offering a response for each turn. The choice of the response to be used is governed by the “degree