



    • Author: Yumy
    • Posted: Sep 8th 2017
    A team at Northwestern University in Illinois has developed a computational model that performs at human levels on a standard intelligence test. Researchers say this represents an important step toward making artificial intelligence systems that see and understand the world as humans do.
    [Image: Sample question from the Raven's Progressive Matrices standardized test.]
    According to Northwestern Engineering's Ken Forbus, the model performs in the 75th percentile for American adults, making it better than average. The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.
    The computational model is built on CogSketch, an artificial intelligence platform developed in Forbus' laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner's structure-mapping theory.
    Forbus' work was published online this month in the journal Psychological Review.
    The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but it could potentially shrink the gap between computer and human cognition.
    While Forbus' system can be used to model general visual problem-solving phenomena, the research team specifically tested it on Raven's Progressive Matrices, a nonverbal standardized test that measures abstract reasoning. Each of the test's problems consists of a matrix with one image missing. The test taker is given six to eight answer choices and must select the one that best completes the matrix. The computational model performed better than the average American.
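    The matrix-completion task described above can be illustrated with a toy sketch. This is not CogSketch's method (which works over sketched imagery and analogical mapping); it is a minimal stand-in where each cell is a hand-made symbolic description, and the missing cell is chosen so that the bottom row exhibits the same cell-to-cell change as the complete rows. The attribute names (`shape`, `count`) are illustrative assumptions.

```python
# Toy Raven's-style matrix solver (illustrative only, not CogSketch).
# Each cell is a dict of symbolic attributes; we infer the row-wise
# transformation from a complete row and pick the choice that makes
# the incomplete row follow the same pattern.

def row_delta(row):
    """Attribute-wise change between adjacent cells in a row:
    (change in count, whether the shape stays the same)."""
    return [(b["count"] - a["count"], a["shape"] == b["shape"])
            for a, b in zip(row, row[1:])]

def solve(matrix, choices):
    """Return the choice whose completed bottom row matches the
    delta pattern of the top row (a crude analogy between rows)."""
    target = row_delta(matrix[0])
    for choice in choices:
        if row_delta(matrix[-1] + [choice]) == target:
            return choice
    return None

# 3x3 matrix, bottom-right cell missing: within each row the shape is
# fixed and the count increases by one.
matrix = [
    [{"shape": "circle", "count": 1}, {"shape": "circle", "count": 2}, {"shape": "circle", "count": 3}],
    [{"shape": "square", "count": 1}, {"shape": "square", "count": 2}, {"shape": "square", "count": 3}],
    [{"shape": "star", "count": 1}, {"shape": "star", "count": 2}],  # third cell missing
]
choices = [
    {"shape": "star", "count": 2},
    {"shape": "circle", "count": 3},
    {"shape": "star", "count": 3},
]
print(solve(matrix, choices))  # {'shape': 'star', 'count': 3}
```

    Real Raven's items involve richer visual relations (rotation, overlay, progression across both rows and columns), which is where the analogical machinery described in the article comes in.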
    According to researchers, the Raven's test is the best existing predictor of what psychologists call “fluid intelligence,” or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships. The results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence.
    The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
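    One way to make the idea of relational representations concrete is to encode facts as predicate tuples and compare two descriptions by the relational patterns they share once the concrete entities are abstracted away. This is a loose, simplified nod to structure-mapping, not Gentner's actual algorithm; all names below are illustrative assumptions.

```python
# Sketch of relational representations: facts as (predicate, arg, arg)
# tuples, compared by shared relational structure. A greatly simplified
# illustration inspired by structure-mapping theory, not an implementation
# of it.

def abstract(facts):
    """Strip out concrete entities, keeping the relational skeleton,
    e.g. ('above', 'clock', 'door') -> ('above', '_', '_')."""
    return {(pred,) + ("_",) * len(args) for pred, *args in facts}

def structural_overlap(desc_a, desc_b):
    """Count relation patterns the two descriptions have in common."""
    return len(abstract(desc_a) & abstract(desc_b))

# "The clock is above the door"; "pressure differences cause water to flow."
scene = [("above", "clock", "door"),
         ("cause", "pressure_diff", "water_flow")]
# A structurally analogous description with different entities.
circuit = [("above", "resistor", "ground"),
           ("cause", "voltage_diff", "current_flow")]
print(structural_overlap(scene, circuit))  # 2
```

    The point of the toy: the two descriptions share no entities at all, yet they match relation-for-relation, which is the kind of correspondence analogical reasoning trades in.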
    According to Forbus, most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it. But recognition is only useful if it supports subsequent reasoning. The research provides an important step toward understanding visual reasoning more broadly.