Visual System

Coordinate System

Quick, draw a two-dimensional graph. Did you put the x-axis on the bottom, increasing to the right, with the y-axis on the left, increasing upwards? You sure didn't draw it with the y-axis increasing downwards. Now draw a graph with a natural center point. Yeah, you get the picture.

Canonical (Lisp) ACT-R uses a computer-screen-based coordinate system: +x to the right, +y down. While the slot names are the same (for now), the coordinates in jACT-R are retinotopic visual angles in degrees: (0,0) is at the center, +x to the right, +y up.
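A minimal sketch of the difference, assuming a hypothetical display geometry (the class, method names, screen size, and pixels-per-degree constant are illustrative, not part of the jACT-R API): screen pixel coordinates with the origin at the top-left and +y down are converted to center-origin visual angles with +y up.

```java
// Illustrative conversion from screen pixels (origin top-left, +y down)
// to jACT-R-style retinotopic visual angles (origin at center, +y up).
// SCREEN_W, SCREEN_H, and PIXELS_PER_DEGREE are assumed values.
public class Retinotopic {
  static final double SCREEN_W = 1024, SCREEN_H = 768; // pixels
  static final double PIXELS_PER_DEGREE = 35;          // assumed display geometry

  /** x in degrees: 0 at screen center, positive to the right. */
  static double toDegreesX(double pixelX) {
    return (pixelX - SCREEN_W / 2) / PIXELS_PER_DEGREE;
  }

  /** y in degrees: 0 at screen center, positive up (sign flipped from screen +y down). */
  static double toDegreesY(double pixelY) {
    return (SCREEN_H / 2 - pixelY) / PIXELS_PER_DEGREE;
  }
}
```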

Recycled Locations

From an informal survey of modelers, I found that almost none of us use visual-location chunks for anything other than encoding some object. Because of this limited use, and because thousands of these chunks can be created over the lifetime of a model, jACT-R actually recycles visual-locations.

There is at most one visual-location for each possible retinotopic position (as defined by the visual field size and resolution). Its x and y slots are fixed, but the remaining slot values are mutable by the system. These mutable slot values have a relatively limited lifespan: they are valid only after a visual search, until the next visual search is started. In this way the explosive growth of visual-locations is eliminated, which is particularly important when operating for long periods in a truly embodied environment.
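The recycling scheme can be sketched roughly like this (an illustrative model, not jACT-R's actual implementation; the class names and slots are assumptions): one chunk per (x, y) position, with the non-positional slots cleared whenever a new search begins.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of visual-location recycling: one immutable
// (x, y) chunk per retinotopic position; the other slots are mutable
// and become stale at the start of each new visual search.
public class VisualLocationPool {
  public static class VisualLocation {
    public final int x, y;        // fixed retinotopic position
    public String kind, color;    // mutable slots, valid only for one search
    VisualLocation(int x, int y) { this.x = x; this.y = y; }
    void reset() { kind = null; color = null; }
  }

  private final Map<Long, VisualLocation> pool = new HashMap<>();

  /** Always returns the same chunk for a given position (recycling). */
  public VisualLocation get(int x, int y) {
    return pool.computeIfAbsent(((long) x << 32) | (y & 0xffffffffL),
        k -> new VisualLocation(x, y));
  }

  /** Called when a new visual search starts: old slot values are discarded. */
  public void beginSearch() {
    for (VisualLocation loc : pool.values()) loc.reset();
  }
}
```

Because the pool is bounded by the number of retinotopic positions, chunk growth no longer scales with how long the model runs.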


Canonical ACT-R marks FINSTs at the visual-location level. This makes sense as long as the visual scene is limited. However, if you have multiple objects at the same location (due to visual resolution limits or overlap), you can easily miss objects, and you can end up in a situation where some objects can never actually be encoded.

To prevent this, jACT-R assigns FINSTS at the object level. In simple scenes, the behavior is exactly the same. In more complex scenes (with overlapping objects), this allows the visual system to differentiate between attended and unattended objects at the same location. The semantics are exactly the same for making a visual-location request.
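A toy sketch of the object-level scheme (illustrative only; the class and method names are assumptions): FINSTs are keyed by object identity, so two objects that share one retinotopic location can still be distinguished as attended versus unattended.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of object-level FINSTs. Marking attention on
// object identities (rather than on locations) means a second object
// at an already-attended location is still seen as unattended.
public class Finsts {
  private final Set<String> attendedObjects = new HashSet<>();

  /** Mark this object as attended. */
  public void attend(String objectId) { attendedObjects.add(objectId); }

  /** Query attention for a specific object, independent of its location. */
  public boolean isAttended(String objectId) { return attendedObjects.contains(objectId); }
}
```

With location-level FINSTs, attending one object at a position would implicitly mark every overlapping object as attended; here each object carries its own flag.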

The only incompatibility this introduces is with the visual-location buffer query. Canonical ACT-R supports ?visual-location> attended {t|new|nil}. jACT-R does not support this query at all. I'm looking at implementing it in the future, but it will be based on the attended status of all the objects at that location.

If you require this query, contact me and I'll see about bumping up its priority.

Movement Tolerances

Canonical ACT-R uses the movement tolerance to allow the model to encode a visual object after it has moved from the location returned by the visual search. jACT-R also uses this tolerance for object tracking: if the object is moving too fast, it will exceed the tolerance. When this occurs, the track is lost and the visual buffer state is set to error, but the chunk is maintained in the buffer. (I'm still trying to figure out whether it should be removed from the buffer or not.)
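The tolerance check described above can be sketched as follows (names and structure are assumptions for illustration, not jACT-R's code): each update compares the object's displacement against the tolerance; exceeding it loses the track and flags the error state, while the chunk itself is left in place.

```java
// Illustrative sketch of the movement-tolerance check during tracking.
// Exceeding the tolerance loses the track and sets the error flag,
// but does not remove the tracked chunk.
public class Tracker {
  private double lastX, lastY;
  private final double toleranceDegrees;
  public boolean error;

  public Tracker(double x, double y, double toleranceDegrees) {
    this.lastX = x;
    this.lastY = y;
    this.toleranceDegrees = toleranceDegrees;
  }

  /** Returns true while tracking succeeds; sets error when the object outruns the tolerance. */
  public boolean update(double x, double y) {
    double displacement = Math.hypot(x - lastX, y - lastY);
    if (displacement > toleranceDegrees) {
      error = true; // visual buffer state: error; the chunk stays in the buffer
      return false;
    }
    lastX = x;
    lastY = y;
    return true;
  }
}
```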

Object tracking

So if I've attended to an object, why do I have to explicitly tell the visual system to follow it? When something you're looking at moves, it's hard not to follow it. In jACT-R, all attended objects are automatically tracked. The only benefit of the explicit object-tracking mechanism is that it stuffs the updated visual location of the object into the visual-location buffer.

If the parameter EnableStickyAttention is true, attended objects will remain attended as long as the object is in the visual field. No amount of movement will shake it.
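A sketch of the sticky-attention rule (the parameter name EnableStickyAttention is from the text; everything else here is an illustrative assumption): with the parameter on, remaining in the visual field is the only condition for staying attended; with it off, the movement tolerance also applies.

```java
// Illustrative sketch of the EnableStickyAttention behavior. When
// sticky attention is on, no amount of movement breaks attention;
// only leaving the visual field does.
public class StickyAttention {
  public boolean enableStickyAttention = true;

  /** Does an attended object stay attended after it moves? */
  public boolean stillAttended(boolean inVisualField, boolean withinTolerance) {
    if (enableStickyAttention) return inVisualField; // movement never shakes it
    return inVisualField && withinTolerance;         // otherwise tolerance applies
  }
}
```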


All perceptual/motor modules are already asynchronous.