
The network consists of three input layers and an intermediate layer. The three input layers (an eye-centered layer, an eye position layer and a head-centered layer) are also output layers; the final estimates of the network are read from these layers after relaxation. Each input layer is a topographic layer of $N$ units indexed by position $i$ (or $j$, $k$), where $i$ (or $j$, $k$) $= 1, \ldots, N$. Similarly, the intermediate layer is a topographic 2D map of $N \times N$ units indexed by position $(l,m)$, where $l = 1, \ldots, N$ and $m = 1, \ldots, N$.
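For concreteness, here is a minimal NumPy sketch of this layout. The value of $N$ is an assumption (it is not specified in the text), and the array names are hypothetical; they are reused in the sketches that follow.

```python
import numpy as np

N = 20                    # units per input layer (assumed; not given in the text)

R_r = np.zeros(N)         # eye-centered layer,  units i = 0..N-1
R_e = np.zeros(N)         # eye position layer,  units j = 0..N-1
R_a = np.zeros(N)         # head-centered layer, units k = 0..N-1
A   = np.zeros((N, N))    # intermediate 2D map, units (l, m)
```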


Connection weights

      The input layers are symmetrically interconnected with the intermediate (hidden) layer, and the corresponding matrices of connection weights are denoted $W^r$, $W^e$ and $W^a$ for, respectively, the eye-centered, eye position and head-centered layers. The connection strengths between unit $i$ (or $j$, $k$) in each input layer and unit $(l,m)$ in the intermediate layer are given by

$$W^r_{lm,i} = K_w \exp\left[\frac{\cos\left(\frac{2\pi}{N}(i-l)\right)-1}{\sigma_w^2}\right], \qquad W^e_{lm,j} = K_w \exp\left[\frac{\cos\left(\frac{2\pi}{N}(j-m)\right)-1}{\sigma_w^2}\right], \qquad W^a_{lm,k} = K_w \exp\left[\frac{\cos\left(\frac{2\pi}{N}(k-l-m)\right)-1}{\sigma_w^2}\right].$$

      The parameter $\sigma_w$ represents the lateral spread: unit $i$ is strongly connected to unit $(l,m)$ if $\cos\left(\frac{2\pi}{N}(i-l)\right)$ is close to 1, that is, if $i$ is close to $l$ (modulo $N$). Note that with these connection matrices, unit $(l,m)$ in the intermediate layer is most strongly interconnected with unit $i=l$ in the eye-centered layer, unit $j=m$ in the eye position layer and unit $k=l+m$ in the head-centered layer. Unit $(l,m)$ is connected more weakly to neighboring units in each layer, with the spatial extent of these connections controlled by $\sigma_w$.
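A sketch of these connection matrices follows. The circular Gaussian form is taken from the expressions above; the $(l, m, i)$ array layout and the helper name `connection_weights` are assumptions.

```python
import numpy as np

def connection_weights(N, K_w=1.0, sigma_w=0.37):
    """Build W^r, W^e, W^a with shape (N, N, N), indexed as W[l, m, i]."""
    idx = np.arange(N)
    l = idx[:, None, None]                 # intermediate-layer row index
    m = idx[None, :, None]                 # intermediate-layer column index
    i = idx[None, None, :]                 # input-layer index (also j or k)
    g = lambda d: K_w * np.exp((np.cos(2 * np.pi * d / N) - 1.0) / sigma_w**2)
    W_r = np.broadcast_to(g(i - l), (N, N, N)).copy()   # peaks at i = l
    W_e = np.broadcast_to(g(i - m), (N, N, N)).copy()   # peaks at j = m
    W_a = g(i - l - m)                                  # peaks at k = l + m
    return W_r, W_e, W_a
```

Because the weights depend only on the cosine of the index difference, the connectivity is periodic and unit indices wrap around each layer.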


Network initialization

      Let $R^r_i(t)$, $R^e_j(t)$ and $R^a_k(t)$ denote the activity of unit $i$ (or $j$, $k$) in the eye-centered, eye position and head-centered layers, respectively, at time $t$.

      For the eye-centered position, the probability distribution of the initial activity $R^r_i(0)$, given the true position $x_r$, is an independent Poisson distribution:

$$P\left(R^r_i(0)=n \mid x_r\right) = e^{-f_i(x_r)}\,\frac{f_i(x_r)^n}{n!},$$

where the mean activity follows a circular Gaussian tuning curve with spontaneous rate $\nu$:

$$f_i(x_r) = K \exp\left[\frac{\cos\left(x_r - x_i\right)-1}{\sigma^2}\right] + \nu,$$

with $x_i$ the preferred position of unit $i$ (preferred positions are evenly spaced around the circle).

      The expressions for $P(R^e_j(0) \mid x_e)$ and $P(R^a_k(0) \mid x_a)$ are identical to $P(R^r_i(0) \mid x_r)$, except that $r$ is replaced by $e$ or $a$. Likewise, the expressions for $f_j(x_e)$ and $f_k(x_a)$ are identical to $f_i(x_r)$, except that $r$ is replaced by $e$ or $a$.

      The activity in the intermediate layer is initialized to zero: $A_{lm}(0) = 0$ for all $l,m$.
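A sketch of this initialization, assuming the Poisson form above with the rates treated directly as mean counts, and evenly spaced preferred positions $x_i = 2\pi i/N$. The stimulus values and the relation $x_a = x_r + x_e$ (implied by the $k = l + m$ connectivity) are example assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                             # as in the layout sketch above

def f(x, N, K=20.0, nu=1.0, sigma=0.40):
    """Tuning curve f_i(x): circular Gaussian of width sigma plus baseline nu."""
    x_pref = 2.0 * np.pi * np.arange(N) / N        # assumed even spacing
    return K * np.exp((np.cos(x - x_pref) - 1.0) / sigma**2) + nu

def init_layer(x, N):
    """R(0): one independent Poisson draw per unit, with mean f_i(x)."""
    return rng.poisson(f(x, N)).astype(float)

x_r, x_e = 0.5, -0.2                               # example stimulus values
R_r, R_e = init_layer(x_r, N), init_layer(x_e, N)
R_a = init_layer(x_r + x_e, N)                     # head = eye-centered + eye position
A = np.zeros((N, N))                               # A_lm(0) = 0
```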


Recurrent network evolution

      The evolution of the activities in the recurrent network is described by a set of coupled nonlinear equations. Denoting $A_{lm}(t)$ the activity of unit $(l,m)$ in the intermediate layer at time $t$, the evolution equations are written

$$L_{lm}(t) = \sum_{i} W^r_{lm,i} R^r_i(t) + \sum_{j} W^e_{lm,j} R^e_j(t) + \sum_{k} W^a_{lm,k} R^a_k(t),$$

$$A_{lm}(t+1) = \frac{L_{lm}(t)^2}{S + \mu \sum_{l',m'} L_{l'm'}(t)^2},$$

where $L_{lm}(t)$ represents a linear pooling of the activities of the three input layers. The input layers are updated in the same fashion, pooling the intermediate-layer activity back through the symmetric weights; for the eye-centered layer, for example,

$$R^r_i(t+1) = \frac{L^r_i(t+1)^2}{S + \mu \sum_{i'} L^r_{i'}(t+1)^2}, \qquad L^r_i(t+1) = C_r \sum_{l,m} W^r_{lm,i} A_{lm}(t+1),$$

and similarly for $R^e_j$ and $R^a_k$ with constants $C_e$ and $C_a$. The activation functions for $A_{lm}$ and for $R^r_i$, $R^e_j$, $R^a_k$ thus implement a quadratic nonlinearity coupled with a divisive normalization.
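One relaxation step under these equations might be sketched as follows. The update order (intermediate map first, then the input layers) is an assumption consistent with $A_{lm}(0) = 0$, and the function names are hypothetical.

```python
import numpy as np

def squash(L, S=0.1, mu=0.002):
    """Quadratic nonlinearity with divisive normalization."""
    L2 = L ** 2
    return L2 / (S + mu * L2.sum())

def step(A, R_r, R_e, R_a, W_r, W_e, W_a, C=1.0):
    """One update of the coupled network equations."""
    # L_lm(t): linear pooling of the three input layers
    L = (np.einsum('lmi,i->lm', W_r, R_r)
         + np.einsum('lmj,j->lm', W_e, R_e)
         + np.einsum('lmk,k->lm', W_a, R_a))
    A = squash(L)
    # feed the new intermediate activity back through the symmetric weights
    R_r = squash(C * np.einsum('lmi,lm->i', W_r, A))
    R_e = squash(C * np.einsum('lmj,lm->j', W_e, A))
    R_a = squash(C * np.einsum('lmk,lm->k', W_a, A))
    return A, R_r, R_e, R_a
```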


Parameters used in the simulation

$K = 20$ Hz, $\nu = 1$ Hz, $\sigma = 0.40$ radians, $K_w = 1$, $\mu = 0.002$ s and $S = 0.1$ Hz.

$\sigma_w = 0.37$ radians, $C_r = C_e = C_a = 1$ s.
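Combining the sketches above with these parameter values, a hypothetical driver loop might look as follows. The number of relaxation steps and the population-vector read-out are assumptions, not taken from the text.

```python
import numpy as np

W_r, W_e, W_a = connection_weights(N, K_w=1.0, sigma_w=0.37)
for _ in range(20):                          # relax for a fixed number of steps (assumed)
    A, R_r, R_e, R_a = step(A, R_r, R_e, R_a, W_r, W_e, W_a, C=1.0)

# read the final estimate from an output layer, e.g. by population vector
phases = np.exp(1j * 2.0 * np.pi * np.arange(N) / N)
x_r_hat = np.angle(np.sum(R_r * phases))     # estimate of x_r in (-pi, pi]
print(x_r_hat)
```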