Many studies have examined basic visual search tasks in which a subject has to find a given object within a picture or scene. During such a task the subject's eye movements are recorded and later analyzed to better understand how the brain directs the eyes for a given task. For "find-the-object" search tasks of this kind, the question arises whether it is possible to determine what a subject is looking for solely from the recorded eye movement data, without prior knowledge of the search target.
In this work an eye tracking experiment was designed and conducted. The experiment presented random-dot pictures to the subjects, each consisting of squares in different colors, and the task was to find a target pattern of 3x3 squares within the picture. In the first part of the experiment the squares were black and white; in the second part gray was added as a third color. The subjects' eye movements were recorded during each trial.
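As a minimal sketch of this kind of stimulus (not the original experiment code), the following Python snippet generates a random grid of colored squares and embeds a random 3x3 target pattern at a random position; the grid dimensions and color names are assumptions chosen only for illustration.

```python
# Illustrative sketch: build a random-dot search picture with an embedded
# 3x3 target pattern. Grid size and color labels are assumed values.
import random

def make_stimulus(grid_w=40, grid_h=30, colors=("black", "white"), target_size=3):
    """Create a random grid of colored squares and embed a random target pattern."""
    # Random target pattern of target_size x target_size squares.
    target = [[random.choice(colors) for _ in range(target_size)]
              for _ in range(target_size)]
    # Random search picture.
    grid = [[random.choice(colors) for _ in range(grid_w)]
            for _ in range(grid_h)]
    # Embed the target at a random position so it is guaranteed to be present.
    top = random.randrange(grid_h - target_size + 1)
    left = random.randrange(grid_w - target_size + 1)
    for r in range(target_size):
        for c in range(target_size):
            grid[top + r][left + c] = target[r][c]
    return grid, target, (top, left)

if __name__ == "__main__":
    # Second part of the experiment: gray as an additional color.
    grid, target, pos = make_stimulus(colors=("black", "white", "gray"))
    print("target embedded at (row, col):", pos)
```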
Dedicated software was developed to convert and analyze the recorded eye movement data, to apply an algorithm that attempts to infer the target pattern from the gaze data, and to generate reports that summarize the analyzed data and the algorithm's results.
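The abstract does not describe the conversion and analysis steps themselves; purely as a hedged illustration, the sketch below shows one plausible way to map fixation coordinates onto the square grid of a stimulus and tally the 3x3 color patches around each fixation. The function name, the pixel size of a square, and the fixation list are hypothetical and not taken from the developed software.

```python
# Illustrative sketch only: not the algorithm developed in this work.
# Maps pixel fixation positions onto grid cells and counts the 3x3 color
# patches centered on the fixated squares. square_px and the fixation
# coordinates are assumed values.
from collections import Counter

def tally_fixated_patches(fixations, grid, square_px=20, patch=3):
    """Count how often each 3x3 color patch occurs around a fixation."""
    counts = Counter()
    grid_h, grid_w = len(grid), len(grid[0])
    for x, y in fixations:                      # fixation position in pixels
        col, row = int(x // square_px), int(y // square_px)
        # Take the patch centered on the fixated square if it lies fully inside the grid.
        top, left = row - patch // 2, col - patch // 2
        if 0 <= top <= grid_h - patch and 0 <= left <= grid_w - patch:
            key = tuple(tuple(grid[top + r][left + c] for c in range(patch))
                        for r in range(patch))
            counts[key] += 1
    return counts

# Hypothetical usage with the stimulus sketch above:
# counts = tally_fixated_patches([(120.5, 80.2), (410.0, 260.7)], grid)
# print(counts.most_common(1))
```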
A discussion of these reports shows that the developed algorithm works well for two colors across the different square sizes used for the search pictures and target patterns. For three colors, however, the target patterns turn out to be too complex for a holistic search of the pictures, and the algorithm gives poor results. Evidence is presented to explain these results.