To investigate large information spaces effectively, users often need navigation mechanisms that let them view information at different scales. Some tasks require frequent movements and scale changes to search for details and compare them. We present a model that predicts user performance on such comparison tasks under different interface options. A critical factor embodied in this model is the limited capacity of visual working memory, which allows the cost of visits made via fixating eye movements to be compared with the cost of visits that require mouse interaction. We test this model with an experiment comparing a zooming user interface to a multi-window interface on a multiscale pattern-matching task. Task performance times closely matched the model's predictions; however, error rates were much higher with zooming than with multiple windows. We hypothesized that subjects made more visits in the multi-window condition, and we ran a second experiment using an eye tracker to record the pattern of fixations. This revealed that subjects made far more visits back and forth between pattern locations when they could use eye movements than they did with the zooming interface. The results suggest that only a single graphical object was held in visual working memory for comparisons mediated by eye movements, reducing errors by reducing the load on visual working memory. Finally, we propose a design heuristic: extra windows are needed when visual comparisons involve patterns more complex than can be held in visual working memory.