This paper addresses the problem of vision-based navigation and proposes an original control law to perform such navigation. The overall approach relies on an appearance-based representation of the environment, in which the scene is defined directly in the sensor space by a database of images acquired during a learning phase. Within this context, a path to follow is described by a set of images, or image path, extracted from the database. This image path is designed to provide enough information to control the robotic system. The central contribution of this paper is the closed-loop control law that drives the robot to its desired position using this image path. This control requires neither a global 3D reconstruction nor a temporal planning step. Furthermore, the robot is not constrained to converge directly upon each image of the path, but chooses its trajectory automatically. We propose a process of qualitative visual servoing, which relaxes exact positioning to positioning within a confidence interval and thereby enlarges the convergence space. We also propose specific visual features that ensure the robot navigates within the visibility path. Simulation results show the effectiveness of this method for controlling the motion of a camera in three-dimensional environments (a free-flying camera, or a camera moving on a plane). In addition, experiments performed with a robotic arm observing a planar scene are also presented.
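
The abstract does not detail the control law itself. As background only, a minimal sketch of classical image-based visual servoing, the scheme that qualitative visual servoing generalizes, computes a camera velocity v = -λ L⁺ (s - s*) from the error between current and desired image features. Everything below (feature values, depths, gains) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized point feature
    (x, y) observed at depth Z, relating feature motion to the 6-DOF
    camera velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(s, s_star, L, lam=0.5, dt=0.1):
    """One step of the classical law v = -lam * pinv(L) @ (s - s*),
    then propagate the features with s_dot = L v (first-order model)."""
    v = -lam * np.linalg.pinv(L) @ (s - s_star)
    return s + dt * (L @ v)

# Two point features (x1, y1, x2, y2); assumed depths Z = 1.
s = np.array([0.2, 0.1, -0.15, 0.25])
s_star = np.zeros(4)  # desired feature positions
for _ in range(100):
    L = np.vstack([interaction_matrix(s[0], s[1], 1.0),
                   interaction_matrix(s[2], s[3], 1.0)])
    s = ibvs_step(s, s_star, L)
# The feature error decays toward zero under this law.
print(np.linalg.norm(s - s_star))
```

In this exact-positioning scheme the error is driven to zero; the qualitative variant proposed in the paper instead stops regulating a feature once it enters a confidence interval around its desired value, which is what enlarges the convergence space.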