Automatically analyzing video data is important for applications such as monitoring and data collection in transportation scenarios. Machine learning techniques are often employed to mine traffic video for interesting events. Typically, learning-based methods require a significant amount of training data provided via human annotation. For instance, to train a vehicle detector, a user supplies the system with images of vehicles along with their respective annotations. The system then learns to identify vehicles in future data; however, such systems usually need large amounts of training data and therefore cumbersome human effort. In this research, we propose an active learning method in which the system interactively queries the human for annotation of only the most informative instances. In this way, learning can be accomplished with less user effort without compromising performance. Our system is also computationally efficient, making it feasible for real data mining tasks on traffic video sequences.
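The abstract does not say how "most informative" is measured; a common criterion in active learning is uncertainty sampling, where the system queries the instance its current model is least confident about. The following is a minimal sketch under that assumption, with a toy linear scorer standing in for the (unspecified) vehicle classifier; all function names, the feature representation, and the update rule are hypothetical illustrations, not the paper's method.

```python
import math

def predict_proba(x, w):
    # Hypothetical linear scorer: estimated probability that feature
    # vector x depicts a vehicle (stand-in for the actual classifier).
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def most_informative(unlabeled, w):
    # Uncertainty sampling: pick the instance whose predicted
    # probability is closest to 0.5, i.e. where the model is least sure.
    return min(unlabeled, key=lambda x: abs(predict_proba(x, w) - 0.5))

def active_learning_loop(unlabeled, oracle, w, rounds=5, lr=0.5):
    # Query the human (oracle) only on the most informative instances,
    # then take a simple gradient step on each newly labeled example.
    unlabeled = list(unlabeled)
    for _ in range(min(rounds, len(unlabeled))):
        x = most_informative(unlabeled, w)
        unlabeled.remove(x)
        y = oracle(x)  # human annotation: 1 = vehicle, 0 = not
        p = predict_proba(x, w)
        w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w
```

Because every query is spent on an instance near the decision boundary, the model can reach a given accuracy with far fewer annotations than random labeling, which is the effort saving the abstract claims.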