Interactive machine learning has become an increasingly popular topic in recent years, as it often significantly reduces the overhead users face when solving real-world problems with difficult datasets. It allows us to gracefully combine automated services with direct user manipulation.

Semi-supervised learning can be seen as a primary technique of interactive machine learning. For example, active learning interactively selects difficult-to-classify learning cases, presents them to users, and receives the users' labels for them. These cases usually have high prediction entropy, so their labels lead to the largest improvement in the model. Another example is reinforcement learning, which rewards good learning results and penalizes bad ones based on user feedback.
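To make the active learning step concrete, the following is a minimal uncertainty-sampling sketch, assuming a scikit-learn-style classifier; the synthetic data and query budget are illustrative, not taken from this work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_queries(clf, X_unlabeled, n_queries=5):
    """Pick the unlabeled cases whose predicted class distribution
    has the highest entropy, i.e. the hardest to classify."""
    proba = clf.predict_proba(X_unlabeled)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[-n_queries:]  # indices to show the user

# Usage: fit on the small labeled pool, then query the user.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 4))
y_labeled = rng.integers(0, 2, 20)
X_unlabeled = rng.normal(size=(200, 4))

clf = LogisticRegression().fit(X_labeled, y_labeled)
query_idx = select_queries(clf, X_unlabeled)
# The user's labels for X_unlabeled[query_idx] are added to the
# labeled pool and the model is retrained.
```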

Unlike existing semi-supervised approaches, which only allow users to work with individual cases at a low level, we design a framework that allows users to improve the overall data representation, learns from their interactions, and presents the learning results back to users as additional guidance. This framework is developed based on ideas from visual text analytics.

Existing systems for interactive visual analysis usually adopt topic modeling techniques: the original features are reduced to a lower-dimensional topic space, in which documents are grouped. One type of system, including UTOPIAN and iVisClustering, visualizes the topics so that users can adjust the topic-term distribution at the term granularity. We work in the medical domain instead, focus experts on a natural high-level task (grouping medical images), and encode their interactions as constraints to improve the learning results. Another type of system, including LSAView and iVisClassifier, involves document-level interactions but requires users to change the parameters of the algorithms. In contrast, our system updates the underlying topic model based on experts' natural manipulations of the images.
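As a minimal sketch of the pipeline such systems build on, the snippet below reduces documents to a topic space with scikit-learn's NMF and groups them there; the toy corpus and parameter values are illustrative assumptions, not details of the cited systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

docs = ["skin lesion with irregular border",
        "benign mole with regular pigmentation",
        "irregular pigmentation and asymmetry"]

tfidf = TfidfVectorizer().fit_transform(docs)   # document-term matrix
nmf = NMF(n_components=2, init="nndsvd")
doc_topic = nmf.fit_transform(tfidf)            # documents in topic space
topic_term = nmf.components_                    # topic-term distribution
# Grouping happens in the reduced topic space rather than on raw features;
# term-level interaction would edit topic_term, document-level interaction
# would act on doc_topic.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_topic)
```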

Image grouping in knowledge-rich domains is challenging, since domain knowledge and expertise are key to transforming image pixels into meaningful content. Manually marking and annotating images is not only labor-intensive but also ineffective. Furthermore, most traditional machine learning approaches cannot bridge this gap due to the absence of experts' input. We therefore present an interactive machine learning paradigm that allows experts to become an integral part of the learning process.

This paradigm is designed to automatically compute and quantify interpretable groupings of dermatological images. The computational evolution of an image grouping model, its visualization, and expert interactions thus form a loop that continually improves the grouping. In our paradigm, dermatologists encode their domain knowledge about the medical images by grouping a small subset of them via a carefully designed interface; our learning algorithm then automatically incorporates these manually specified connections as constraints for re-organizing the whole image dataset.
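One plausible encoding of such expert groupings is as pairwise must-link and cannot-link constraints; the sketch below shows this under that assumption, and the function and image names are hypothetical rather than the paper's API.

```python
from itertools import combinations

def constraints_from_groups(groups):
    """Turn expert-specified groups of image ids into must-link pairs
    (same group) and cannot-link pairs (different groups)."""
    must_link, cannot_link = [], []
    for group in groups:
        must_link.extend(combinations(group, 2))
    for g1, g2 in combinations(groups, 2):
        cannot_link.extend((a, b) for a in g1 for b in g2)
    return must_link, cannot_link

# Example: a dermatologist groups a small subset of images.
expert_groups = [["img_01", "img_07"], ["img_03", "img_12", "img_19"]]
ml, cl = constraints_from_groups(expert_groups)
# These pairs can then constrain the re-organization of the full dataset,
# e.g. through a constrained clustering or metric-learning step.
```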
