CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds

Description: 

We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve the usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting CAST, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that the CAST family members are virtually always faster than existing methods, without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for their intuitiveness and efficiency.
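To make the idea of "context-aware" selection more concrete, the following is a minimal, illustrative sketch — not the authors' actual algorithm. It assumes a hypothetical two-stage pipeline: a 2D lasso drawn by the user first filters particles by their screen-space projection, and a local-density criterion then discards sparse background particles so that only the coherent structure the user likely intended remains. The function names, the projection, and the density threshold are all assumptions made for illustration.

```python
import math

def point_in_lasso(px, py, lasso):
    """Ray-casting point-in-polygon test for the 2D lasso outline."""
    inside = False
    n = len(lasso)
    for i in range(n):
        x1, y1 = lasso[i]
        x2, y2 = lasso[(i + 1) % n]
        # Does the lasso edge (x1,y1)-(x2,y2) cross the horizontal ray from (px,py)?
        if (y1 > py) != (y2 > py):
            xcross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < xcross:
                inside = not inside
    return inside

def lasso_density_select(points, project, lasso, radius=1.0, min_neighbors=3):
    """Hypothetical context-aware selection: keep particles whose screen
    projection lies inside the lasso AND whose local 3D neighborhood is
    dense enough -- a stand-in for inferring the intended structure."""
    # Stage 1: screen-space filter by the user-drawn lasso.
    inside = [p for p in points if point_in_lasso(*project(p), lasso)]
    # Stage 2: density filter to drop isolated "background" particles.
    selected = []
    for p in inside:
        neighbors = sum(1 for q in inside
                        if q is not p and math.dist(p, q) <= radius)
        if neighbors >= min_neighbors:
            selected.append(p)
    return selected
```

A usage sketch with an orthographic projection: a dense four-particle cluster inside the lasso is kept, while an isolated particle that also falls inside the lasso is rejected by the density stage — the kind of disambiguation that a purely geometric lasso cannot do.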

Main reference:

L. Yu, K. Efstathiou, P. Isenberg and T. Isenberg, “CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds,” in IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 886-895, Jan. 2016, doi: 10.1109/TVCG.2015.2467202.

Relevant reference:

L. Yu, K. Efstathiou, P. Isenberg and T. Isenberg, “Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input,” in IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 12, pp. 2245-2254, Dec. 2012, doi: 10.1109/TVCG.2012.217.