Decision support systems, automated agents that apply complex algorithms to decision making, are often effective but opaque; simple tools, by contrast, are transparent and predictable but limited in their usefulness. Tool creators have responded either by increasing the transparency and customizability of complex algorithms or by adding new heuristics to simple ones. Unfortunately, requiring user input or attention imposes cognitive bandwidth demands that can hurt performance in time-sensitive operations. On the other hand, enlarging the scope of an algorithm may make it more complex and thus less predictable. Ideally, designers of information systems would know intimately how the complexity and transparency of algorithms affect human cognition. However, not all of the factors that affect decision making in human-agent interaction (HAI) are fully understood.
In this work, we conduct a quantitative investigation into the role of inter-task cognition in decision support systems. We ran several experiments with different task parameters to quantify the relationship between human cognition and the availability of system explanation and control under varying degrees of algorithm error. A novel measurement framework quantifies human cognitive and decision-making behavior in terms of which information tools are used, which information is incorporated, and overall decision success. Key findings are that (1) a simple, reliable, domain-independent profiling test can predict human decision behavior in the context of HAI; (2) correct user beliefs about information systems mediate the effect of system explanations on adherence to advice; and (3) explanations from, and control over, complex algorithms increase trust, satisfaction, interaction, and adherence, but they also lead humans to form incorrect beliefs about the data.