In recent years, the interpretability of sophisticated AI and ML models has gained importance. By making models interpretable, companies aim to improve performance through greater adoption of better-performing AI systems. Interpretability has been shown to increase adoption, trust, and performance in predictable environments. However, users tend to resist algorithmic advice in inherently uncertain domains such as stock recommendation, medical care, and retail supply chain management. In this work, we empirically investigate the effect of interpretability on adoption, trust, and performance in an uncertain business domain.