Data-intensive studies in the domain of accelerator-based High Energy Physics (HEP) have become increasingly achievable due to the emergence of machine learning combined with high-performance computing and big data technologies. In recent years, the intricate nature of physics tasks and data has prompted the use of more complex learning methods. To accurately identify physics of interest and to test conclusions against proposed theories, it is crucial that these machine learning predictions be explainable. It is not enough to accept an answer based on accuracy alone; in the process of physics discovery, it is essential to understand exactly why an output was generated. That is, completeness of a solution is required. In this paper, we survey the application of machine learning methods to a variety of accelerator-based tasks in order to understand the role that interpretability plays in this area. The main contribution of this paper is to promote the need for explainable artificial intelligence (XAI) in the future of machine learning for HEP.