Current machine learning applications and algorithms show promise for producing autonomous systems that perceive, learn, predict, and act on their own. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions, algorithmic paths, and actions to human users. The purpose of this chapter is to apply explainable artificial intelligence (XAI) to black-box models, using the U.S. Geological Survey's LANDFIRE Existing Vegetation Type (EVT) product as an example. This chapter also demonstrates tools developed to help scientists and analysts understand and trust vegetation-type predictions, thereby streamlining development of the LANDFIRE EVT product.
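As a minimal sketch of the kind of post hoc XAI analysis this chapter applies to black-box models, the example below computes permutation feature importance for a random forest classifier on synthetic data. The predictor names (elevation, slope, and two spectral bands), the synthetic labels, and the model choice are illustrative assumptions only; they are not the chapter's actual LANDFIRE EVT workflow or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for LANDFIRE-style predictors (hypothetical names below)
X = rng.normal(size=(500, 4))
# Synthetic vegetation-type label driven mostly by the first two predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a "black-box" classifier whose internals are not directly interpretable
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: measure the drop in held-out accuracy when each
# predictor is shuffled, a model-agnostic way to explain which inputs matter
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in zip(["elevation", "slope", "band_4", "band_5"],
                          result.importances_mean):
    print(f"{name}: {mean_imp:.3f}")
```

Because permutation importance treats the model purely as a prediction function, the same analysis applies unchanged to any black-box vegetation-type classifier, which is the property that makes such tools useful for building analyst trust.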