Why does today's AI need to be explainable? Deploying AI where people can be harmed demands safety regulation, explanation, and understanding. Yet the internal decisions of most models are opaque, earning them the well-deserved label of "black box". Until AI becomes more transparent, its applications will remain limited in fields where safety and understanding are paramount, such as medicine, self-driving cars, and banking. Illuminated AI helps you debug your models by revealing the goals the network has learned and by surfacing insights from test cases. With this knowledge, you, your customers, and those who certify AI algorithms can place greater confidence in your AI. We enter a new era of introspective AI that is more trusted and safer, opening the door to broader adoption in these safety-critical fields.