We will not get interpretable AI if we continue to model it after the human brain.
I believe artificial intelligence can be explainable: it can tell us why it made a decision, even if not how it arrived at it.
Because let's be real. Are humans even interpretable?
Partly. Humans are delusional and inconsistent. We are influenced by our emotions. Oftentimes, we find ourselves defending our obviously bad takes. We like to be right. We want to be right.
But it is this very delusion that fuels our ambition.
Today, we can get AI models to explain their reasoning by telling them to think step by step. But they will still make mistakes. After all, spewing bullsh*t is easy; justifying your claim is much harder.
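If you've never seen what "telling a model to think step by step" actually looks like, it's nothing magical. Here's a minimal sketch in Python; the model call is a placeholder, so swap in whatever API you actually use.

```python
# A minimal sketch of "think step-by-step" (chain-of-thought) prompting.
# The model call itself is a placeholder -- plug in any LLM client you like.

def build_cot_prompt(question: str) -> str:
    """Wrap a question with a step-by-step instruction."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, then state your final answer "
        "on its own line, prefixed with 'Answer:'."
    )

def call_model(prompt: str) -> str:
    """Placeholder: swap in a real LLM API (hosted or local)."""
    raise NotImplementedError("Plug in your model client here.")

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "If a train leaves at 3 pm and travels 120 km at 60 km/h, when does it arrive?"
    )
    print(prompt)  # The prompt is just text -- the "reasoning" it elicits can still be wrong.
    # reply = call_model(prompt)
```

Notice that nothing here constrains the model to be right. The step-by-step text it produces is a justification after the fact, not a window into the computation.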
So instead of chasing interpretability, we should aim for AI that fact-checks itself, grounds its reasoning, and stops hallucinating.
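What would self-checking look like in practice? Here is one rough sketch, not any particular product's pipeline: retrieve sources, draft an answer, ask the model to verify the draft against those sources, and refuse if it can't. The helpers `call_model` and `retrieve_sources` are hypothetical placeholders for whatever LLM and retrieval backend you actually use.

```python
# A rough sketch of a grounded, self-checking answer loop.
# `call_model` and `retrieve_sources` are hypothetical placeholders.

def call_model(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM client here.")

def retrieve_sources(query: str) -> list[str]:
    raise NotImplementedError("Plug in your search / retrieval backend here.")

def answer_with_self_check(question: str, max_attempts: int = 2) -> str:
    sources = retrieve_sources(question)
    context = "\n".join(sources)
    for _ in range(max_attempts):
        # Draft an answer constrained to the retrieved sources.
        draft = call_model(
            f"Using ONLY these sources:\n{context}\n\nAnswer the question: {question}"
        )
        # Second pass: check the draft against the same sources.
        verdict = call_model(
            "Does every claim in this answer follow from the sources? "
            f"Reply SUPPORTED or UNSUPPORTED.\n\nSources:\n{context}\n\nAnswer:\n{draft}"
        )
        if verdict.strip().upper().startswith("SUPPORTED"):
            return draft  # grounded enough to return
    # Refusing beats hallucinating.
    return "I could not verify an answer against the sources."
```

The exact loop matters less than the principle: the model's claims get checked against something outside the model before they reach you.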
Because at the end of the day, we don't need it to think like us. It just needs to think reliably.