Most of the proposed solutions to prompt injection I have seen to date involve layering on even more AI. I wrote about why I think this is a bad idea in You can’t solve AI security problems with more AI. AI techniques are probabilistic: you can train a model on a collection of previous prompt injection examples and get to a 99% score in detecting new ones… and that’s useless, because in application security 99% is a failing grade.
Source: CaMeL offers a promising new direction for mitigating prompt injection attacks
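To make the "99% is a failing grade" point concrete, here is a minimal arithmetic sketch. The detection rate and attempt counts are illustrative assumptions, not figures from the source, and it even flatters the defender by treating each attempt as an independent coin flip, when real attackers adapt their payloads to whatever gets blocked:

```python
# Illustrative arithmetic: a probabilistic filter that blocks 99% of
# injection attempts still loses to a persistent attacker, who only
# needs one prompt to slip through.

detection_rate = 0.99                    # hypothetical classifier accuracy
bypass_per_attempt = 1 - detection_rate  # chance a single attempt evades it

for attempts in (1, 10, 100, 500):
    # Probability that at least one of `attempts` tries evades the filter,
    # assuming (generously) that attempts are independent.
    p_breach = 1 - detection_rate ** attempts
    print(f"{attempts:>4} attempts -> {p_breach:.1%} chance of a successful injection")
```

Run as written, this prints a breach probability of roughly 63% after 100 attempts and over 99% after 500: a filter that merely works most of the time cannot serve as a security boundary.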