As far as I know, automated reasoning over propositional logic can already be done with an existing solver or a logic programming language like Prolog (it's nothing new). I don't know whether it has been tried, but I don't think it would make sense to train an ML model for propositional logic, since the problem is entirely symbolic (as opposed to statistical): the right answer can be computed deterministically, for instance by exhaustively checking truth assignments.
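To illustrate what "deterministically" means here, below is a minimal sketch of a brute-force truth-table entailment check in plain Python. The helper name `entails` is just for this example, not a reference to any particular solver or library:

```python
from itertools import product

# A minimal sketch of deterministic propositional reasoning:
# brute-force truth-table check that a set of premises entails a conclusion.
# Formulas are plain Python predicates over a dict of variable assignments.

def entails(premises, conclusion, variables):
    """Return True if every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # found a counterexample
    return True

# Example: from "p -> q" and "p", conclude "q" (modus ponens).
premises = [
    lambda a: (not a["p"]) or a["q"],  # p -> q
    lambda a: a["p"],                  # p
]
conclusion = lambda a: a["q"]          # q

print(entails(premises, conclusion, ["p", "q"]))  # True
```

Of course a real solver is far more efficient than enumerating all assignments, but the point is that no learning is needed to get the correct answer.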
The question of logical reasoning from text, as in the proposed example, is a bit different, because it involves a step of representing the text formally. I think it could make sense to train a model which converts text into a formal logic proposition (and back), as sketched below. The logical reasoning itself should still be done with a tool meant specifically for that, imho. Note that question answering doesn't involve any logical reasoning, even if it might look that way to a user: as far as I know, a QA system learns patterns that match a type of question to its corresponding answer; the system is completely oblivious to the meaning.
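To make that pipeline concrete, here is a toy sketch that reuses the `entails` helper from above. A hand-written template matcher stands in for the learned text-to-logic converter (`parse_sentence` is a hypothetical name for this example, not an existing API), and the deterministic checker does the actual reasoning:

```python
import re

# A toy stand-in for a learned text-to-logic model: it maps a couple of
# restricted sentence patterns to propositional formulas (Python predicates).
def parse_sentence(text):
    """Convert a restricted sentence pattern into a propositional formula."""
    m = re.fullmatch(r"if (\w+) then (\w+)", text.lower())
    if m:
        p, q = m.groups()
        return lambda a, p=p, q=q: (not a[p]) or a[q]   # p -> q
    return lambda a, v=text.lower(): a[v]               # bare atom

# Text -> logic, then hand the reasoning to the deterministic checker.
premises = [parse_sentence("if rain then wet"), parse_sentence("rain")]
conclusion = parse_sentence("wet")
print(entails(premises, conclusion, ["rain", "wet"]))   # True
```

The hard (and learnable) part is the conversion step; once the propositions exist, the reasoning itself stays symbolic.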