Asimov's stories themselves illustrate exploits of his own Three Laws.
But how can you fix that? Embedding a full robot-oriented legal system in their little brains does not seem to be the answer either.
I think we have to admit that robots will be far from perfect. They will make mistakes. They will stumble and fall. They will misjudge situations. Just like we humans do.
Not to mention that creating a brain with the Three Laws ingrained would be an order of magnitude harder than creating a brain in the first place.