How would Tau deal with illogical human behaviour?


I do not fully understand how Tau would work. As far as I understand, Tau aims to be a "decentralized blockchain social network able to take the consensus of its users and update its own software in real-time". And the important part is that it is based on logic...

The basic building block would be logical AI that translates each user's worldview into logic.

But we humans do not have a logical worldview...we do not act logically. It's all fuzzy.

Humans believe the Bible to be the word of God, even though it contains statements that contradict their worldview. Christians and Muslims alike believe that murder is unjust, yet both have gone on dozens of killing sprees throughout the history of humankind. We cherry-pick logical arguments and ignore counterarguments. Feelings and logic are like oil and water: your inner logic dictates that buying a lottery ticket is stupid, yet you buy one anyway. There are hundreds of examples of contradictions inside each person's worldview. Evolution seems to show that we need to be able to tolerate contradictions in order to survive.

There is logic in human behaviour and in human-to-human consensus, but it is fuzzy. Political arguments are not always based on logic but rather on feelings. Legalizing drugs lowers drug deaths, government spending, and the influence of criminal cartels, and it lowers drug usage while making the remaining usage safer...yet it is not implemented, for moral reasons that logic itself should refute...

Can someone explain to me how logical AI would help? I can see how it would do its job in a perfectly logical world, but we are far from that. How will it deal with the fuzziness? If a user creates 1000 logical arguments for their worldview, you'll find that 100 of those lead to contradictions within that single agent itself...
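To make the contradiction problem concrete, here is a toy sketch (not Tau's actual machinery, which uses a much richer logic) of how a logic engine could flag that one agent's stated beliefs are jointly unsatisfiable. Beliefs are encoded as propositional formulas over named boolean variables; the variable names and the `consistent` helper are purely illustrative:

```python
from itertools import product

def consistent(beliefs, variables):
    """Return True if some truth assignment satisfies every belief at once.

    Brute-force truth-table search: fine for a toy example,
    exponential in the number of variables in general.
    """
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(belief(env) for belief in beliefs):
            return True
    return False

# One agent's stated beliefs:
#   1. "Murder is unjust."
#   2. "Holy war is justified."
#   3. "If holy war is justified, then murder is not unjust."
#      (since holy war involves murder)
beliefs = [
    lambda e: e["murder_unjust"],
    lambda e: e["holy_war_justified"],
    lambda e: (not e["holy_war_justified"]) or (not e["murder_unjust"]),
]

print(consistent(beliefs, ["murder_unjust", "holy_war_justified"]))  # False
```

No assignment satisfies all three statements, so the checker reports the belief set as contradictory. Detecting the contradiction is the easy part; the open question raised above is what the system should *do* once it finds one.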