The more we rely on AI, robots and machine learning, the more work we need social scientists and human experts to do.
And the problem of employment arises in the face of the automation of production systems. So it becomes clear that areas such as the humanities and social sciences will take on an important role, because they will remain necessary.
Humans will have to distinguish themselves through their way of thinking, because on a physical level the machine is incontestably superior. It is, therefore, these four areas, all unique to us humans, that will arguably rise in value in the coming years.
AI is inevitably faster than us at processing large amounts of data, so it can detect a problem in established models. But what about a slightly more ambiguous situation, where a person behaves critically, destructively or even self-destructively? Would an AI in charge of human relations be able to account for the social aspect in its decision process?
Curiosity about new things, imagination, and anticipation of the future are all aspects of life that AI would probably grasp poorly. The machine cannot speculate or project new ideas in many particular situations.
Humans will probably retain the ability to criticize situations, or to understand their context, better than machines. Humans can recognize historical patterns that are repeating themselves and choose to evolve differently, tending toward positive change.
AI algorithms are not yet able to distinguish good from evil, and cannot make a fundamental decision requiring an element of moral judgment.
AI will evolve exponentially until it reaches the singularity, beyond which we will no longer be able to understand it. Until then, we still have a lot to learn from it, and AI can certainly teach us a lot about ourselves.