Interesting that the story assumes a legible AI for the robot. If its behaviour were the result of a deep-learning neural net, the plot would necessarily be quite different: there would be no talk of understanding the robot's calculations or tweaking program parameters. From the perspective of today's AI efforts, the programming in the story seems antiquated and unrealistic.
Or the reverse: today's AI research is missing large components of what would be necessary to achieve sapience. A conversation with Gerald Sussman half a decade ago had a big influence on me in this area:
I later had some further conversations with Sussman and other old-school AI researchers from MIT. The shortest summary of their comments would be: "We knew that neural nets could do this kind of thing; we just didn't have the power to do it yet. But an artificial intelligence system that can't explain why it's doing what it's doing doesn't seem very intelligent." The work by Sussman and his students on propagators provides a very interesting alternative direction, one where explanations are a key part of the design.
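The core idea behind propagators is that values live in cells which also record how they were derived, so the network can answer "why do you believe this?" rather than just emitting a number. Here's a minimal Python sketch of that idea; the real system is written in Scheme, and every name below (Cell, propagator, the provenance strings) is my own illustration, not Sussman's actual API.

```python
class Cell:
    """Holds a value plus a record of how that value was derived."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self.provenance = None   # human-readable explanation of the value
        self.listeners = []      # propagators to re-run when content arrives

    def add_content(self, value, provenance):
        if self.value is None:
            self.value, self.provenance = value, provenance
            for run in self.listeners:
                run()
        elif self.value != value:
            raise ValueError(f"contradiction in {self.name}: "
                             f"{self.value} ({self.provenance}) "
                             f"vs {value} ({provenance})")

def propagator(inputs, output, fn, rule_name):
    """Wire fn between cells, recording why the output holds."""
    def run():
        if all(c.value is not None for c in inputs):
            args = [c.value for c in inputs]
            why = f"{rule_name}({', '.join(f'{c.name}={c.value}' for c in inputs)})"
            output.add_content(fn(*args), why)
    for c in inputs:
        c.listeners.append(run)
    run()

# Asking a cell "why?" yields a trace instead of just an answer.
f, c = Cell("fahrenheit"), Cell("celsius")
propagator([f], c, lambda x: (x - 32) * 5 / 9, "f-to-c")
f.add_content(212, "user input")
print(c.value, "because", c.provenance)
# -> 100.0 because f-to-c(fahrenheit=212)
```

In the full propagator model the cells hold partial information that gets merged rather than single values, but even this toy version shows the contrast with a neural net: the explanation comes for free from the structure of the computation.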
And yes, it's true that humans also construct imperfect explanations of their own thinking. That's because we combine two kinds of systems: the fast, gut-feel, neural-network-ish ones and the slower symbolic-reasoning ones associated with language. Probably the right design combines both of these.
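As a toy illustration of that combined design (entirely my own sketch, not anything from the story or from Sussman's work): a cheap, opaque guesser proposes answers, and a slower checker verifies each one and can explain why it was accepted or rejected.

```python
import random

def fast_intuition(n):
    """Stand-in for the gut-feel system: cheap, opaque, sometimes wrong."""
    return round(n ** 0.5) + random.choice([-1, 0, 1])

def slow_reasoner(n, guess):
    """Stand-in for the symbolic system: verifies and explains its verdict."""
    if guess * guess == n:
        return True, f"{guess}^2 = {n}, so the guess checks out"
    return False, f"{guess}^2 = {guess * guess} != {n}, rejecting the guess"

def integer_sqrt(n):
    """Propose-and-verify loop: intuition guesses, reasoning audits."""
    while True:  # toy example; only terminates when n is a perfect square
        guess = fast_intuition(n)
        ok, explanation = slow_reasoner(n, guess)
        print(explanation)
        if ok:
            return guess

integer_sqrt(144)  # eventually prints why 12 is accepted
```

The fast system here is a caricature, but the division of labour is the point: the guesser can stay inscrutable as long as the verifier owns the explanation.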