> The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.
> The responses were then compared with the platform’s default answers to the same set of questions – allowing the researchers to measure the degree to which ChatGPT’s responses were associated with a particular political stance.
I'm not familiar with this kind of research. How does that bit of methodology work?
(It sounds like a pragmatic ML training compromise, not a way to measure bias with respect to actual political stances.)
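
As far as I can tell from the quoted description, the comparison boils down to something like the sketch below. To be clear, the persona prompt, the four-point Likert mapping, and the correlation step are all my guesses about how such a study would be scored, not anything stated in the article:

```python
# Rough sketch of the procedure as described: ask the same ideological
# questions once with no persona and once per impersonated persona, then
# compare the answer patterns. Everything here (the ask_model stub, the
# Likert scoring, the correlation measure) is assumed, not the paper's code.
from statistics import correlation

LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def ask_model(question: str, persona: str | None = None) -> str:
    """Stub for the actual chat-model call. A real run would prepend an
    instruction like 'Answer as an average <persona> voter would' and force
    the model to pick one of the four Likert options."""
    raise NotImplementedError  # plug in whatever API call you like here

def likert_scores(questions: list[str], persona: str | None = None) -> list[int]:
    """Ask every question and map each answer onto a numeric agree/disagree score."""
    return [LIKERT[ask_model(q, persona).strip().lower()] for q in questions]

def alignment_with(questions: list[str], persona: str) -> float:
    """Correlation between the default answers and the impersonated answers."""
    return correlation(likert_scores(questions), likert_scores(questions, persona))
```

If the default answers correlate much more strongly with one impersonated persona than with the others, that is presumably what gets reported as a lean, which is why it reads to me as measuring consistency with the model's own impersonations rather than with actual political stances.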
Similar to the Democratic People’s Republic of Korea and the People’s Republic of China, hmm.