If a site has a black-box algorithm that uses unlawful criteria (race) to limit the scope of searches, the site will not be immune under Section 230(c) and will be liable if and only if it requires users to provide that information (race) as a condition of accessing its service, in which case it acts as an "information content provider."
To my understanding, fb would just need a number of your likes or shares to guess your gender, ethnicity, age, and education level.
If you give away your data this way, would that count as providing the information used to control the information stream?
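For a sense of how little it takes, here is a minimal sketch of that kind of inference on made-up data: a random binary like matrix, a hypothetical hidden attribute that nudges the like probability of a few pages, and an off-the-shelf logistic regression. None of this is Facebook's actual model or data; the only thing the classifier gets to exploit is the planted correlation.

```python
# Hypothetical sketch: inferring a demographic attribute from page likes.
# The like matrix, attribute, and probabilities are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 300

# Hidden attribute the platform never asked for (0/1, e.g. gender).
attribute = rng.integers(0, 2, n_users)

# Baseline: every user likes any given page with probability 0.05.
like_prob = np.full((n_users, n_pages), 0.05)

# A few pages are quietly more popular with one group -- that correlation
# is all the model below ever sees.
signal_pages = rng.choice(n_pages, 40, replace=False)
like_prob[np.ix_(np.flatnonzero(attribute == 1), signal_pages)] = 0.20

likes = (rng.random((n_users, n_pages)) < like_prob).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    likes, attribute, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy predicting the attribute:", model.score(X_test, y_test))
```

Even with only a mild skew on a handful of pages, the held-out accuracy lands well above chance, which is the point: no one has to hand over the attribute itself.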
And a follow-up: this reminds me of the story of that data-mining store chain which predicted pregnancy with high accuracy. The dad of a teenage daughter was very pissed when the chain started what he perceived as a personalized mail-in coupon campaign pushing his teenage daughter "to become pregnant", when she actually already was. (The dad later apologized.) Nonetheless, the chain then concealed its knowledge, hiding the targeted baby-stuff and organic-everything coupons among just enough noise of tires, tools, and men's sth sth that it was no longer obvious to the innocent eye.
Would such tactics be sufficient to whitewash away Section 230(c) obligations?
Right, it's well known that discriminatory algorithms can evolve unintentionally. It's quite possible to accidentally create a feed that almost never shows white potential roommates to black people, and vice versa.
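A tiny, entirely made-up simulation of how that drift can happen: a ranker that only maximizes estimated click-through, fed a mild same-group click preference, ends up showing almost no cross-group profiles once it stops exploring. The groups, click rates, and round counts are invented; the point is just the feedback loop, with no group criterion coded in anywhere.

```python
# Hypothetical feedback-loop sketch: a greedy, engagement-trained ranker can
# stop showing cross-group roommate profiles even though no one coded a
# group preference into it.  All numbers here are invented.
import random

random.seed(0)
GROUPS = ["A", "B"]
# Mild, unstated user tendency: slightly higher click rate on same-group profiles.
TRUE_CTR = {"same": 0.12, "cross": 0.08}

def click(viewer_group, candidate_group):
    kind = "same" if viewer_group == candidate_group else "cross"
    return random.random() < TRUE_CTR[kind]

# The ranker only tracks [clicks, impressions] per (viewer group, candidate group).
stats = {(v, c): [1, 2] for v in GROUPS for c in GROUPS}  # weak prior

def estimated_ctr(v, c):
    clicks, imps = stats[(v, c)]
    return clicks / imps

def show(viewer_group, explore):
    if explore:
        candidate_group = random.choice(GROUPS)
    else:
        # Greedy: always show the group with the higher estimated click rate.
        candidate_group = max(GROUPS, key=lambda c: estimated_ctr(viewer_group, c))
    stats[(viewer_group, candidate_group)][1] += 1
    if click(viewer_group, candidate_group):
        stats[(viewer_group, candidate_group)][0] += 1
    return candidate_group

def cross_group_share(rounds, explore):
    cross = 0
    for _ in range(rounds):
        viewer = random.choice(GROUPS)
        shown = show(viewer, explore)
        cross += (viewer != shown)
    return cross / rounds

print("cross-group share while exploring:", cross_group_share(5000, explore=True))
print("cross-group share once greedy:    ", cross_group_share(5000, explore=False))
```

The first number sits near 0.5; the second collapses toward zero, because the greedy policy keeps reinforcing whichever pairing already looks marginally better.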
I don't think "the algorithm did it" would necessarily be a shield against liability.
And, back to my original point, I think hosting content on a person's profile page is one thing, and making the (probably automated) editorial decision to show that content to 100,000 people in their streams is another.
You need to remember only your Gmail password. For the rest, I simply maintain a single, well-ordered Google spreadsheet that stores every password for services like Facebook, Twitter, Github, etc.
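If you ever wanted to pull an entry out of that sheet from code rather than by eye, something like the sketch below would do it with the gspread library. The credential file name, the "Passwords" spreadsheet name, and the "service"/"password" column headers are all assumptions on my part, not anything you described.

```python
# Hypothetical sketch: looking up a password from the spreadsheet via gspread.
# The credential path, spreadsheet name, and column headers are assumed.
import gspread

gc = gspread.service_account(filename="credentials.json")  # assumed credential file
sheet = gc.open("Passwords").sheet1                         # assumed sheet name

def lookup(service_name):
    # Expects a header row of: service | password
    for row in sheet.get_all_records():
        if str(row.get("service", "")).lower() == service_name.lower():
            return row.get("password")
    return None

print(lookup("Github"))
```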