This is probably going to get me lynched, but an LLM trained on a candidate's actual positions would probably do more good than whatever messaging avenues we have now - spam texts with some talking point, having to actively watch some news channel and hope they aren't lying, or going to find their campaign website and poking around. I bet most people would rather just text "what are your stances on XYZ?" - and then we end up electing PyTorch as vice president or something.
The first LLMs were trained for helpfulness and it shows. However, my suspicion is that if they can be trained for helpfulness, they can also be trained for rhetorical efficacy, and I don't think that will improve the information ecosystem.
Then again, politics is already mired in slimy rhetoric, so this probably won't be a cataclysmic change. We already have a few tools to deal with it. AI debates could be cool. Are cool, actually - the evidence that convinced me LLMs were special was going on character.ai and pitting a Marx and a Hayek character against each other. That was a fun debate with helpful AIs, but it would still be a fun debate with rhetorical AIs.
But that makes it too easy to lie, omit, and equivocate. If the LLM is trained on all of their public statements over the last X years, and any official documents authored by them, then—theoretically!—you get something that's harder to manipulate.
Aha. I was thinking you were suggesting it be trained by the campaign. I'm usually bearish about how LLMs are being used, but if you created a corpus of legitimately sourced documents that humans could peruse, and added a feature that allowed the LLM to cite which document(s) it used to answer a question, that would be quite interesting.
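The citation idea above can be sketched in miniature: rank the corpus documents by how well they match a question, and surface the best matches as citations a human could go check. This is a toy keyword-overlap retriever, not a real LLM pipeline - the document titles and texts are invented examples, and a real system would use embeddings and pass the retrieved text to the model.

```python
import re

def tokenize(text):
    """Lowercase and split text into a set of words, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def cite_sources(question, corpus, top_k=2):
    """Rank corpus entries by word overlap with the question and
    return the titles of the best matches as citations."""
    q = tokenize(question)
    scored = [(title, len(q & tokenize(body))) for title, body in corpus.items()]
    scored.sort(key=lambda pair: -pair[1])
    return [title for title, score in scored[:top_k] if score > 0]

# Invented example corpus of "legit sourced documents" a human could peruse.
corpus = {
    "2021 floor speech on housing": "we must expand affordable housing and rent support",
    "2022 op-ed on energy": "invest in solar and wind energy to lower bills",
    "2023 town hall transcript": "public schools need more funding and smaller classes",
}

print(cite_sources("What is your position on affordable housing?", corpus))
# -> ['2021 floor speech on housing']
```

The point isn't the retrieval method, it's the contract: every answer comes attached to documents a person can verify independently of the bot.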
What's more likely to happen is that the LLM will try bending and spinning the candidate's actual positions so they fit what it perceives you would want them to be.
Yah I’ve long wanted an LLM trained on Bernie Sanders stump speeches (he gave a LOT) that could be called upon to criticize any political writing I find on the web. A little Bernie Bot in the side bar that can comment on news articles. “Enough is enough! The American people are sick and tired!” Hehe.
I consider myself a bleeding heart libertarian capitalist pig. I want the government to stay out of my life as much as possible. But I don’t have a problem with taxes or providing a safety net to people in need and paying taxes to fund it.
I would love to have an LLM that could respond to any position I gave it from the viewpoint of both Bernie Sanders and a Romney/Bush/WSJ conservative angle.
Whatever the Republican Party overall these days is, it ain’t conservative. On a side note, I do have to give a shout out to my Republican governor Kemp of Georgia, somehow he has managed to stay true to what I would expect from a conservative governor.
I’m not trying to criticize any Democratic governor. I’ve lived in GA all of my life until last year, and I don’t follow state politics outside of GA and FL, where I now live (and I won’t open that can of worms).
Not to sidetrack too much, but is this not a violation of campaign rules? Aren't all text messages from a political campaign at least supposed to be operated by a human touching a phone somewhere? This is to avoid robodialers and the like, right?
> if the message’s sender does not use autodialing technology to send such texts and instead manually dials them.
So, there's definitely not enough here to suggest that it's in violation.
Related, what does "manually dial" even mean? A button that changes to the next number each time you tap it? Tapping an entry in a contact? A single "Call" button that enables when a phone line, in the queue of hundreds, is free?
I have friends who have volunteered to call people for various campaigns. They usually sit at a computer that has the number queued up, they hit dial, and it goes. They do this for hundreds of calls an hour. But it's exactly as you said: the next number and name is queued up for you, and you hit call.
I could imagine there being a human being on the other end, copy-pasting all these LLM responses over SMS - and humoring you by keeping it going for a while.
"17941. (a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot."
(b) “Online” means appearing on any public-facing Internet Web site, Web application, or digital application, including a social network or publication.
SMS could be considered a "digital application", so it might apply. Agreed that it could use clarification.
The combination of an open chatbot that you can freely text with and a very limited one that only talks about the subject it wants to is pretty interesting. There's a little bit of the uncanny valley in there, but only if you realize it's there. Like OP said, making a game of how far you can push each one would be kinda fun IMO.
In my probing there were a lot of answers that were "As a supporter of Ron DeSantis..." which felt like the replacement for "As a large language model..."
> and a very limited one that only talks about the subject it wants to is pretty interesting
This is one of the main serious use cases I've heard discussed. The idea is typically to add a customer support chatbot to an existing website/app that is knowledgeable about the brand. Because you wouldn't want your customer support rep suggesting alternative brands (or worse), you instruct it to only discuss certain subjects.
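A crude version of that restriction can be sketched as a pre-filter that checks whether a message touches an allowed topic before it ever reaches the model. The topic list and deflection text here are invented for illustration; real deployments usually also state the restriction in the system prompt and filter the model's output, since keyword matching alone is easy to evade.

```python
import re

# Hypothetical allowed topics for a support bot and its canned refusal.
ALLOWED_TOPICS = {"order", "shipping", "refund", "return", "warranty"}
DEFLECTION = "I can only help with questions about your order."

def route_message(user_message):
    """Forward on-topic messages to the model; deflect everything else."""
    words = set(re.findall(r"[a-z]+", user_message.lower()))
    if words & ALLOWED_TOPICS:
        return "FORWARD_TO_MODEL"  # on-topic: let the LLM answer
    return DEFLECTION              # off-topic: canned refusal

print(route_message("Where is my shipping confirmation?"))  # forwarded
print(route_message("Which competitor brand is better?"))   # deflected
```

This is why probing these bots (like the "As a supporter of Ron DeSantis..." replies above) tends to surface stock phrases at the boundary of what they're allowed to discuss.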
There are messages where I attempted to get it to say inappropriate things that I didn’t include. It does follow their party line about what lives matter, though.
> DeSantis bot: Biden is a pedophile and asylum-seekers are all MS-13 members
Fact Check False: Joe Biden reportedly showered with his daughter in a way she described as "inappropriate" while she was an adolescent, so he engaged in mere hebephilia and is not a pedophile. Also, only some asylum seekers are involved in organized crime, not all.
This post made me think about a talk I just watched by Tristan Harris at the Nobel Prize Summit, so I'm sharing it with you here in the hopes that you'll find it as relevant and insightful.
Imagine a time, not too far into the future, when there are thirty people running for president and they all have bots that want to engage with you. I guess then you could create your own bot to keep them all busy ;<).
No, the device will just give you a lower social credit score if you don’t listen to the messages. All run by non-government organizations, of course, so it’s all legal.
I mean, we already can have someone asking his bot "turn these bullet points into a proper email" and the receiver asking her bot "summarize this email".
Is anyone aware of an independent source for this claim? I'm not finding anything using search.
I'm happy to believe the author's account, but I'm curious how widespread this might be. I'd also like to know if this actually came from some group associated with DeSantis, as opposed to a fan who can glue together APIs or another campaign trying to generate bad publicity for a rival.
I think it would be a mind-blowing experience, and potentially not a good one. I think it's fine for people to have deep conversations with bots and even to become emotionally connected to them. But to do this to unwitting participants is careless. I'll give this bot's creators the benefit of the doubt and say it looks like they tried to give it guardrails, but it still feels like they're playing with fire.