But what are you doing about your bubble? The bubble that colors your view of what makes good career advice? That's what the parent was talking about, and you talked straight past him. That doesn't bode well for your bot's performance!
I don't think this is the right format, but I do think computer-assisted political strategy is an interesting prospect. The problem with trusting other humans for political advice is that you can't know how much their own political ambitions play into it. A bot, though, is really just once-removed from the people who made it, so the issue of human meddling is not entirely out of the picture. It would need to be very transparent, explicit, and clear to be trustworthy, and preferably highly tunable.
How are you ensuring that your bot is providing a valuable, localized analysis for the person asking for advice? If Boost tells me to do something that would get me promoted in California but fired in Boston, who's liable? What kind of cultural accounting is performed? Can I adjust it based on the political/cultural leanings and backgrounds of my subordinates, peers, and superiors?
Also, politics depends heavily on unpredictability. You almost need a reverse bot, telling you the worst solution, so that your political enemies don't accurately predict your moves and set traps. Does Boost have a "surprise" mode?
So first of all, this isn't a bot. I don't think a bot would be able to help with complex political situations very well.
As our customer, you're our sole focus. Your needs, desires, goals, etc. We succeed when you succeed, so I don't think it's valid logic to have to worry about what Boost is looking to get out of the advice.
>So first of all, this isn't a bot. I don't think a bot would be able to help with complex political situations very well.
Phew! That's actually a big relief. For the record, I wasn't just going off the commenter above who called this a "chatbot"; I also skimmed the site and saw a bunch of stuff about "advanced AI and Machine Learning", an IM interface that looks like the other new-wavey AI talk stuff, etc., so it all seemed to comport. You should definitely make it abundantly clear on the landing page that this is subscription access to real, human career counselors.
Now I have all the same questions about the backgrounds of these career counselors. :P What kind of vetting or training is done?
Since these counselors are humans, not computers, I think the situation is even more complex: how can I ensure that one of them is not in league with a political enemy? If I spill my guts to this guy, what stops him from contacting my much richer, more powerful, more attractive boss and saying "Hey, Jim Bob just developed a crazy scheme to take you down, I'll give you the details for $DOLLARS_AMOUNT"? Just the belief that Boost is full of nice people who wouldn't want to do that?
Lawyers have to check that their firm doesn't also represent the opposing party before they can take a case, because of the glaring conflict of interest. How can Boost ensure that political opponent A and I are not artificially manipulated by Boost internal staff, sent into a feedback loop primarily designed to keep both of us subscribed to Boost as long as possible, instead of moving us up the power ladder to the point where we don't need it anymore? How do I ensure that my boss and I don't end up being advised by the same career counselor, who would know both of our moves in advance, and who therefore couldn't possibly perform his functions effectively for either of us?
While I think software-assisted politics is a much more interesting business model (not buzzwordy ML/AI BS, but a "political planner" or something to help evaluate and plan), I don't necessarily think that subscription access to human counselors is a bad idea. I just think that a lot goes into it, and there are a lot of potential ramifications.
What's the typical use case/scenario for this? It seems like for anything people would actually need advice on, it's risky to trust Boost. Everything else would be simplistic advice that everyone already knows but just doesn't want to follow, like "Brush your hair better". Maybe they have specific grooming tips and point to a good hair gel?
> We succeed when you succeed, so I don't think it's valid
> logic to have to worry about what Boost is looking to get
> out of the advice.
That's the only political advice you need, right there. Everyone is interested in your success, so stop worrying. /sarcasm
It's funny to see a business that specifically targets people insecure about their understanding of complex political situations, and then tells them that worrying about whether it might act against their interests is illogical.
Why get advice when negotiating salary? My employer succeeds when I succeed, so concerning myself with whether they are shortchanging me must be illogical, right?