Probably trying to say to investors, "the government will never use DeepSeek; our moat is being a US AI company trusted for government work."
Stuff like this troubles me. I am in defense tech working with LLMs and I am here to tell you the guards and fences are down at the chow truck while commoners are using LLMs.
I think it means people in the military and contractors are already using OpenAI and other tools, and nobody is able to stop them, even if it leaks secrets.
I hate to be that guy, but here's what ChatGPT says:
This expression uses metaphorical language to describe a situation where traditional barriers, hierarchies, or protections have been removed, allowing broader access or disrupting the status quo. Here's a breakdown:
1. "Guards and fences are down" - This implies that the usual controls, restrictions, or gatekeepers are no longer in place.
2. "At the chow truck" - The chow truck symbolizes something previously exclusive or regulated, like access to resources, opportunities, or knowledge.
3. "While commoners are using LLMs" - Refers to everyday people (as opposed to elites or specialists) now having access to advanced technology, specifically Large Language Models (LLMs), like AI tools for generating text.
Together, the expression likely means that AI-powered tools have democratized access to knowledge and creativity, breaking down barriers that once limited these capabilities to experts, institutions, or privileged individuals. It highlights a significant shift where advanced tools are now accessible to "commoners," disrupting traditional power dynamics.
I read this and I'm still uncertain what it means. I believe ChatGPT also failed to understand it; its reading differs from my interpretation: GGP is worried that while the public only has access to "aligned" models, military deployments don't have those guardrails.
Basically he's worried about gen pop having access to LLMs, thinking only governments should have access to the technology for some strange reason. Probably because he works in defense contracting, so he benefits directly from holding that stance.
The real revolution will come (hopefully) when voters start using LLMs to figure out which of their representatives actually vote for policies that improve their life.
I don't really understand why people bash so hard on LLMs. In some cases it is a spectacularly bad tool, I get that. But it is only a tool.
Imagine if you have a judge giving out sentences based on astrology books. I don't think anyone would argue the problem would be resolved by banning astrology books from our libraries.
Reminds me of the quote that science progresses one funeral at a time, since pushback against new ideas often comes from older scientists regardless of the empirical merits of the idea.
I believe the main issue with LLMs is that there is a real chance that their prevalence and ease of use will erode critical thinking skills. Regardless of boilerplate warnings from the LLM to "check the validity of answers", plenty of people in society outside of this tech-savvy audience wouldn't even know where to begin. There was a recent Big Think article on this: https://bigthink.com/thinking/artificial-intelligence-critic....
To be fair, I do think there are plenty of uses for LLMs, but with adoption skyrocketing there really are no guardrails against misuse.
I don't really believe that LLMs will make us dumber. They only change where we decide to put our attention. It's the same thing that happened with Google: it changed our relationship with information. Even though it has several shortcomings, the ability to just look things up instead of having to hold everything in our heads was a net positive for society.
And I suspect the same will happen with LLMs; in the end we will just start thinking at "another level of abstraction". We are still in the early days and have a lot to learn about how to properly use this new tool, but I think LLMs are a positive change for society.
Sure, that judge is a problem, but I think your metaphor is a bit malformed.
In your example you should probably drop the judge, but you should also make a rule saying astrology books aren't a legitimate source of sentencing guidance. That's what people are annoyed about re: LLMs. People keep insisting they are a legitimate source in different situations.
You wouldn't ban them overall, but you do want some kind of society-level taboo against relying on them. You can't just deal with it on the level of people who get fooled into using them.
>you should also make a rule saying astrology books aren't a legitimate source of sentencing guidance
My example was absurd on purpose, I didn't want to bring an example where people could respond with "well, actually..."
But in the real world that is rarely the case. Imagine substituting "Bible/Quran/..." for "astrology book": would that be considered a legitimate source of sentencing guidance? I'm sure people would spend years arguing about that...
As a society we need to understand that an LLM hallucination is no different from a bloom filter giving you a false positive.
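To make the analogy concrete, here's a minimal Python sketch of a Bloom filter (deliberately undersized so collisions are easy to see; all names here are illustrative). Lookups can come back confidently "yes" for items that were never added, which is exactly the false-positive behavior being compared to hallucination:

    import hashlib

    class BloomFilter:
        # Lookups may return false positives, but never false negatives.
        def __init__(self, size=12, num_hashes=2):
            self.size = size
            self.num_hashes = num_hashes
            self.bits = [False] * size

        def _indexes(self, item):
            # Derive num_hashes bit positions by hashing the item with a salt.
            for salt in range(self.num_hashes):
                digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for i in self._indexes(item):
                self.bits[i] = True

        def might_contain(self, item):
            # "True" may be wrong; "False" is always right.
            return all(self.bits[i] for i in self._indexes(item))

    bf = BloomFilter()
    for word in ["alpha", "bravo", "charlie", "delta"]:
        bf.add(word)

    never_added = ["echo", "foxtrot", "golf", "hotel", "india", "juliet",
                   "kilo", "lima", "mike", "november", "oscar", "papa"]
    # With a filter this small, some of these will very likely test positive:
    print([w for w in never_added if bf.might_contain(w)])

The filter isn't broken when it says yes to a word it never saw; false positives are a designed-in trade-off of the data structure, and the fix is sizing it and using it appropriately.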
That's all well and good, except that we look out and see all of the actively bad uses being hyped as the way of the future, at untold expense in both dollars and energy. The LLM is just a model that is what it is; bashing it doesn't make sense. People are bashing how it is used, both currently and in prospect.
I expect as much. Having gone through FedRAMP before, the alternative would be to bring their regular commercial infrastructure up to FedRAMP standards, which is incredibly onerous (things like change tracking, DR, US-citizen-only access to production systems, etc.).
The thing about FedRAMP is that it's not one thing; it's a framework. Even if you have a FedRAMP package for a system, you need a fresh ATO (Authority to Operate) to use it at any customer.
Technically, there is no citizenship requirement for FedRAMP.
In reality, at every US Government customer I'm aware of, US Persons (Citizen or PermRes) are the only ones that can deploy code to production. You can have non-USP writing code but it must be reviewed and rolled out by USP.
I wonder if it's the exact same model the public has access to, or if it comes pre-jailbroken so it's more willing to discuss certain angles on certain topics that the government may take.
I really doubt that it is literally pre-jailbroken. More likely, OpenAI will configure a custom system prompt for a large customer's usage scenario, as it would do anyway.
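For what that might look like mechanically, here is a sketch using the OpenAI Python client; the system prompt and model name are my own placeholders, since the actual deployment configuration isn't public:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical customer-specific instructions; purely illustrative.
    SYSTEM_PROMPT = (
        "You are an assistant deployed for a government agency. "
        "Follow the agency's data-handling policy and flag uncertain answers."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the government offering's model isn't specified
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Summarize these project requirements."},
        ],
    )
    print(response.choices[0].message.content)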
I checked the examples they listed under "How government agencies use ChatGPT today". With the exception of the [very nebulous] research use case, it appears that all of these can be done using open-source models hosted locally (a minimal local-hosting sketch follows the list):
1. basic coding and "supporting AI education efforts" (really?)
2. more accurate translation services for multilingual communities
3. analyzing project requirements
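To illustrate the "hosted locally" alternative, here's a minimal sketch with the Hugging Face transformers library; the model name is just one example of an openly licensed model, my assumption rather than anything from the article:

    # pip install transformers accelerate
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
        device_map="auto",  # requires the accelerate package
    )

    # e.g. the translation use case (2) from the list above
    prompt = "Translate to Spanish: 'Office hours are 9am to 5pm, Monday through Friday.'"
    out = generator(prompt, max_new_tokens=80, do_sample=False)
    print(out[0]["generated_text"])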
This makes a lot of sense. It's just following the model of cloud services having separate government-focused infrastructure for government use, with the appropriate certifications and regulatory requirements met. I don't understand the comments calling this desperate; it's what we want American providers to do, to bring the power of AI to government agencies.
The US government likes its employees and contractors to have their own separate version of cloud products. For instance, you would not want OpenAI using prompts about controlled unclassified information (CUI) as training data. Much of what government contractors handle is CUI: you don't want it spread around, but it's not actually secret. The US goes out of its way to minimize how much CUI is exposed, and on top of that you have export controls like ITAR, and then of course classified information.
Secure AWS instances are one thing, having the government use unreliable LLMs for their work is another. I do not want the government using LLMs to make decisions.
I think Elon Musk will try very hard to keep OpenAI from getting any government contracts, and being in charge of government technology via DOGE, he has a good chance of succeeding.