Introducing ChatGPT Gov (openai.com)
56 points by keepit 11 months ago | 71 comments


Sounds like a desperate move, now that the insane valuations, GPU capacity projections, and subscription revenue are at risk.


Probably trying to say to investors, "the government will never use DeepSeek, our moat is being a trusted US AI company who is trusted for government work"

Whether it sticks or not is another matter.


No reason not to trust DeepSeek, once you run it locally.
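
Something like this never sends a byte off the box (a rough sketch, assuming an Ollama-style local server with a DeepSeek model already pulled; the model tag is illustrative):

    # Rough sketch: prompt a locally hosted DeepSeek model through an
    # Ollama-style HTTP API on localhost; no data leaves the machine.
    # Assumes something like `ollama pull deepseek-r1` was run first.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-r1", "prompt": "Hello", "stream": False},
        timeout=120,
    )
    print(resp.json()["response"])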

You can't use it for DoD purposes though...


Seeing "trusted" twice makes me suspicious.


DeepSeek would need FedRAMP certification to be deployed in most cases.


It's worked for Palantir.


For some short term definition of “worked”


This is also why Altman was lobbying for AI regulation: to build a moat out of barriers to entry.


I am surprised Elon hasn't publicly said anything about this, with all the recent events.


I doubt this was done in 1 week.

Obviously this product was going to be released eventually. It takes a lot of design to build stuff for government, a lot of bureaucracy, etc.


How do you know that they have actually built a product? They probably just wrote the blog post for now.


True, but it certainly lands differently today than it would have a week ago.


To be honest, no. This is the natural evolution of a project that needs a huge amount of data and so on.

To me, "Project Stargate" is more suspicious than this one feature.


Stuff like this troubles me. I am in defense tech working with LLMs and I am here to tell you the guards and fences are down at the chow truck while commoners are using LLMs.


> I am here to tell you the guards and fences are down at the chow truck while commoners are using LLMs.

I've read this over and over and have no idea what this means.


I think it means people in the military and contractors are already using OpenAI and other tools, and nobody is able to stop them, even if it leaks secrets.


I hate to be that guy, but here's what ChatGPT says:

This expression uses metaphorical language to describe a situation where traditional barriers, hierarchies, or protections have been removed, allowing broader access or disrupting the status quo. Here's a breakdown:

1. "Guards and fences are down" - This implies that the usual controls, restrictions, or gatekeepers are no longer in place.

2. "At the chow truck" - The chow truck symbolizes something previously exclusive or regulated, like access to resources, opportunities, or knowledge.

3. "While commoners are using LLMs" - Refers to everyday people (as opposed to elites or specialists) now having access to advanced technology, specifically Large Language Models (LLMs), like AI tools for generating text.

Together, the expression likely means that AI-powered tools have democratized access to knowledge and creativity, breaking down barriers that once limited these capabilities to experts, institutions, or privileged individuals. It highlights a significant shift where advanced tools are now accessible to "commoners," disrupting traditional power dynamics.


I read this and I'm still uncertain what it means. I believe ChatGPT also failed to understand it, since it differs from my interpretation: GGP is worried that while the public gets "aligned" models, military deployments don't have the guardrails.


This is some much needed comic relief.

It’s as if buddy just came into the thread and casually dropped a rhetorical puzzle that even AI and the bright minds on HN can’t figure out.

Guess he’s down at the chow truck now.


We're all down at the chow truck now, metaphorically speaking.


Basically he’s worried about gen pop having access to LLMs, thinking only govts should have access to the technology for some strange reason. Probably because he works in defence contracting, so he benefits directly from that stance.


You can choose to not be that guy.


The real revolution will come (hopefully) when voters start using LLMs to figure out which of their representatives actually vote for policies that improve their life.


Is that not happening yet? It would be the equivalent of people searching for that kind of info on Google.


I don't really understand why people bash so hard on LLMs. In some cases it is a spectacularly bad tool, I get that. But it is only a tool.

Imagine if you have a judge giving out sentences based on astrology books. I don't think anyone would argue the problem would be resolved by banning astrology books from our libraries.


From Douglas Adams (of "Hitchhiker's Guide to the Galaxy" fame)

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you’re thirty-five is against the natural order of things.


Reminds me of the quote that science progresses one funeral at a time, since pushback against new ideas often comes from older scientists regardless of the idea's empirical merits.


I believe the main issue with LLMs is that there is a real chance their prevalence and ease of use will erode critical thinking skills. Regardless of boilerplate warnings to "check the validity of answers" coming from the LLM, plenty of people in society outside of this tech-savvy audience wouldn't even know where to begin. There was a recent Big Think article on this: https://bigthink.com/thinking/artificial-intelligence-critic....

To be fair, I do think there are plenty of uses for LLMs, but with adoption skyrocketing there really are no guardrails against misuse.


I don't really believe that LLMs will make us dumber. They only change where we decide to put our attention. The same thing happened with Google: it changed our relationship with information. Even though it has several shortcomings, the ability to just look things up instead of having to hold everything in our heads was a net positive for society.

And I suspect the same will happen with LLMs; in the end we will just start thinking at "another level of abstraction". We are still in the early days and still have a lot to learn about how to properly use this new tool, but I think LLMs are a positive change for society.


Sure, that judge is a problem, but I think your metaphor is a bit malformed.

In your example you should probably drop the judge, but you should also make a rule saying astrology books aren't a legitimate source of sentencing guidance. That's what people are annoyed about re: LLMs. People keep insisting they are a legit source in different situations.

You wouldn't ban them overall, but you do want some kind of society-level taboo against relying on them. You can't just deal with it on the level of people who get fooled into using them.


> you should also make a rule saying astrology books aren't a legitimate source of sentencing guidance

My example was absurd on purpose; I didn't want to pick an example where people could respond with "well, actually..."

But in the real world that is rarely the case. Imagine substituting "astrology book" with "Bible/Quran/..." Would that be considered a legitimate source of sentencing guidance? I'm sure people would spend years arguing about that...

As a society we need to understand that an LLM hallucination is no different from a Bloom filter giving you a false positive.
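
To make the analogy concrete, here's a toy sketch (illustrative Python only, deliberately undersized so false positives actually show up):

    import hashlib

    class BloomFilter:
        # A membership test that can answer "probably yes" for items it
        # never saw (false positive), but never "no" for items it did see.
        def __init__(self, m=16, k=3):
            self.m, self.k, self.bits = m, k, 0

        def _positions(self, item):
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            return all(self.bits & (1 << pos) for pos in self._positions(item))

    bf = BloomFilter()
    for word in ["alpha", "beta", "gamma", "delta"]:
        bf.add(word)

    # With a filter this small, a few words we never added will typically
    # still test positive: confidently wrong, by design.
    unseen = [f"word{i}" for i in range(20)]
    print([w for w in unseen if bf.might_contain(w)])

The fix isn't banning Bloom filters; it's knowing when a probabilistic answer is acceptable.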


Then I guess I don't understand the point you are trying to make. Are you saying that the problem is unsolvable and people should just accept that?


My point is that an LLM is a tool, and we should never attribute any sort of blame to our tools. The blame should always lie with the humans.


It’s fine to blame a car’s heater when the design is so poor it can’t keep the cabin warm when it’s only 0C outside.

Thus being a spectacularly bad tool is itself a cause to assign blame.


That's all well and good except that we look out and see all of the actively bad uses being hyped as the way of the future, at untold expense in both dollars and energy. The LLM is just a model that is what it is, bashing it doesn't make sense. People are bashing how it is used, both currently and in prospect.


First paragraph, okay. Second paragraph, the wheels come off in spectacular fashion (unless you're from Florida).

Banning books does not ban knowledge. It just makes it a bit more inconvenient. Banning drugs has not stopped people from getting access.


.... WAT.

Just kick out the judge who clearly... lacks judgment.


congratulations on missing the point of the hypothetical example


Wait till defense contractors start using DeepSeek with Aider..


That would get the contractor fired on the spot along with whatever network engineer allowed it to happen.


Why would gov agencies rely on 3rd-party services when they could deploy open-source LLMs locally and privately?


Lol! Since when has the US government not jumped at the opportunity to pay a company obscene amounts of money?


For the same reason that businesses offload it to OpenAI and the rest: it's not a "core competency".

That, and imagine trying to explain the funding request for that to the residents of the crypt... I mean, the current US Congress.


This would have to be installed on-prem by each agency using it to be FedRAMP compliant (which it would have to be).


They already offload managing all of their documents to Office 365 cloud, so no additional harm done.


Because that requires a capital cost, which tends to require special approval and different types of accounting that are more difficult to get past.


Because they exist to waste taxpayer money.


Seems like an OpenAI employee downvoted it.


Would they have shielded instances, separate from the instances processing data for Plus/Pro/Enterprise customers?


I expect as much. Having gone through FedRAMP before, the alternative would be to bring their regular commercial infrastructure up to FedRAMP standards, which is incredibly onerous (things like change tracking, DR, US Citizen access only to production systems, etc.)


> US Citizen access only to production systems,

What if the employees building the thing are not US citizens?


The thing about FedRAMP is that it's not one thing, it's a framework. Even if you have a FedRAMP package for a system, you need a fresh ATO to use it at any customer.

Technically, there is no citizenship requirement for FedRAMP.

In reality, at every US Government customer I'm aware of, US Persons (Citizen or PermRes) are the only ones who can deploy code to production. You can have non-USPs writing code, but it must be reviewed and rolled out by a USP.


now now, no one wants to call out the unwillingness to pay fair labor cost of the sausage machine :^)


It says that it’s meant to be self-hosted.


I wonder if it's the exact same model as the public has access to, or if it comes pre-jailbroken so it's more willing to discuss certain angles on certain topics that the government may take.


I really doubt it's literally pre-jailbroken. More likely, OpenAI will just configure a custom prompt suited to a large customer's usage, as they would anyway.


I checked the examples they listed under "How government agencies use ChatGPT today". With the exception of the [very nebulous] research use case, it appears that all of these can be done using open-source models hosted locally (rough sketch after the list):

1. basic coding and "supporting AI education efforts" (really?)

2. more accurate translation services for multilingual communities

3. analyzing project requirements
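
Take the translation use case. A rough sketch of doing it locally with an open-weights model (Hugging Face transformers and a Marian model are my own illustrative choices here, not anything the agencies named):

    # Rough sketch: translation running entirely on local hardware, using
    # an open-weights Marian model via Hugging Face transformers.
    # The model name is an illustrative choice, not an endorsement.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
    result = translator("Your benefits application has been received.")
    print(result[0]["translation_text"])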

I have a lot of questions.


This makes a lot of sense. It’s just following the model of how cloud services have a separate government focused infrastructure for government use, with appropriate certifications and regulatory requirements met. I don’t understand the comments calling this desperate - it’s what we want American providers to do, to bring the power of AI to government agencies.


This should be unsurprising- see also govcloud: https://aws.amazon.com/govcloud-us

The US government likes its employees and contractors to have their own separate version of cloud products. For instance, you would not want OpenAI using prompts about controlled unclassified information as training data. Most of what government contractors do is CUI: you don't want it spread around, but it's not actually secret. The US goes out of its way to minimize how much CUI is exposed, and on top of that you have export controls like ITAR, and then of course classified information.


Secure AWS instances are one thing, having the government use unreliable LLMs for their work is another. I do not want the government using LLMs to make decisions.


I think Elon Musk will try very hard to keep OpenAI from getting any government contracts, and since he's in charge of government technology via DOGE, he has a good chance of succeeding.


Oh wow, look, more shit the general public could not care less about.

This sounds like a company in free fall, which isn't terribly surprising, given who is at the helm.



Hooooboy. Amazon and OpenAI aren’t even playing the same game. This is comparing apples and carbon nanotubes.


Kinda. AWS could subsist, but Amazon itself has unironically become the shady red light-district bazaar we were all afraid of.


Meanwhile… DeepSeek


Wow, that escalated quickly...


Horrible idea


Your application for XYZ was rejected because the LLM was instructed to. Please do not try again later, because the outcome might be different.


Sounds exactly the same as the current situation.


I guess that now the secret sauce turns out to be not so special after all, and valuations are tanking, it's time for the next phase in the plan:

"lmao we were actually secretly a defence job machine all this time"

(or alternatively, "give us money because we are the next thing in defence and boooooo china")



