Yes, I'd readily pay for GPT-4 access, though not the limited 25-requests-per-3-hours version. I ponied up $20 for a month of usage to check it out, and it performs head and shoulders above 3.5 in its ability to comprehensively address more complex prompts, with more nuanced output than ChatGPT's.
I'll also point out that paid API access to 3.5 (davinci-03) is frequently better than ChatGPT's use of 3.5. You get far fewer restrictions, and none of the "aw shucks, I'm just a little ol' LLM and so I couldn't possibly answer that".
If you're frustrated by having to go to great lengths to prompt-engineer and ask ChatGPT to "pretend", then it's worth paying for API access. I'm just frustrated that I can't use the GPT-4 API the same way yet (waitlist)
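For anyone wondering what "just use the API" looks like in practice, here's a minimal sketch with the pre-1.0 `openai` Python SDK. I'm assuming `text-davinci-003` as the model (the "davinci-03" naming gets corrected further down the thread); the actual network call is left in a comment so the snippet runs without a key.

```python
# Minimal completion-request sketch, assuming the pre-1.0 openai
# Python SDK (pip install "openai<1.0"). The real call is shown in a
# comment so this file runs without an API key.

def build_completion_request(prompt: str) -> dict:
    """Request parameters for a plain completion call -- no system
    message, no chat persona, just raw text completion."""
    return {
        "model": "text-davinci-003",  # assumption: the instruct-tuned 3.5-era model
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7,
    }

params = build_completion_request("Explain TCP slow start in two sentences.")
# With OPENAI_API_KEY set, the call would be:
#   import openai, os
#   openai.api_key = os.environ["OPENAI_API_KEY"]
#   resp = openai.Completion.create(**params)
#   print(resp["choices"][0]["text"].strip())
print(params["model"])
```

The point is there's no chat wrapper in the loop: you pick the model, the prompt, and the sampling settings yourself.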
I hear ya! I'm out here dying on the GPT-4 API waitlist too. I use gpt-3.5-turbo's API extensively, and occasionally copy my prompts into GPT-4's web UI and watch as it just flawlessly does all the things 3.5 struggles with. Very frustrating since I don't have GPT-4 API access, but also very, very impressive. It's not even remotely close.
I pay the $20 for ChatGPT Plus (aka, GPT-4 web interface access); personally I find it useful enough to be worth paying for, even in its
rate-limited state. It already replaces Google for anything complex for me. I wish I could pay for the API too, and use it in my projects.
The last time I tried to find a plumber in my local area through Google, I realized that the first three pages of results contained zero actual results. It was a mix of ads and SEO spam from scammers. I ended up going to DDG, and while there was plenty of SEO spam there too, I found several good results on the first two pages.
Google has the technology and talent to relaunch themselves in a leadership position, but the current executive team doesn’t seem to have what it takes. They’re custodians / accountants, running the company a bit like Microsoft in the Ballmer era. What google needs to do now is leap ahead, and I don’t see it happening without a leadership change.
Amen. I have basically stopped using Google at this point because, when I do, the results are all garbage. I ask GPT-4 the same questions and get reliable, mostly accurate answers. You do have to be cautious of lies/hallucinations, but realistically most of Google's results nowadays are sales pages masked as helpful articles that are mostly full of crap anyway.
I was dying on the GPT-4 API waitlist too. I built a proof-of-concept with GPT-3.5, got some ada embeddings, played around with some common patterns for a couple of weeks, spent less than $20. I then applied to the waitlist again with a few short sentences about what I'd done, how GPT-4 would make it better, and how it would enable something new and valuable for a particular market. Approved that day.
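To make the "ada embeddings" part concrete: the usual proof-of-concept pattern is to embed your documents once, then rank them against a query by cosine similarity. The similarity math below is the real thing; the vectors are stand-ins, since in practice they'd come from the `text-embedding-ada-002` endpoint.

```python
# Cosine-similarity ranking over embeddings, as in a typical retrieval
# proof-of-concept. Vectors here are toy stand-ins; in practice they
# would come from OpenAI's text-embedding-ada-002 endpoint.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_documents(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query, best first."""
    scored = [(cosine_similarity(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    return [i for _, i in sorted(scored, reverse=True)]

# Toy example: doc 1 points the same way as the query, doc 0 is orthogonal.
docs = [[0.0, 1.0], [1.0, 0.0]]
print(rank_documents([1.0, 0.0], docs))  # doc 1 ranks first
```

That plus a completion call over the top-ranked documents is most of a retrieval demo, which is roughly the "couple of weeks, under $20" scale described above.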
It's not exactly a shortcut, and maybe it was just luck, but I suspect the key is just to start building with what you have and show a trajectory. The best part is that coding with ChatGPT-4 as a "colleague" has made the whole thing super fun.
Thank you for the offer, but I'm extremely conscious of avoiding a direct link from my comments here to who I am.
Maybe it's a bit too paranoid, I don't know, but I've also been open here about my workplace experiences, in a way that my HR dept among others might not quite appreciate if someone who knew my IRL connection to this account decided to comb through my comments. Maybe I should set up a separate HN account connected to me professionally for that sort of thing.
Also, my use case for GPT-4 is data analysis. Using the paid "plus" version shows a lot of promise for quickly bootstrapping data exploration and consumption as a jumping-off point for more detailed digging. Via the chat interface it can ingest very small aggregate datasets and spit out observations that only my boss and I have the domain expertise to produce in my organization. But the chat interface is highly limited and often truncates even small (faked but plausible) datasets, so I really want API access, because the real work involves sensitive info I couldn't put into the chat site or responsibly share with someone outside my org.
But really, thanks for the offer. What are you working on with it?
If you're technical just get yourself OpenAI API access which is super cheap and hook it up to your own self-hosted ChatGPT clone like https://github.com/magma-labs/magma-chat
The wait for GPT-4 is not as long as it used to be, and when you're using the API directly there's no censorship.
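The core of any self-hosted ChatGPT clone is just maintaining a message history and posting it to the chat-completions endpoint. A sketch of that building block, assuming the pre-1.0 `openai` Python SDK (the network call is shown in a comment); the key difference from the hosted UI is that you own the system prompt:

```python
# Minimal building block for a self-hosted chat front end, assuming
# the pre-1.0 openai Python SDK. Unlike the hosted ChatGPT UI, the
# caller controls the system prompt. The network call is commented out.

def make_chat_payload(history, user_msg, system="You are a helpful assistant."):
    """Append the user turn to the running history and build the request."""
    messages = [{"role": "system", "content": system}] + list(history)
    messages.append({"role": "user", "content": user_msg})
    return {"model": "gpt-3.5-turbo", "messages": messages}

history = []
payload = make_chat_payload(history, "What does HTTP 418 mean?")
# With OPENAI_API_KEY set:
#   import openai, os
#   openai.api_key = os.environ["OPENAI_API_KEY"]
#   resp = openai.ChatCompletion.create(**payload)
#   reply = resp["choices"][0]["message"]["content"]
#   history += [payload["messages"][-1],
#               {"role": "assistant", "content": reply}]
print(len(payload["messages"]))  # system turn + user turn
```

Projects like the magma-chat repo linked above wrap this loop in a web UI, but the request shape is the same.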
Could you please describe how one "just" does that?
I have been on the GPT-4 API waitlist since it was announced and I still don't have access to the GPT-4 API.
Yep, I use the paid API, and it's a lot more flexible than ChatGPT. I didn't know about the self-hosted interface though: that will be my project for tomorrow morning, thanks!
I’ve been on the GPT-4 waitlist for about 6 weeks, but I’m not sure what the typical wait is.
Magma wants me to use my google credentials to login. I’ll pass on that, it shouldn’t be required in anything self hosted which makes me distrust it a bit right off the bat.
There's a bit. When I used the OpenAI playground I received warnings about responses potentially being bad, but using the API directly I don't even get warnings like that.
Testing things out, it will produce vile and hateful content on demand. However, it won't say just anything: if I specifically tell it to use certain words, I get the same type of content warning plus an extra note about those words, saying I have to contact OpenAI support if my use case truly requires their use.
There's also the fact that using or distributing such content is against the TOS, so I suppose they could simply ban you.
Not exactly. The "censorship" is the RLHF tuning in the chat models as far as I understand it; the API for the chat models is the same AFAIK. The base models don't have censorship, but there are no base models available for API access for GPT-4, only a chat model. You can use the GPT-3-era base models, but, well, they're not as good as GPT-4.
Or you go to Microsoft Azure and use the GPT-3.5 base model: code-davinci-002.
Though they could still use "observational" censorship there, i.e. analyze your prompt with a different model. OpenAI does that, not sure about Microsoft.
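The "observational" check described here is roughly what OpenAI's separate moderation endpoint does: your text goes through a classifier model that returns per-category flags. A sketch of screening a prompt first; the decision logic is real, but the fetch is stubbed (in practice it would be `openai.Moderation.create` in the pre-1.0 SDK).

```python
# Screening a prompt with a separate moderation pass before sending it
# to a completion model -- the "analyze your prompt with a different
# model" pattern. The moderation fetch is stubbed so this runs offline.

def should_block(moderation_result: dict) -> bool:
    """The moderation API returns per-category booleans plus an overall
    'flagged' flag; block if anything was flagged."""
    return bool(moderation_result.get("flagged"))

def moderate(text: str) -> dict:
    # In practice: openai.Moderation.create(input=text)["results"][0]
    # (pre-1.0 SDK). Stubbed here with a trivially clean result.
    return {"flagged": False, "categories": {"hate": False, "violence": False}}

prompt = "Explain how DNS resolution works."
if should_block(moderate(prompt)):
    print("prompt rejected by moderation")
else:
    print("prompt passed moderation")
```

Whether Azure runs an equivalent pass on your traffic is, as the comment says, an open question.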
> I'll also point out that paid API access to 3.5 (davinci-03) is frequently better than ChatGPT's use of 3.5. You get far fewer restrictions, and none of the "aw shucks, I'm just a little ol' LLM and so I couldn't possibly answer that".
Little correction: 3.5 is not davinci. davinci is 3.0; gpt-3.5-turbo (ChatGPT) is a davinci variant that has been tuned and adjusted for chatting and conversation, including all those restrictions. It is much faster than davinci and way cheaper, but as you know, the results are… OK.
davinci (3.0) is more untuned, slower, more expensive to use, and not conversational, but it can yield much better quality.
> Little correction: 3.5 is not davinci. davinci is 3.0; gpt-3.5-turbo (ChatGPT) is a davinci variant that has been tuned and adjusted for chatting and conversation, including all those restrictions.
Little correction of the correction. The base models are:
davinci = GPT-3
code-davinci-002 = GPT-3.5
They only do text completion and do not natively respond to instructions. There are also instruction-tuned versions of the latter, e.g. text-davinci-003 and gpt-3.5-turbo-0301 (used in ChatGPT).
Note that code-davinci-002 is no longer available via the OpenAI API, but it is still on Azure. The GPT-4 base model is generally unavailable. Too powerful perhaps.
Turbo and davinci should be equally non-conversational. When you use ChatGPT, it also has InstructGPT on top of turbo, which is what makes it conversational, together with RLHF.
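To make the base-vs-chat distinction above concrete: a base model like davinci or code-davinci-002 gets one raw string to continue (so few-shot examples live inside the prompt text), while the chat models take a structured message list. A sketch of the two request shapes, using the model names from this thread and the usual parameters:

```python
# Two request shapes: raw completion for a base model vs. the
# structured chat format. A base model just continues the text, so
# few-shot examples are baked into the prompt string itself.

def base_model_request(question: str) -> dict:
    few_shot = (
        "Q: What is 2 + 2?\nA: 4\n"
        f"Q: {question}\nA:"
    )
    return {"model": "code-davinci-002", "prompt": few_shot, "max_tokens": 64}

def chat_model_request(question: str) -> dict:
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "Answer concisely."},
            {"role": "user", "content": question},
        ],
    }

print(base_model_request("What is 3 + 5?")["prompt"])
```

The instruction tuning and RLHF are what let the second shape work at all; feed the base model a bare question with no examples and it may just keep writing more questions.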
I’m pasting my response to someone else who made the same kind offer:
Thank you for the offer, but I'm extremely conscious of avoiding a direct link from my comments here to who I am.
Maybe it's a bit too paranoid, I don't know, but I've also been open here about my workplace experiences, in a way that my HR dept among others might not quite appreciate if someone who knew my IRL connection to this account decided to comb through my comments. Maybe I should set up a separate HN account connected to me professionally for that sort of thing.
Also, my use case for GPT-4 is data analysis. Using the paid "plus" version shows a lot of promise for quickly bootstrapping data exploration and consumption as a jumping-off point for more detailed digging. Via the chat interface it can ingest very small aggregate datasets and spit out observations that only my boss and I have the domain expertise to produce in my organization. But the chat interface is highly limited and often truncates even small (faked but plausible) datasets, so I really want API access, because the real work involves sensitive info I couldn't put into the chat site or responsibly share with someone outside my org.
But really, thanks for the offer. What are you working on with it?