> Aside from the minuscule context length, it also lacks the instruction tuning and reinforcement learning from human feedback (RLHF) that turn a large language model into a chatbot.
Strictly necessary? Maybe not. I wrote that before URIAL [1][2]. I haven't actually tried URIAL with GPT-2 small, but I need to give it a whirl. Might be too small a model for it to work?
Even if URIAL works with GPT-2 small, the very short context length in the Excel file as currently implemented will make it hard to leverage. I've considered a more flexible implementation that supports a longer context length (e.g. using macros to build the layout of the sheet), but have prioritized the teaching videos first.
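For anyone who wants to try it outside the spreadsheet first, here's roughly the shape of a URIAL-style run on GPT-2 small. This is just a sketch using the Hugging Face transformers library rather than the Excel implementation, and the few-shot prefix is a stand-in for the actual (much longer) prompt template from the URIAL paper:

```python
# Sketch of URIAL-style in-context alignment on GPT-2 small, using the
# Hugging Face transformers library rather than the spreadsheet.
# The few-shot prefix below is a placeholder, not the real URIAL template.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# URIAL's whole idea: no fine-tuning, no RLHF, just a stylistic preamble and
# a few example exchanges prepended to the base model's prompt.
urial_prefix = (
    "Below is a conversation between a curious user and a helpful, honest assistant.\n\n"
    "# User:\nWhat is the capital of France?\n"
    "# Assistant:\nThe capital of France is Paris.\n\n"
)
prompt = urial_prefix + "# User:\nWhy is the sky blue?\n# Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
# Print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

With GPT-2's 1024-token window there isn't much room left once the prefix is in place, which is the same constraint the spreadsheet runs into.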
By default it's just going to be a text-completion model; you need an additional round of training to make it behave like a chatbot. You could probably get away with just fine-tuning on chatbot conversations, but everybody uses RLHF, so presumably it's much more efficient for that.
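For what it's worth, the "just fine-tune on chatbot conversations" route is a plain supervised causal-LM pass over chat-formatted transcripts, something like the sketch below (Hugging Face transformers Trainer; the toy dialogues and hyperparameters are placeholders, not a recipe):

```python
# Rough sketch of supervised fine-tuning GPT-2 on chat-formatted text, no RLHF.
# The dialogues and hyperparameters here are toy placeholders.
from torch.utils.data import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for a real dialogue corpus, already flattened to plain text.
chats = [
    "User: How do I reverse a list in Python?\nAssistant: Use reversed() or slicing with [::-1].",
    "User: What's the capital of Japan?\nAssistant: Tokyo.",
]

class ChatDataset(Dataset):
    def __init__(self, texts):
        self.encodings = [
            tokenizer(t + tokenizer.eos_token, truncation=True, max_length=512,
                      padding="max_length", return_tensors="pt")
            for t in texts
        ]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, i):
        ids = self.encodings[i]["input_ids"].squeeze(0)
        mask = self.encodings[i]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100  # don't compute loss on padding
        # Standard causal-LM objective: predict each next token in the chat.
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

args = TrainingArguments(
    output_dir="gpt2-chat-sft",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    logging_steps=1,
)

Trainer(model=model, args=args, train_dataset=ChatDataset(chats)).train()
```

In the usual pipeline, RLHF comes on top of a supervised pass like this rather than replacing it.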
Is RLHF even strictly necessary?