Ah, I see, one interesting part is how the LLM generates a summary of the shown "cards" and weaves a narrative together. Interesting and unconventional use of an LLM.
..
It seems to me that the only thing that "works" about Tarot cards is that they allow the "user" to generate a semi-prompted narrative via symbolically rich-enough images and concepts. The rest is either hopeful thinking, a reflection of one's emotional state, rationalization, or some combo thereof. This makes me wonder: do people like having every aspect of a tarot "reading" laid out for them? Or do they get more out of their own interpretation?
Waite's descriptions strike me as densely arcane, potent-sounding symbolism that can be used suggestively to try and provoke some kind of experience or insight in the querent. That's my take.
Personal and dictated readings are very different experiences. What you describe is the self-help version of tarot. Maybe the most common, but not exhaustive of the medium. Practitioners know/intuit that the typical major and minor arcana layouts (cards that reveal mutable or immutable Fate) are incredibly fleshed out in archetypal terms. Using the symbols and suits, you can define not just A personal narrative, but ANY personal narrative. Both the major arcana and each of the four suits of the minor arcana capture progressions of conscious thought that arguably encompass every conceivable Platonic Form. This is a phenomenal achievement when examined in good faith.
You might say it is impractical even if impressive. Still, I have been very surprised to see the principle that is supposed to make the tarot work (the hermetic idea of "As above, so below", or "the microcosm can only reflect the macrocosm") pop up in Karl Friston's FEP work. What esotericists describe as 'higher and lower powers' seems to be explicitly captured by ongoing work involving Markov blankets. I am not technically minded enough to get the math, but that's what I take from these interviews -
Default Stable Diffusion XL 1.0 via the HuggingFace Diffusers API, prompted with something along the lines of "Digital illustration of a cyborg llama tarot card reader for an app store icon", then copied & resized via ImageMagick convert.
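(The resize step can equally be done in Python with Pillow instead of ImageMagick convert; this is a minimal sketch, not the author's actual command, and the placeholder image just stands in for the SDXL output, which I'm assuming is a 1024x1024 PNG.)

```python
from PIL import Image

# Placeholder standing in for the generated SDXL art (assumption: 1024x1024).
art = Image.new("RGB", (1024, 1024), "purple")

# Downscale to a typical app-store icon size with a high-quality filter.
icon = art.resize((512, 512), Image.LANCZOS)
icon.save("icon_512.png")
```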
The marketing copy for the app store was generated by the same model (vicuna v1.5 7b) that is used in the app itself by hardcoding the prompt inline and running it locally.
I noticed that the app is listed as being ~3 GB in size and the Vicuna 7B model is ~13 GB in size. What did you do to compress it? Same for memory... I think it needs ~30 GB? And same for CUDA or GPU support... How does that work, or is it just running on the CPU?
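For context on the arithmetic behind this question (not a confirmed account of what the app actually does): weight storage scales with parameter count times bits per weight, so a 7B-parameter model at 16-bit precision is roughly 14 GB, while 4-bit quantization, a common technique for shipping local LLMs, brings it to roughly 3.5 GB, which is in the ballpark of the listed app size. A sketch:

```python
def model_size_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# A 7B-parameter model at different precisions:
fp16_gb = model_size_gb(7e9, 16)  # 14.0 GB
q4_gb = model_size_gb(7e9, 4)     # 3.5 GB
```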
My own convenience, so I don't have to maintain a third-party API subscription, and the user's privacy: maybe someone wants to ask it a personal question that they don't want transmitted to a third party, used to train other LLMs, etc.