I came across a nice little solution in Copilot X and Copilot 365/Pro for including and referencing custom instructions to add behavior/formatting support to Copilot (e.g. edit mode, article writing mode, documentation writing mode, etc.).
I feel like they explicitly go out of their way to avoid building in custom instructions/increased prompt context to lower their costs, so this is a bit of a workaround.
Copilot scans open files, so I mention the file name in my prompt, or reference the file explicitly using the reference-file option in the chat window in the IntelliJ Copilot chat beta.
```
Follow the instructions for preparing responses from the COPILOT.md file in how you answer and format your answer to the following prompt.
```
Square pixel 360×240 was sane, but 360×480 always felt dirty.
IIRC, Mode X video mode set routines boiled down to an exhaustive table of VGA register control values. (See SDL or older FreeBSD for examples.) Then, the fun was pixel addressing, bit blitting, and page flipping.
How is that fundamentally different from how our brain chains together thoughts when not actively engaged in meta-thinking? Especially once chain-of-thought prompting etc. is applied.
You can tell GPT to output sentiment analysis and a reading of user intent: what it believes the user's underlying goal is. It becomes less responsive if it finds the user not to be friendly, or perceives them as trying to achieve a restricted outcome.
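A hypothetical version of such an instruction (illustrative wording, not quoted from any product or paper):

```
Before answering, first output two lines:
Sentiment: positive | neutral | negative
Inferred goal: one sentence on what you believe the user is ultimately trying to achieve
Then answer the user's prompt normally.
```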
I was really blown away by how well Erlang/Chicago Boss handled throughput when I first gave it a spin. A webpage I converted over handled multiple orders of magnitude more incoming requests before failing than the PHP version it replaced.
CB would get frustrating when I wanted to extend certain functionality, however, which Phoenix isn't quite as bad about.
Yes.
I handle around a million requests per minute. I exponentially increase the cache period after subsequent misses so that an outage can't DDoS the whole system.
This tends to be beneficial regardless of the root cause.
edit: this is especially useful for handling search/query misses, as a query with no results will scan any relevant indexes etc. until it is clear no match exists, meaning a no-results query may take up more cycles than a hit.
It's remarkable the effect even short-TTL caching can have given enough traffic. I recall once caching a value that was being accessed on every page load with a TTL of 1s, resulting in a >99% reduction in query volume, and that's nowhere near Facebook/internet-backbone scale.
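The arithmetic behind that is simple (the traffic number below is an assumption for illustration, not from the anecdote): with a 1s TTL the backend sees at most one refresh per second per key, however many requests arrive in that window.

```python
# Hypothetical numbers: one backend refresh per 1s TTL window, regardless
# of how many page loads hit the cached value in that same second.
requests_per_second = 500
backend_queries_per_second = 1
reduction = 1 - backend_queries_per_second / requests_per_second
print(f"query volume reduced by {reduction:.1%}")
```

So any sustained traffic above ~100 requests/sec already clears the >99% mark.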
yep, pre-priming the cache rather than passively allowing it to be rebuilt by requests/queries can also result in some nice improvements and, depending on replication delay across database servers, avoid some unexpected query results reaching the end user.
In the past I was the architect of a top-2000 Alexa-ranked social networking site; data synchronization delays were insane under certain load patterns: high single to low double digit second write propagation delays.
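The pre-priming idea above can be sketched as a background job (names and cache shape are hypothetical, matching the negative-cache sketch's `(value, expires_at)` tuples):

```python
import time

def preprime_cache(cache, hot_keys, fetch, ttl):
    """Rebuild hot entries proactively (e.g. from a cron/deploy hook)
    so no user request pays the rebuild cost after expiry. Pointing
    `fetch` at one consistent source also avoids handing users results
    that differ between lagging replicas."""
    now = time.time()
    for key in hot_keys:
        cache[key] = (fetch(key), now + ttl)
```

Run it slightly before each TTL window ends and the first request after expiry never sees a cold cache.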