Cloud-cutting is an interesting journey, even if only done temporarily.
Some offline alternatives:
1. Penpot for UI design
2. Llama for Stack Overflow (from this thread, yet to try it myself)
3. Python has devpi to cache packages. You might find something similar for npm.
I also found VS Code plugins are mostly offline. A NUC-style box with Docker plus a GL.iNet router may help if you are not alone on this journey.
It might not align with current trends, but I would like to make cloud-cutting the norm instead of the exception.
Similar to supervised and unsupervised learning, one can see dual paths on this journey. One path answers the questions that have been on the user's mind. The other explores unasked ones to find new insights.
Just slightly more difficult than punching "give me javascript that returns fictional but plausible motion sensor data to a website that asks for it, to spoof that we are on a device that actually has a motion sensor" into ChatGPT.
From the text, it feels like the user agent should be capable of this: it should let the site request sensor data and randomly reject certain requests after X (random) seconds, while providing fake data in the other cases.
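A minimal sketch of that behaviour, assuming a hypothetical `fakeMotionSample()` generator and a promise-based request wrapper (the function names, noise parameters, and rejection odds are all illustrative, not a real browser API):

```javascript
// Generate one plausible accelerometer reading: gravity dominating one
// axis plus small hand-tremor noise, like a phone resting in a hand.
function fakeMotionSample(t) {
  const tremor = () => (Math.random() - 0.5) * 0.3; // small jitter, m/s^2
  return {
    x: 0.1 * Math.sin(t / 900) + tremor(),
    y: 9.81 + tremor(), // gravity
    z: 0.2 * Math.cos(t / 1300) + tremor(),
    interval: 16,       // ~60 Hz, a typical sensor event rate
  };
}

// Serve fake data, but reject a random fraction of requests after a
// random delay of a few seconds, as suggested above.
function requestMotion({ rejectChance = 0.1 } = {}) {
  return new Promise((resolve, reject) => {
    if (Math.random() < rejectChance) {
      const delay = 1000 + Math.random() * 4000; // 1-5 s
      setTimeout(() => reject(new Error("sensor unavailable")), delay);
    } else {
      setTimeout(() => resolve(fakeMotionSample(Date.now())), 16);
    }
  });
}
```

The point of the delayed, occasional rejection is that a fake sensor which always answers instantly and perfectly is itself a tell.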
Half of the point of these scripts is to detect headless browsers. Most of them are fairly obvious, and even when they’re hidden, it’s things like what the article mentions that give them away. For example, headless browsers can’t respond to permission requests, so they’ll likely immediately accept or reject the request for motion data.
I understand that Akamai’s new bot manager does more than just grab telemetry data.
It’s more like a captcha for browsers, i.e. if the user is using a real browser it should behave in a way that pre-scripted bots can’t easily replicate. The payload is auto-injected by Akamai so the expected behaviour can be altered in a non-deterministic way.
Just record a couple hours of phone-usage IMU data and then feed it to them with random segments spliced together, and they won't be any wiser. Or just rock the phone in its cradle. There are companies with robots tapping the screens; adding some rocking movement is not going to be that hard.
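The splice-and-replay idea could be sketched like this, assuming the recording is just an array of `{x, y, z}` samples (the data shape and function name are assumptions for illustration):

```javascript
// Cut a recorded IMU trace into random segments and stitch them back
// together, producing a stream that is real data but in a novel order.
function spliceRecording(recording, segmentLen, outputLen) {
  const out = [];
  while (out.length < outputLen) {
    // Pick a random segment start so the segment fits in the recording
    const start = Math.floor(Math.random() * (recording.length - segmentLen));
    out.push(...recording.slice(start, start + segmentLen));
  }
  return out.slice(0, outputLen);
}
```

Because every sample really did come from a phone, per-sample statistics look authentic; only the segment boundaries are synthetic.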
I tried to change config.py to use the ggml setting, but I did not see any request going to the local llama-cpp.server. It keeps asking for an OpenAI key. The local llama-cpp.server is up, and I was able to make Swagger calls for completion.
Assuming that maybe I missed something, I deleted ~/.continue and tried starting from scratch (VS Code message: Starting Continue Server ...). I do see FastAPI is up on http://localhost:65432/, yet the VS Code dialog shows the same message. Where can I see the logs of what it's trying to do? What am I missing?
Then we went for Tetris. I helped her get the initial loop running where you press a key and something happens, then she implemented the game logic by herself. I did ask her to explain the architecture to me in broad strokes prior to development, and gave her some feedback, so the architecture was adjusted a bit.
The first version was built in a very direct way, so once it was done I told her about object-oriented programming and she rewrote Tetris using objects in a more elegant way.
Then she moved on to a Snake game and mostly did it by herself, reusing parts of the Tetris code (the main game loop, mostly); I mostly provided feedback and beta testing. We had some interesting moments debugging the wave algorithm that computer-controlled snakes used to find a target, and improving performance. Then we moved on to this LZW thing and, unfortunately, its creators immediately started dying.
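For reference, the "wave" algorithm here is just a BFS flood fill from the target; a minimal sketch, assuming a 0/1 grid where 1 is a wall (the grid shape and function names are my own, not from her code):

```javascript
// Expand a wavefront from the target cell; dist[y][x] ends up holding
// the number of steps from that cell to the target (-1 if unreachable).
function waveDistances(grid, target) {
  const h = grid.length, w = grid[0].length;
  const dist = grid.map(row => row.map(() => -1));
  dist[target.y][target.x] = 0;
  const queue = [target];
  while (queue.length) {
    const { x, y } = queue.shift();
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
          grid[ny][nx] === 0 && dist[ny][nx] === -1) {
        dist[ny][nx] = dist[y][x] + 1;
        queue.push({ x: nx, y: ny });
      }
    }
  }
  return dist;
}

// The snake then simply steps to any neighbour with a smaller distance.
function nextStep(dist, head) {
  for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
    const nx = head.x + dx, ny = head.y + dy;
    if (dist[ny] !== undefined && dist[ny][nx] !== undefined &&
        dist[ny][nx] !== -1 && dist[ny][nx] < dist[head.y][head.x]) {
      return { x: nx, y: ny };
    }
  }
  return null; // target unreachable
}
```

The usual performance gotcha (and a likely source of those debugging moments) is `queue.shift()` being O(n) on a plain array; a ring buffer or an index pointer fixes it.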
The reason I chose these particular tasks is that they are all relatively limited in scope, so a study task never grows so large that you can't finish it, and you actually have a chance to ship a finished product to show to friends/teachers.
I followed this teaching path: Scratch > Python > SQLite > JavaScript (web).
I am missing algorithms and OOP, and will look for gradual learning resources in these areas.
One recent area I tried was observability. With Prometheus and Grafana, they got into collecting more data points and creating visualizations for them. Bangle.js is another one.
And it seems to me, at least factoring in the upcoming projects, that it covers different types of work, not merely progressive difficulty. So she'll get a taste of games, a taste of systems, a taste of web UI, a taste of embedded, etc. It sounds great.
I remember being on a long haul with a colleague and playing a trivia game on the in-flight entertainment system with other flyers. It was fun to hear the roar of victory from another passenger.