"Extreme programming" methodology said you should not do TDD if you don't already know how to implement the code. In that case you should instead experiment until you know, and then throw away the experiments and write the code test-first.
Maybe it should be done that way with AI: experiment with AI if you need to, then write a plan with AI, then let the AI do the implementation.
Any decent battery system measures the current that goes into the battery and the current that goes out. Off-the-shelf ICs "learn" the battery's initial capacity and its state-of-charge-to-voltage curve, and from then on can observe degradation relative to those initial measurements, as well as report fairly accurately how much energy is in the battery at any given moment.
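The core of it is just coulomb counting. A toy sketch of the idea (made-up numbers, not any particular gauge IC's algorithm):

# Toy coulomb counter: integrate measured current over time to track charge.
NOMINAL_CAPACITY_MAH = 3000.0              # learned/assumed full capacity

def step(charge_mah, current_ma, dt_s)
  # current_ma > 0 while charging, < 0 while discharging
  charge_mah + current_ma * dt_s / 3600.0  # mA * s -> mAh
end

charge = NOMINAL_CAPACITY_MAH              # start from a known-full battery
charge = step(charge, -500.0, 1.0)         # e.g. a 500 mA load for one second
soc = charge / NOMINAL_CAPACITY_MAH        # state of charge, 0.0..1.0
puts format("SoC: %.1f%%", soc * 100)

Real gauge ICs layer temperature compensation and capacity re-learning on top of this, which is roughly how they track degradation over time.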
They only want ASCII tablature parsing because that's what ChatGPT produces. If ChatGPT produced standard music notation, users would not care about ASCII tablature. ChatGPT has created this "market".
ASCII tablature was not invented by ChatGPT; it is a decades-old thing. It is easier to write with basic computer capabilities, and also easier for ChatGPT (and for humans with no formal music education) to read, so it is probably even more prevalent on the Internet than standard graphical notation. So it is quite expected that LLMs have learned a lot of it.
> It is possible to get interactive SSH access, but this access is limited. It is not possible to have interactive access via port 22, but it is possible via port 23. There is no full shell. For example, it is not possible to use pipes or redirects. It is also not possible to execute uploaded scripts.
Can you easily chain these, though? (gzcat some.txt|grep foo|sort -u|head -10 etc?). Especially lazily, if the uncompressed stream is of modest size, like a couple of gigabytes?
I'm not sure what you mean by lazily here, but internally[0] it creates real anonymous pipes[1] between the spawned processes, so the data does not go through the ruby process at all.
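For comparison, the stdlib can wire up that exact kind of chain with real pipes too, via Open3.pipeline_r (not the gem's own API; commands lifted from your example):

require "open3"

# The OS connects the pipes; Ruby only reads the tail end of the chain.
Open3.pipeline_r(
  ["gzcat", "some.txt"],
  ["grep", "foo"],
  ["sort", "-u"],
  ["head", "-10"]
) do |last_stdout, wait_threads|
  last_stdout.each_line { |line| puts line }  # consumes only what head emits
  wait_threads.each(&:value)                  # reap the child processes
end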
I'm currently working with 150MB worth of gzipped JSON - marshalling the full file from JSON to a ruby hash eats up a lot of memory. One tweak that allows for easier lazy iteration over the file (while keeping temporary disk IO reasonable) is to pipe it through zcat, then jq in stream mode to convert to ndjson, then gzip again - producing a temp file that ruby's zlib can wrap as a stream convenient for lazy iteration via each_line.
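Roughly that temp-file step, driven from Ruby with the stdlib (a sketch; the jq filter is the usual stream-mode incantation for exploding a single top-level array, and the filenames are made up):

require "open3"

# zcat | jq --stream | gzip, with the last command's stdout going to a temp file
Open3.pipeline(
  ["zcat", "big.json.gz"],
  ["jq", "-cn", "--stream", "fromstream(1|truncate_stream(inputs))"],
  ["gzip"],
  out: "big.ndjson.gz"
)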
Generally marshalling a gig or more of JSON (non-lazily) takes a lot of resources in ruby.
Some do, some don't. JSON is a special case, as a valid JSON file needs to be a single array or object literal - event-driven (SAX-style) parsing ends up being a hack (like jq stream mode). In theory json_streamer or yajl should help, but I couldn't get a combination to return a proper lazy iterator.
With the file as ndjson it was easier, if a little sparsely documented (Zlib::GzipReader.new or #wrap?):
require "zlib"
require "json"

my_it = Zlib::GzipReader.wrap(some_ndfile)
obs = my_it.each_line.lazy.map do |line|
  JSON.parse(line)
end.first(4)
When we can get a line at a time, marshalling each individual line isn't an issue.
My issue is more that it is tricky to nest ruby IO objects and return a lazy iterator - especially when nesting custom filters along the way - at least trickier than it should be.
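For illustration, the kind of nesting I mean (just a sketch, with a made-up filter and filename in the middle):

require "zlib"
require "json"

# process output -> GzipReader -> lazy enumerator, with a custom filter inline
IO.popen(["cat", "big.ndjson.gz"]) do |pipe|
  Zlib::GzipReader.wrap(pipe) do |gz|
    rows = gz.each_line.lazy
             .reject { |line| line.strip.empty? }  # custom filter, still lazy
             .map    { |line| JSON.parse(line) }
    p rows.first(4)                                # only pulls what it needs
  end
end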
Apparently there's a third-party framework that seems promising:
Didn't realize that! That's one snippet I can maybe eliminate now. (As to why I didn't know: the first thing in the RDoc for Kernel#system is still "see the docs for Kernel#spawn for options" — and then Kernel#spawn doesn't actually have that one, because it doesn't block until the process quits, and so returns you a pid, not a Process::Status. I stopped looking at the docs for Kernel#system itself a long time ago, just jumping directly to Kernel#spawn...)
But come to think of it, if Kernel#system is just doing a blocking version of Kernel#spawn → Process#wait, then shouldn't Process#wait also take an exception: kwarg now?
And also-also, sadly IO.popen doesn't take this kwarg. (And IO.popen is what I'm actually using most of the time. The system! function above is greatly simplified from the version of the snippet I actually use these days — which involves a DSL for hierarchical serial task execution that logs steps with nesting, and reflects command output from an isolated PTY.)
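For reference, the difference in behavior (Ruby 2.6+; false/true here are just stand-in commands):

# system with exception: true raises instead of returning false/nil
begin
  system("false", exception: true)
rescue RuntimeError => e
  puts e.message                   # e.g. "Command failed with exit 1: false"
end

# hand-rolled equivalent via spawn + wait, since Process.wait has no exception: kwarg
pid = Process.spawn("true")
_pid, status = Process.wait2(pid)
raise "command failed: #{status.inspect}" unless status.success?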
> How do you "prove" that other people are conscious?
For sentience, scientists mainly look at behavioral cues:
> For example, "if a dog with an injured paw whimpers, licks the wound, limps, lowers pressure on the paw while walking, learns to avoid the place where the injury happened and seeks out analgesics when offered, we have reasonable grounds to assume that the dog is indeed experiencing something unpleasant." Avoiding painful stimuli unless the reward is significant can also provide evidence that pain avoidance is not merely an unconscious reflex (similarly to how humans "can choose to press a hot door handle to escape a burning building").
Exactly. All of that is reasonable, and the described behaviors are obviously present, as anyone who's ever had a dog can tell you. So I don't understand why "are animals conscious" is being debated at this point.
I’m not saying that this is the case, but all the mentioned behaviors are only indicators and could also be reflexive actions which the dog is genetically programmed to do because they work. If a beetle is flipped, it also has a “program” to get upright again, but that doesn’t mean it’s aware of its situation and is actively deciding something. I’m pretty sure dogs are conscious, but you can’t really tell from the outside. LLMs also appear to reason and make arguments but I wouldn’t call them conscious.
You’re right, you also can’t tell for other people. You can make an assumption because they are very similar to you and you yourself appear to be conscious to yourself. But you can’t really disprove solipsism as far as I know.