In the slowed-down version the operations looked fully sequential; I think they could achieve a shorter time by overlapping some operations, and potentially with edge-cutting too.
In the slow-motion footage of the faster Mitsubishi robot that was shared, you can see it's doing some operations in parallel (but no edge-cutting).
The profusion of jaded nerds, although saddening at times, seems to be pushing science forward. I have a feeling that a prolonged sense of awe can hinder progress at times, and the lack of it is usually a sign of a group's adaptability (how quickly new developments get normalized).
Reminds me of the rabbit-hole sessions I used to fall into on https://wiki.c2.com/ (a merge of the two interfaces, the chat and the contextual rabbit-hole window, would be perfect for me).
For me, the snappy interface that makes it so easy to hop from one link to another reminds me of hours and hours spent browsing https://everything2.com in the earlier years of the Internet.
I'm wondering the same, but I also wonder if the situation off the coast of Yemen and Iran's recent response to Israel's bombing of their embassy have made the conflict partially international.
The conflict cannot be deemed non-international simply because Palestine's recognition is blocked by the US on Israel's behalf.
Nor can it be deemed non-international due to the supposed vagueness of Israel's borders: Israel has internationally recognized legal borders (the Green Line) and is acting outside them.
In fact Palestine’s recognition is not blocked by the US. What is blocked by the US is Palestine becoming a full member of the UN.
The two things are different. Switzerland did not join the UN until 2002. I’m sure that we can all agree that Switzerland was recognized as a state prior to 2002.
Becoming a full member of the UN is a sufficient but not necessary condition for recognition. The other way is simply to get as many other states as possible to recognize you.
Arguably Palestine’s recognition by the UN General Assembly is also sufficient.
He meant something more meta, I believe. Knowing you are a monkey is one thing; knowing that you know you are a monkey is another thing entirely. It's about being cognisant of the fact that there is something called knowledge and that you have it.
Precisely. To put it more concretely: it is no small feat to grasp the abstract distinction between known-knowns, known-unknowns, unknown-knowns, and unknown-unknowns. They do not know what they do not know.
That's really cool. Do you think this might be the basis for natural-language navigation? (When going through a document, instead of having to search by keyword or regex, one could search for more complex concepts in plain English.)
If not, what extra work is needed to bring it to that level?
I think you could get a pretty good solution for that using RAG plus some prompt-engineering tricks and semantic chunking. With Google's very-long-context models (Gemini) you may also get good results from prompt engineering alone. Preprocessing steps like asking the LLM to summarise the themes of each section can help too (in RAG, this info would go in the metadata stored with each chunk and be presented to the LLM alongside that chunk).
A key engineering challenge will be speed: when you're navigating a document, you want a fast response time.
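To make the chunk-plus-metadata retrieval idea concrete, here's a minimal sketch. Everything in it is illustrative: the `embed()` function is a bag-of-words stand-in for a real embedding model, and the section texts and "themes" metadata are made-up placeholders for what an LLM summarisation pass would produce.

```python
# Sketch of RAG-style natural-language navigation over document sections.
# NOTE: embed() is a toy bag-of-words stand-in for a real embedding model;
# the metadata strings stand in for LLM-generated theme summaries.
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": word-count vector (a real system would call an
    # embedding model here).
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(sections):
    # Each chunk stores its text plus metadata; both contribute to the
    # vector so theme summaries improve retrieval.
    return [
        {"text": text, "metadata": meta, "vec": embed(text + " " + meta)}
        for text, meta in sections
    ]

def navigate(index, query, top_k=1):
    # Rank chunks by similarity to the query and return the best matches.
    qv = embed(query)
    ranked = sorted(index, key=lambda c: cosine(qv, c["vec"]), reverse=True)
    return ranked[:top_k]

# Hypothetical document sections with LLM-style theme metadata.
sections = [
    ("The robot rotates the cube faces in parallel to save time.",
     "themes: robotics, parallelism"),
    ("Licensing of training data remains contested in court.",
     "themes: law, AI training"),
]
index = build_index(sections)
hit = navigate(index, "training data law")[0]
print(hit["metadata"])  # the legal section ranks first for this query
```

In a real system the per-chunk vectors would be precomputed and stored in a vector index, so only the query embedding happens at navigation time, which is what keeps response latency low.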
Side related question: are there content licenses coming up that are similar in spirit to what the GPL is but targeted at AI training? (E.g. if this piece of content was used in training an AI that was to be used commercially, the AI's weights must be published)
The argument AI companies make is that LLMs are not derivative works of their inputs, or that training on them is fair use. So, according to them, the input's license does not matter.