I'm working on a JSON schema discovery tool, JSONoid[0]. JSONoid can discover many more JSON Schema features than existing tools, such as regular expression patterns, formats, and dependencies. I'm also working on integrating this with some past work I've done on using LLMs to augment JSON Schemas[1,2].
There are a number of use cases for such a tool. One is for helping data analysts who are handed a pile of JSON documents to be able to more quickly and effectively craft analytics pipelines for heterogeneous data where just inspecting a few documents isn't sufficient. Another is to help automate API specification generation and regression testing. Definitely interested in any feedback.
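To make the kind of discovery described above concrete, here's a toy Python sketch of mining formats and patterns from example documents. This is not JSONoid's actual algorithm or API; the documents and heuristics are invented purely for illustration.

```python
import re

# Example documents to mine (invented for illustration).
docs = [
    {"id": "a1b2", "email": "x@example.com", "created": "2024-01-05"},
    {"id": "c3d4", "email": "y@example.com", "created": "2024-02-17"},
]

def discover(docs):
    """Toy schema discovery: infer a format or pattern for each string key."""
    schema = {"type": "object", "properties": {}}
    for key in docs[0]:
        values = [d[key] for d in docs]
        prop = {"type": "string"}
        # Format detection: do all values match a known format?
        if all(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v) for v in values):
            prop["format"] = "date"
        elif all("@" in v for v in values):
            prop["format"] = "email"
        # Pattern discovery: e.g. all values are short hex strings.
        elif all(re.fullmatch(r"[0-9a-f]{4}", v) for v in values):
            prop["pattern"] = "^[0-9a-f]{4}$"
        schema["properties"][key] = prop
    return schema

print(discover(docs))
```

A real tool generalizes this far beyond two heuristics, but the shape is the same: aggregate per-key statistics over many documents, then emit the strongest schema constraints that all observed values satisfy.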
I don't know that it's all that meaningful to discuss the component library as if it were its own UI framework. None of the other Rust UI frameworks have distinct component libraries with distinct usage data either.
> No hub required - TOMMY runs as a Home Assistant add-on or on a Linux host (Docker) and uses supporting devices to create a sensing network.
I don't see why either the HA instance or the Linux host shouldn't be viewed as a hub, so "no hub required" feels untrue to me. I assume the claim is intended to clarify that no additional device is necessary, but I think it could be reworded.
You're absolutely right. Thanks for the feedback! "No hub required" is misleading. What I meant was "no additional proprietary hub". Meaning, if you're already running Home Assistant or have a Linux machine, you don't need to own a separate device like a Zigbee/Z-Wave hub. But yes, the HA instance or Linux host is effectively acting as the hub. I'll update that wording on the site.
I figured that's what you meant and I think it's totally reasonable! I just think the wording could be updated a little. I have a couple ESP32s lying around not doing anything, so I'm looking forward to trying out TOMMY with HA :)
OTOH, when you buy any Bosch, IKEA, Hue, or Aqara device, it says on the box: hub required (and they do mean: get our hub and place it next to all the other hubs, even though Home Assistant will usually work fine).
So I see where he’s coming from, and I interpreted it as intended.
You are of course correct, but in the HA community "no hub required" should often be read as "no additional hub required, because HA can communicate with it directly".
I'm curious if you do anything to control how the code evolves over time. Test suites are often incomplete and it's possible that behavior that is not fully specified may be unintentionally relied on.
If Specific regenerates code from the spec each time (which I'm not sure it does), there's the potential for different code each time, even for parts of the spec that haven't changed. That seems like a nightmare for maintainability and debuggability.
> If Specific regenerates code from the spec each time (which I'm not sure it does), there's the potential for different code each time even for parts of the spec that haven't changed
It doesn't. When the spec changes, the coding agent takes the diff and turns it into the equivalent change to the codebase. The tests also run each time so that the coding agent doesn't cause a regression as part of this. Although, as you say, test suites are often incomplete. We are aiming to make it easier to build complete test suites in Specific than in regular code, though, because they are part of the spec and the agent can help you write them as well.
We haven't done much yet in this area, but I'm quite excited about how to evolve the codebase over time. I think we have an advantage in that a system evolving also means the specs are evolving and growing. We can maintain a loose mapping behind the scenes between specs and code for the coding agent, to give it the right context and keep code changes localised even as a system grows large. We can also refactor incrementally as we go, given that it becomes the job of the coding agent instead of a human who might put it off.
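The workflow described above (spec diff in, localized code change out, test suite as a regression gate) can be sketched roughly as follows. This is a hedged toy model only; Specific's internals aren't public, and every function here is a hypothetical stand-in.

```python
# Toy model of diff-driven regeneration gated by tests.
# All names are hypothetical; this is not Specific's actual API.

def apply_spec_diff(codebase, spec_diff):
    """Stand-in for the coding agent: turn a spec change into a code change."""
    updated = dict(codebase)
    updated[spec_diff["section"]] = spec_diff["new_behavior"]
    return updated

def run_tests(codebase, tests):
    """Run the (possibly incomplete) suite against a candidate codebase."""
    return all(test(codebase) for test in tests)

def evolve(codebase, spec_diff, tests):
    """Apply a spec diff, keeping the change only if the suite still passes."""
    candidate = apply_spec_diff(codebase, spec_diff)
    return candidate if run_tests(candidate, tests) else codebase

code = {"auth": "password login"}
tests = [lambda c: "auth" in c]
code = evolve(code, {"section": "auth", "new_behavior": "password + 2FA"}, tests)
print(code["auth"])
```

The key property this illustrates is that only the diffed section of the codebase changes, so unchanged parts of the spec produce byte-identical code across regenerations; anything the tests don't cover, however, is still only constrained by the agent's judgment.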
Thanks for the reply! I still worry about debuggability. What if the generated code doesn't actually follow the spec? I understand that the generated tests would then fail, but I assume there will be cases where Specific will fail. With no access to the code, it seems like there's no way to correct this. Is there any sort of escape hatch for these cases?
Having arXiv run the cleaner automatically would definitely be cool. Although I've found it non-trivial to get working consistently for my own papers. That said, it would be nice if this was at least an option.
Not with a geostationary orbit. That must have a fixed radius. The problem is that satellites have to move to counteract the force of gravity to avoid falling out of orbit. But if they move too fast or too slow, then the satellite moves with respect to the Earth and the orbit is no longer geostationary.
(Caveat: Not an expert by any means, just someone who had a similar question and did some reading, so my answer may well be incomplete or not fully correct.)
This has already been addressed: LEO is not geostationary.
But as to why: the Earth's equator rotates at a particular rate, and there is one particular orbital radius where a satellite's orbital period matches it, so NO energy is needed to fall around the equator at the same rate the equator is moving. That is a geostationary orbit.
LEO maxes out at roughly 1,200 miles of altitude; geostationary orbit is at a little over 22,000 miles.
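A quick back-of-the-envelope check of the ~22,000 mile figure: the geostationary radius follows from Kepler's third law, r = (GM T² / 4π²)^(1/3), using standard values for Earth's gravitational parameter and the sidereal day.

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.0905        # sidereal day, s (one full rotation of the Earth)

# Kepler's third law solved for the orbital radius.
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)

# Altitude above the surface: subtract Earth's mean radius, convert to miles.
altitude_mi = (r - 6.371e6) / 1609.344

print(f"radius: {r / 1000:.0f} km, altitude: {altitude_mi:.0f} miles")
```

This lands at an orbital radius of about 42,164 km, i.e. an altitude of roughly 22,200 miles, consistent with the figure above.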
Part of the problem here is that there is no prior association of an identity with an account. So proving who you are is somewhat irrelevant, since even if the account has your name, email, and photo, that's no guarantee that the account was created by you. If identity verification were required ahead of time, then perhaps verifying identity after loss of access could be a reasonable recovery method. But of course there are many reasons why requiring such verification is problematic.
I go to the Strong almost every weekend with my kids and they love it. I think there are some examples of poor uses of technology that the OP is talking about (screens that just replicate something you could play at home). But there is also some incredibly cool stuff that combines technology with physical play.
> Why does providing assistance have to mean centralized control of what assistance looks like?
I generally agree with you, but often the reason that these programs work economically is that those who don't choose to use them still contribute. There are (at least) three different categories: (1) caregivers who will care for their child themselves regardless of whether or not free care is available elsewhere, (2) caregivers who will find care elsewhere regardless of the cost, and (3) caregivers who will make use of free care if available, or otherwise, care for their child themselves.
I think group (1) has a tendency to be higher income. It's certainly not true of everyone in that group, but I would wager that a significant number of people in that group do not need the financial assistance. Those people not using the free resource but still contributing to its funding is what makes it economically viable.
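A toy numeric version of the cross-subsidy argument, with all numbers invented purely to show the shape of the arithmetic:

```python
# Invented illustrative numbers: 100 households fund a free-care program
# that only some of them use.
households = 100
cost_per_user = 12000   # hypothetical annual cost per child using the care

group_1 = 30   # care for their child themselves regardless (non-users)
group_2 = 20   # pay for care elsewhere regardless (non-users)
group_3 = 50   # use the free care if it's available

total_cost = group_3 * cost_per_user
tax_per_household = total_cost / households        # everyone contributes
cost_if_only_users_paid = total_cost / group_3     # users-only alternative

print(tax_per_household, cost_if_only_users_paid)
```

With these made-up numbers, spreading the cost across all 100 households halves the per-household burden relative to charging only the 50 users, which is exactly the contribution from groups (1) and (2) that keeps the program viable.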
[0] https://github.com/dataunitylab/jsonoid-discovery/
[1] https://michael.mior.ca/blog/llms-for-schema-augmentation/
[2] https://arxiv.org/abs/2407.03286