It's really cool, but as far as I know there's no complete C++ implementation for embedded platforms, and I still can't figure out how it actually works.
Does the gossip flooding mean every single node needs to know about every other node in the entire mesh?
I have a project vaguely inspired by this and Meshtastic that tries to make use of existing internet tech, while falling back to local links, instead of trying to replace the Internet completely.
It's very much WIP; I'm planning to get rid of all the automatic reliable-retransmit stuff and replace it with per-channel end-to-end acknowledgment.
https://github.com/EternityForest/LazyMesh#
Is there any kind of DHT-like routing for the addresses? Wouldn't the announces create a lot of traffic without that, if you ever got to thousands of nodes?
It works great with LoRa, but each interface is its own thing. It's not exactly like Meshtastic/MeshCore/etc. (for better or worse), but it also fulfills totally different roles. You can connect one interface to another and forward only messages for particular addresses, or for addresses that have announced on a specific interface, and you can control what you want to propagate/route.
You can set it up tons of different ways, so just imagine this is what you want:
- 20 ESP32 LoRa devices around my house that respond with sensor data or something
- a Pi Zero connected to the internet (via a huge TCP testnet) and to LoRa (via an SPI device connected to some GPIO)
- These are not "secret": anyone can ask a sensor for its data. The messages are encrypted, but they are intentionally public.
If any of the 20 LoRa devices want to be available to talk to someone on the internet, they can, and their announcements are forwarded, so people on the testnet know the address.
I can set it up so only messages directly to those 20 devices are forwarded; otherwise, announces are just recorded (and replayed) on the Pi.
Additionally, I can set up propagation for just my 20 devices, so even if they are out of range or turned off, they will get the message (from the Pi) when they come back in range or turn on.
In this example, the structure of the network forms a kind of tree. Each tier of the network is scaled to the amount of traffic it can deal with: the Pi can handle a ton and is connected to the internet, while the ESP32s only need to deal with 1-to-1 traffic (announces don't really matter to them) and only compete with traffic from 20 devices on the same LoRa network.
These messages are pretty small (an announce is ~160 bytes, a message proof is ~115 bytes). For larger messages, you string them together over a link (a 1-to-1 encrypted tunnel). I think a key thing, though, is that not every tier of the network needs to carry all the same packets. For example, not even a thousandth of the "testnet firehose" gets sent over the local LoRa net of 20 devices, based on how it's set up here.
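To make that routing policy concrete, here's a rough sketch of how the Pi's bridging rules could be expressed as config. None of these keys or names are LazyMesh's actual API; it's just an illustration of the tiering described above.

```python
# Hypothetical illustration only -- not LazyMesh's real API.
# Sketch of the Pi's bridging policy between the TCP testnet
# and the local LoRa network.

LORA_SENSOR_ADDRESSES = {
    # The 20 known ESP32 sensor addresses (placeholders).
    "addr_sensor_01",
    "addr_sensor_02",
    # ...
}

bridge_config = {
    "interfaces": {
        "testnet": {"type": "tcp", "host": "testnet.example.org", "port": 4242},
        "lora": {"type": "spi_lora", "bus": 0, "cs_gpio": 8},
    },
    "forwarding": [
        # Forward testnet -> LoRa only for messages addressed
        # to one of the 20 local sensors.
        {"from": "testnet", "to": "lora",
         "allow_destinations": LORA_SENSOR_ADDRESSES},
        # Forward LoRa -> testnet for everything the sensors send,
        # including their announces, so the testnet learns them.
        {"from": "lora", "to": "testnet", "allow_all": True},
    ],
    # Store-and-forward ("propagation") for the 20 sensors, so
    # messages are replayed when a device comes back in range.
    "propagate_for": LORA_SENSOR_ADDRESSES,
}
```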
So, the usage flow would look like this:
- each sensor announces on LoRa, and the Pi forwards that to the internet ("hey, my address/pubkey is X, and I have these cool capabilities as a sensor")
- a user on the internet sends a data-message to that address: "hey, give me your sensor data"
- the Pi routes that from the internet to LoRa and propagates it (replaying periodically if the LoRa device is not around)
- if the ESP32 has not seen that peer, it can request an announce (and the Pi will forward that both ways)
- the ESP32 responds: "oh hey, internet user at address Y, my sensor data is Z"
- the message is sent over LoRa to the Pi, which forwards it on to the internet
For very small data, if you don't care about P2P encryption, you could even put the sensor data directly in the initial announce ("hey, I have this address/pubkey and the current temperature is X"), since announce "app data" works great for a very small amount of data.
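Putting the whole flow together, here's a pseudo-Python sketch. Every call here is hypothetical and made up for illustration, not LazyMesh's real API, and the three blocks run on three different devices.

```python
# Pseudo-code only -- all of these calls are hypothetical.

# On each ESP32 sensor: announce periodically, with a small "app
# data" blob (for public data you could even include the latest
# reading itself, as mentioned above).
node.announce(app_data={"caps": ["temperature"], "temp_c": 21.5})

# On the internet user's machine: once the sensor's announce has
# been relayed through the Pi, send a data-message to its address.
node.send_message(sensor_address, b"give me your sensor data")

# Back on the sensor: if this peer is unknown, request an announce
# (the Pi forwards the request and the reply), then respond. The
# Pi forwards the response to the testnet, replaying it if the
# radio link is down.
def on_message(src_address, payload):
    if not node.knows_peer(src_address):
        node.request_announce(src_address)
    node.send_message(src_address, current_sensor_reading())
```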
Tilting them vertical or nearly so is very useful if there could be any hail; that might be a good idea to support.
What about compressed air? It might not be too hard to find a small, low-power brushless air pump that could drive pistons directly.
You could mount the pump controller onto the back of the panels, use an accelerometer to measure the angle, and run the pump until it's where you want it.
You'd probably need to do some testing to make sure it couldn't get jammed, build up pressure, and then suddenly unstick and move unsafely.
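For the control side, a simple bang-bang loop on the accelerometer-derived angle would probably do. A minimal sketch, assuming hypothetical read_accel()/pump_on()/pump_off() driver helpers:

```python
import math
import time

TARGET_DEG = 80.0    # near-vertical stow angle for hail
DEADBAND_DEG = 2.0   # stop hunting around the target

def tilt_degrees():
    # Panel tilt from the gravity vector; assumes the accelerometer's
    # x axis points down-slope and z is normal to the panel surface.
    x, _, z = read_accel()  # hypothetical driver call
    return math.degrees(math.atan2(x, z))

def move_to(target_deg, timeout_s=60.0):
    # Bang-bang control: run the pump until the measured angle is
    # within the deadband, with a timeout as a crude jam/stall guard
    # so we don't keep building pressure against a stuck mechanism.
    deadline = time.monotonic() + timeout_s
    while abs(tilt_degrees() - target_deg) > DEADBAND_DEG:
        if time.monotonic() > deadline:
            pump_off()  # hypothetical
            raise RuntimeError("actuator may be jammed")
        pump_on()  # hypothetical
        time.sleep(0.05)
    pump_off()
```

A timeout like that only catches a stuck actuator, though; you'd probably still want a mechanical relief valve so a jam can't store up pressure and release it all at once.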
True, but the point I'm trying to make is that when it comes to deploying (what are more often than not) web services, getting to the point where systemd "just works" involves more pain than I'd like - especially with regard to production deployments (reading logs, checking service status, wondering why my env vars aren't being read, etc.).
If, back when I was cutting my teeth on systemd, I'd had access to something more lightweight and "do one thing well", I think I would've gotten a lot more sleep :)
Disenhackifying one of the last pieces of my KaithemAutomation server that still feels not best practicesful.
Device driver plugins used to have a very simple flat key value, strings only format, with a set_config_properties function to tell the host what kind of UI to show.
That's all getting replaced with JSON schemas, with some auto-upgrade shims so old config keeps working.
It's one of many things that now seems completely insane, but made sense when I had way less experience a long time ago!
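To sketch what that migration looks like (the names here are illustrative, not KaithemAutomation's actual plugin API):

```python
# Illustrative only -- not the actual KaithemAutomation API.

# Old style: flat, strings-only key/value config, with a separate
# call telling the host what kind of UI to show.
old_config = {"baud_rate": "9600", "port": "/dev/ttyUSB0"}

# New style: a JSON Schema describes types, defaults, and structure
# in one place, so the host can generate the config UI itself.
SCHEMA = {
    "type": "object",
    "properties": {
        "baud_rate": {"type": "integer", "default": 9600},
        "port": {"type": "string", "default": "/dev/ttyUSB0"},
    },
}

def upgrade_flat_config(old: dict) -> dict:
    """Auto-upgrade shim: coerce old string values to the types the
    schema expects, so existing saved config keeps working."""
    out = {}
    for key, value in old.items():
        prop = SCHEMA["properties"].get(key, {})
        if prop.get("type") == "integer":
            out[key] = int(value)
        else:
            out[key] = value
    return out
```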
Also still working on and off on my BLE/WiFi-based Meshtastic-alike.
Usually, arbitrary binaries stuffed into Python wheels are mostly self-contained single binaries and such, with as little dynamic-linking nonsense as possible, so they don't break all the time or have dependency conflicts.
It seems to consistently work really well for binaries, although it would be nice to have first-class support for integrating npm packages.
I believe it is used for cross-platform linking of Rust/maturin wheels, which seems nice because it's one fewer unusual install script to integrate into your project, if Zig isn't packaged for Debian yet.
I wouldn't expect video timeline seeking to be all that performance-critical; I would think you could use SQLite with indexes, since you only need a small number of thumbnails at a time and they're probably pretty low resolution, right?
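Something like this is what I have in mind - a minimal sketch, assuming thumbnails stored as blobs keyed by timestamp:

```python
import sqlite3

conn = sqlite3.connect("thumbnails.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS thumbs (
           video_id INTEGER NOT NULL,
           ts_ms    INTEGER NOT NULL,  -- position in the video
           jpeg     BLOB NOT NULL      -- small, low-res frame
       )"""
)
# The composite index is what makes scrub lookups cheap.
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_thumbs ON thumbs (video_id, ts_ms)"
)

def thumb_near(video_id: int, ts_ms: int) -> bytes | None:
    # Nearest thumbnail at or before the scrub position --
    # a single indexed lookup, no table scan.
    row = conn.execute(
        "SELECT jpeg FROM thumbs WHERE video_id = ? AND ts_ms <= ? "
        "ORDER BY ts_ms DESC LIMIT 1",
        (video_id, ts_ms),
    ).fetchone()
    return row[0] if row else None
```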
During engine failure / fire situations, I would expect that pilots are likely to be too busy to have any time left over for peering at a video feed, trying to assess the state of the wing.
In emergencies, information overload tends to make things worse, not better.
Having cameras pointed at the engines/wings, like rearview mirrors, would be helpful. It does not add that much workload to take a quick glance in the “mirror” and figure out what exactly the problem is.
And now we have technology that allows cameras everywhere, giving better situational awareness across all critical aircraft surfaces and systems.
It is going to take a little bit of adjusting to, but it will help improve safety in a tremendous way.
This would need to be tested. There's a lot going on already during a normal take-off. Now you're in a situation where the engine fire alarm is going off, probably a few other alarms too, you've got so many messages on your display that it only shows the most urgent one, and you're already taking quick glances at 50 points in the cockpit.
And how would the cameras even work? Are the pilots supposed to switch between multiple camera feeds, or do we install dozens of screens? And then what, they see lots of black smoke on one camera, does that really tell them that much more than the ENG FIRE alert blaring in the background?
Maybe this could help during stable flight, but in this situation, when the pilots were likely already overloaded and probably had only a few seconds to react - if escape was possible at all - I can't imagine it being helpful.
You know how the tail camera works on the new planes? Something like that: it can be far away from the wings but get the full picture. Am I saying it's the solution for everything? No. But after you go through the memory items during an emergency, you can take a look outside and be like, "ah, now I see better what the problem is".
If we don't try to see how it goes, we won't know if it is a good idea or not.
It'd certainly need more thought put into it than just showing the camera view from the entertainment system. Either just one camera on the tail pointed forwards, so you have a single camera that can show the whole plane, or two cameras in the front, one pointed at each wing. Two cameras are worse than one, but they are less likely to both be affected by smoke or blood splatters or whatever. Maybe give each pilot one of the camera feeds. And you'd have to fit a dedicated screen for the video feed so pilots don't need to switch through screens in an emergency.
It'd take lots of testing and engineering. But especially in cases where you have multiple warnings going off, I imagine that a quick look at an exterior camera can often give you a clearer/faster indication of the situation.
How does that work in the dark/rain/snow? Or are we now going to add lights pointing in the direction the camera is facing? And then what do we do about the fact that aircraft external lighting has to follow regulations?
It's super weird to me that this isn't a thing, and that there's resistance to the idea. I mean, if they are already masters at glancing at 100000 different indicators and warning messages and processing them at super speed (they really do!), then I'd say a monitor with a bunch of buttons below it to switch feeds (maybe a little more elaborate, but not too much) would be helpful.
The problem might be getting trained, experienced pilots to adjust to it, since they are already in a certain flow of habits and skills they apply in their job, but new pilots surely could learn it, as they aren't so set in their ways yet and have the opportunity to build this new data into their skillset/habits.
Look, information overload is a real problem. Medical devices are an analogous industry in that in an emergency nurses and doctors are getting completely bombarded with alarm tones, flashing lights, noise, and also whatever is going on with the patient. There are standards in that industry governing how you alarm, what your alarm tones sound like, what colors you're supposed to use, how fast you're supposed to flash, and so on. And people still miss alarms because there are still a ton of them all going off at once.
People have an upper limit on their capacity to take in information, and that limit goes down when they are moving quickly to solve problems. Throwing more information at them in those moments increases the risk that they will take in the wrong information, disregard more important information, and make really bad decisions.
So no, it's not cut and dried like you're thinking.
The entire event was over in less than a minute, and during that time there’s only one thing pilots are working on: maintaining what little control they have, and gaining as much altitude as possible without loss of control.
This is consuming all mental processing, there are no spare cycles.
Having more information after the engine separated wouldn't have made this situation salvageable. If a sensor could have provided a warning of engine failure well before V1, that would have been helpful.
I expect the questions will focus on what information existed that should have resulted in aborting the takeoff. Not what information was needed to continue.
Okay. So you mean in general it would help in some cases. Not that in this case it would have helped.
> see UA1175
I'm familiar with the case you're mentioning. I'm also aware that they sent a jump seater to look at the engine. But did seeing the engine provide them with any actionable information? Did they fly the airplane differently than they would have based only on the indications available in the cockpit?
Excellent. So in what cases does seeing the engine visually help? So far we've discussed UPS2976 and UA1175, where the presence or absence of a camera didn't change the outcome.
> Regarding UA1175, they had someone extra, but not all flights happen to have someone extra in the cockpit.
You are dancing around my question. What does the pilot do differently based on what they see? If you can't articulate a clear "pilot sees X they do Y, pilot sees Z they do Q" flow then what is the video good for?
In a sibling thread you say "There are countless situations where it can be helpful." But you haven't named even one of those countless situations yet.
Let's say there is a case like UA1175:
- they can see how damaged the engine is
- they can see if the wing is damaged in any way (over and under)
- is there any other damage to the aircraft (like a piece of shrapnel that hit the plane)?
In other situations:
- are the wheels out when the sensors say they are not
- have a way to visually inspect critical parts of the plane while in flight (so you don't have to do a flyover and have the tower look at the airplane with binoculars)
> So far we discussed UPS2976 and UA1175 where the presence or absence of the camera didn't change the outcome
To be fair, the presence of a camera might have changed the outcome of UPS2976. Depending on when the fire developed fully, rejecting the takeoff based on the sheer size of the fireball on the wing might have led to fewer casualties on the ground. This is of course under the assumption of a world where a camera feed is a normal part of the flight deck instruments and there is a standard for the pilot monitoring to make judgments based on it.
> Yes, the cameras would not have helped here, but it doesn't mean they are useless in general.
I think that statement needs the support of actual evidence. Air incident investigation agencies make detailed reports of the causes of crashes, with specific, targeted recommendations to help ensure that similar incidents don't recur.
If we haven't seen recommendations for cameras like that, then I think it's reasonable to assume that the actual experts here have determined that cameras would not be helpful.
The FAA/EASA can dictate what equipment new airplanes should/must have. And that is done in cooperation with the manufacturers. And manufacturers have zero incentive to add new equipment, and airlines have zero desire to do additional certifications for pilots.
It is not reasonable to assume anything.
Air crash investigators are not the experts on airplane design.
They surely can and this has been done.
On one of the flights that I took with Turkish Airlines, they had a few video streams from different sides of the airplane. One was from the top of the tail, and you could see the entire plane.
Now... I'm not sure how much that helps in this kind of emergency; they really didn't have time to do much.
I'm not sure they usually have the views on screen in the cockpit in flight, even if available (and an old MD-11 freighter won't have the cameras in the first place). The picture of an A380 cockpit (on the ground) on Wikipedia does show the tail view on a screen, but it's on the screen normally used for main instruments. With the A380 that had an uncontained engine failure causing various bits of havoc (Qantas 32?), IIRC the passengers could see a fuel leak on the in-flight entertainment screens, but they had to tell the crew, as AFAIK the crew didn't have access to that view in the cockpit in flight.