SignWriting has been documented in the National Deafness section of the California State University, Northridge South Library since 1968, and at Gallaudet University in Washington, D.C. since the 1950s.
That's great, but it doesn't address the point I raised: I can't find any indication of a linear digital encoding of ASL that has been used to any appreciable extent, so there doesn't appear to be a corpus of it that Meta could train on.
Which is why I'm struggling to understand the criticism of Meta. How could they realistically train a linear text model on ASL when the necessary content doesn't appear to exist?