I run up to 16 crawlers at the same time. Twitter rate-limits me only per user, for the calls I issue through that user's credentials, so I can go parallel easily. Not much of a bottleneck on my side.
Does it fail for an individual user, or for all users in that time period? Instead of reducing the number of followers per request, you may just need to wait 5-10 minutes and ask for that user again with the full 100.
Pretty much what I do. My local SQLite storage per user slows things down a bit (but that's good, since Twitter is even slower). So between requests I often give Twitter enough time to finish serving the previous 100 and store them in its cache, so that when I re-request the same 100 (I always try twice), they are often there. But not always. It's a mix of overall Twitter load, plus where/how deep these 100 followers are stored, plus whether a follower record is damaged (happens frequently), plus other timeout factors. That's what I'm complaining about; it's so hard to work around.
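The "try twice with a pause" approach above can be sketched as a small helper. This is a minimal illustration, not anyone's actual crawler code: `fetch` is a hypothetical callable standing in for the Twitter followers call, which may return damaged records (here, `None`) on an early attempt before the backend cache warms up.

```python
import time

def fetch_page_with_retry(fetch, cursor, delay=5.0, attempts=2):
    """Request the same page of followers up to `attempts` times, pausing
    between tries so the upstream cache has time to fill in.
    `fetch(cursor)` is assumed to return a list of follower records,
    where a damaged record comes back as None."""
    best = []
    for i in range(attempts):
        records = fetch(cursor)
        good = [r for r in records if r is not None]
        if len(good) > len(best):
            best = good  # keep the most complete page seen so far
        if len(best) == len(records):
            break  # full page recovered, no need to retry
        if i + 1 < attempts:
            time.sleep(delay)  # give the backend time to cache the page
    return best
```

The second request is often cheap because the first one already pushed those 100 records into Twitter's cache; keeping the best partial page means a retry can never make the result worse.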