Maybe for you, but I'd consider that a pretty annoying bug. I like to structure my files/classes/modules in decreasing order of abstraction: put the public API and high-level algorithms at the top, and then dive into the minutiae of how the various parts are implemented as you scroll down. That also means I almost exclusively call functions that are defined further down in the file.
> I like to structure my files/classes/modules in decreasing order of abstraction: put the public API and high-level algorithms at the top
That makes sense. What I am describing doesn't preclude this organizational structure; it depends on the language design. You could either support C-style forward declarations placed at the top of the file, which would be available for completion even if the implementation is below the first error in the file, or the IDE could provide folding of the implementations so that you can scan the API without drilling down into the details.
> That also means I almost exclusively call functions that are defined further down in the file.
Again, to clarify are you calling things before they are _declared_ or _defined_?
Most languages don't make a distinction between declaration and definition, and many don't even care where something is defined/declared at all. C/C++ are really the exception nowadays, and for good reason: having to keep the definition and declaration in sync is annoying and unnecessary, even though I sometimes miss the easy overview of an API that header files give.
Ok. So I'm guessing you're using a Java-like language with classes, where the upper classes call into the lower classes? I understand the appeal of that approach. The tradeoff is that you have to make multiple passes over the file to compile it or do IDE analysis. If the language is designed as I've been advocating, one can essentially write a one-pass compiler. This is simpler to implement and will generally be faster. The big tradeoff is that it imposes a more rigid organizational structure.
As a compiler author (I have written a self-hosting compiler for an unpublished new language), I dramatically prefer implementation simplicity to organizational flexibility. I respect your preference but believe that ultimately the more free-form and flexible a language, the more complex and slow the tooling, both of which lead to more bugs in the final artifact. But I certainly can't prove my point.
A significant portion of C(++)'s cruft comes from its catering to one-pass compilers. I will grant you that it simplifies the job of the IDE, but it comes with so many other costs.
The obvious one is that it requires forward declarations. But depending on how much back-and-forth interdependence there is, it can require multiple "tiers" of forward declaration to effectively iteratively redeclare (or augment? Now that's some added complexity for all parties involved…) incomplete types until you can complete them. It's one thing to have a nice list of "here's what exists", but it's another to manually detangle dependency graphs. It's bad enough in C++, which already doesn't allow very much interdependence, but it'd be completely infeasible for any language more complex than glorified assembly.
Next, how would compile-time execution work? It's one thing to forward-declare the existence of something, but how do you execute it without knowing its definition? You literally have to add another pass. Similarly, how do you inline a function call without its body?
One-pass compilation also relies heavily on searching and referring back to data from earlier in the pass, making it rather cache-unfriendly unless you build data structures as you compile — but now you're using massive amounts of memory, since you have to build these structures for everything in the entire input. This scales incredibly poorly: if you use one thing from some imported header, you now need to pull that header and its entire dependency tree into your one pass. This isn't the 70's anymore; more passes ≠ slower.
It's actually the compiler and IDE people pushing against this one-pass mindset, because it's "simpler" but just worse for everyone… except maybe the developer reading a header file instead of documentation. And do note that complexity comes in forms other than fancy data structures & algorithms in compilers/tooling. I'd argue that a manual flattening of a real world dependency graph is much more complex and harder to grok & maintain. Regardless, it's the compiler/tool developer's job to take the burden of complexity to better serve their users.