Having used both Objective-C and .NET, I would much rather be faced with implementing a dynamic language atop .NET than atop Objective-C. While ObjC's dispatch system is quite dynamic, the relative weakness of its type system (if you can even call what ObjC has a type system) and the amount of C baggage it drags along make it a rather unfriendly environment for running dynamic code.
Here's one example: with the .NET type system, you can check a set of arguments against the signature of a method defined in an object's type information, which means you can be certain that your invocation will succeed without corrupting state or returning incorrect results. What's more, you can do this check at compile time (in a static language, like C#) or at run time, via reflection. The .NET JIT and runtime code-generation facilities mean that you can do 'static' verification of scripts at load time instead of at the point of every invocation. When there's a mismatch between the types requested and the types provided, you get clear, precise error feedback that allows the problem to be corrected. The .NET runtime also uses this type system to perform a large number of safety checks against all code when it's first loaded, so in many cases it can guarantee that no memory corruption or out-of-bounds access will occur without having to check every single operation.
As the vast majority of Objective-C invocations in a Cocoa application are effectively static (you're sending a message, as described in the Apple documentation, to an object of a given interface, as described in the Apple documentation, which you constructed following instructions in the Apple documentation), the dynamic nature of ObjC invocation wins you very little. The ObjC type system cannot provide much assistance for those invocations, because it's built atop the C type system: GCC will check your argument types statically at compile time, but at run time, given an arbitrary ObjC object, it's quite difficult to be certain that an invocation will succeed. What's more, in most cases GCC can provide no more than warnings, and those warnings are often incorrect.
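To make that concrete, here's a minimal sketch of the most verification you can do from the ObjC side before sending a message to an arbitrary object (the canInvoke function is hypothetical; the runtime calls are standard Foundation):

    #import <Foundation/Foundation.h>

    // The most we can verify about an arbitrary object before
    // invoking a selector on it:
    BOOL canInvoke(id target, SEL selector) {
        // 1. Does the object claim to respond at all?
        if (![target respondsToSelector:selector]) return NO;

        // 2. Inspect the method signature. The argument types are C
        //    type encodings ("@" = some object, "i" = int, ...), so
        //    every object-typed argument looks identical: we learn
        //    that an argument is *an* object, never *which class* of
        //    object the method actually expects.
        NSMethodSignature *sig = [target methodSignatureForSelector:selector];
        NSUInteger i;
        for (i = 2; i < [sig numberOfArguments]; i++) {  // 0 = self, 1 = _cmd
            NSLog(@"argument %lu: encoding %s",
                  (unsigned long)i, [sig getArgumentTypeAtIndex:i]);
        }

        // Past this point, correctness is taken on faith: a wrong but
        // object-typed argument passes every check available here.
        return YES;
    }

Contrast that with the reflection check described above: a .NET signature names actual classes and interfaces, so the same question has a definite answer.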
On the other hand, for the vast majority of code written in a language like Ruby or Python, your method signatures contain no type information, so no validation can be performed. Neither Objective-C nor .NET wins here, because both require typed arguments for that kind of checking. You need only look at NSArray to see how little you win by using Objective-C in this case.
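As a rough illustration of the NSArray point (the mistake below is deliberate): every element comes back as a bare id, so the compiler accepts message sends that can only fail at run time.

    #import <Foundation/Foundation.h>

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSArray *items = [NSArray arrayWithObjects:
                             @"a string", [NSNumber numberWithInt:42], nil];

        // Every element comes back as plain 'id'; the compiler has no
        // idea what is actually inside the array.
        id first  = [items objectAtIndex:0];
        id second = [items objectAtIndex:1];

        // Compiles and works: NSString responds to -uppercaseString.
        NSLog(@"%@", [first uppercaseString]);

        // Also compiles without complaint, but NSNumber does not
        // respond to -uppercaseString: unrecognized selector, at run
        // time, on this line.
        NSLog(@"%@", [second uppercaseString]);

        [pool drain];
        return 0;
    }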
Ultimately, neither environment is a good fit for a language like Ruby, but I would argue that .NET (and, for similar reasons, the JVM) provides a far stronger foundation on which to build one. Building a language like Ruby atop Objective-C beats building it in plain C only because of ObjC's status as Apple's preferred language for OS X development.
I would argue the Smalltalk-esque message-passing system is an advantage, not a weakness. Much of the dynamic behavior of a language like MacRuby can be captured directly with such a system, whereas on .NET you need another abstraction layer (like the Dynamic Language Runtime), leaks and all.
And I would nitpick the claim that Objective-C requires types for arguments. You can declare arguments as "id" in Objective-C, exercising pure message passing, and objects will respond to the messages regardless of their type. This is why MacRuby and Objective-C are a good fit: both languages are based on messages, a concept alien to .NET.
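For instance, with a hypothetical -logDescriptionOf: method (the Logger class is made up; everything else is standard ObjC), the parameter is declared as id and only the message matters; any object that responds is acceptable, whatever its class:

    #import <Foundation/Foundation.h>

    @interface Logger : NSObject
    - (void)logDescriptionOf:(id)anything;  // hypothetical example method
    @end

    @implementation Logger
    - (void)logDescriptionOf:(id)anything {
        // 'anything' has no declared class; as long as the object
        // responds to -description, the message send succeeds.
        NSLog(@"%@", [anything description]);
    }
    @end

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        Logger *logger = [[Logger alloc] init];

        // An NSString and an NSNumber share no useful common type,
        // yet both respond to the message.
        [logger logDescriptionOf:@"hello"];
        [logger logDescriptionOf:[NSNumber numberWithInt:7]];

        [logger release];
        [pool drain];
        return 0;
    }

No protocol or base class ties those callers together; the only contract is the message itself.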