There used to be a Google X. Not sure at what scale it operated.
But if any state/central bank were clever, they would subsidize this.
That's a better trickle-down strategy.
Until we get to AGI and all new discoveries are autonomously led by AI, that is :p
> Google X is a complete failure
- Google Brain
- Google Watch/Wear OS
- Gcam/Pixel Camera
- Insight (indoor GMaps)
- Waymo
- Verily
It is a moonshot factory after all, not a "we're only going to do things that are likely to succeed" factory. It's an internal startup space, which comes with high failure rates. But these successes seem pretty successful. Even the failed Google Glass seems to have led to learning, though they probably should have kept the team going, considering the success of Meta Ray-Bans and things like Snap's glasses.
Didn't the current LLMs stem from this...? Or it might be Google Brain instead. For Google X, there is Waymo? I know a lot of stuff didn't pan out. This is expected. These were 'moonshots'.
But the principle is there. I think that when a company sits on a load of cash, that's what they should do. Either that or become a kind of alternative investments allocator. These are risky bets. But they should be incentivized to take those risks. From a fiscal policy standpoint for instance.
Well it probably is the case already via lower taxation of capital gains and so on.
But there should probably exist a more streamlined framework to make sure incentives are aligned.
And/or assigned government projects?
Besides implementing their Cloud infrastructure that is...
I am slowly trying to understand dependent types, but the explanation is a bit confusing to me: I understand the mathematical terminology of a function that may return a type, but...
Since function types take a value and return a value, they are by definition in another universe from, say, morphisms that would take a type and return a type.
In the same way, I see a value as an ur-element and types as sets of values.
So even if there is syntactic sugar around the value <-> type equivalence, I'd naively think that we could instead define some type morphism, and that might be more accurate. The value parameter would merely be declared through a type parameter constrained to be a singleton.
In the same way, an ur-element is not a set but a member of a set.
Then the question is representation but that could be left as an optimization.
Perhaps that is already what is done.
Example:
type Nine int = {9}
And then the rest is generic functions, parameterizable by 9, or rather, by Nine.
So nothing too different from a refinement of int.
Basically, 'value' would be a special constraint on a type parameter in normal parametric polymorphism implementations. There would probably be derived constraint information such as size etc...
But I guess, the same issue of "which refinement types can be defined, while keeping everything decidable" remains as an issue.
Also how to handle runtime values? That will require type assertions, just like union types?
Or is it only a compile-time concept with no runtime instantiations?
Only some kind of const generics?
A typeof function could be an example of dependent type though? Even at runtime?
In the style of the linked post, you'd probably define a generic type (well, one of two generic types):
type ExactlyStatic : (0 t: Type) -> (0 v: t) -> Type
type ExactlyRuntime : (0 t: Type) -> (v: t) -> Type
Then you could have the type (ExactlyStatic Int 9) or the type (ExactlyRuntime Int 9).
The difference is that ExactlyStatic Int 9 doesn't expose the value 9 to the runtime, so it would be fully erased, while (ExactlyRuntime Int 9) does.
This means that the runtime representation of the first would be (), and the second would be Int.
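A rough sketch of the same two flavours in plain Haskell (my own illustration, not from the post; the Exactly names are made up): the type-level index plays the role of the 9, and you choose whether to rebuild the value at runtime or never materialize it.

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE ScopedTypeVariables #-}
    {-# LANGUAGE TypeApplications #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (KnownNat, Nat, natVal)

    -- Runtime-carried flavour: the 9 exists at runtime, read off the type index.
    newtype ExactlyRT (n :: Nat) = ExactlyRT Integer deriving Show

    exactlyRT :: forall n. KnownNat n => ExactlyRT n
    exactlyRT = ExactlyRT (natVal (Proxy @n))

    -- Erased flavour: the index is compile-time only, so the representation
    -- is trivial, like the () representation described above.
    data ExactlyST (n :: Nat) = ExactlyST

    main :: IO ()
    main = print (exactlyRT @9)  -- prints: ExactlyRT 9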
> Also how to handle runtime values? That will require type assertions, just like union types?
The compiler doesn't insert any kind of runtime checks that you aren't writing in your code. The difference is that now, when you write e.g. `if condition(x) then ABC else DEF`, inside the two branch expressions you can obtain a proof that condition(x) is true/false, and propagate this.
Value representations will typically be algebraic-data-type flavored (so, often a tagged union) but you can use erasure to get more efficient representations if needed.
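To illustrate the branching point with a hedged sketch (mine, in Haskell-with-GADTs style rather than a real dependently typed language): the "condition" returns evidence instead of a bare Bool, and each branch of the match learns a different fact about n that it can propagate.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- Singleton naturals: the runtime value mirrors the type index.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    -- Evidence-bearing result type: each constructor pins down n.
    data IsZero (n :: Nat) where
      Yes :: IsZero 'Z
      No  :: SNat n -> IsZero ('S n)

    isZero :: SNat n -> IsZero n
    isZero SZ     = Yes
    isZero (SS m) = No m

    -- A caller learns, in each branch, what n must be:
    describeN :: SNat n -> String
    describeN s = case isZero s of
      Yes  -> "here the type checker knows n ~ 'Z"
      No _ -> "here it knows n is a successor"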
In type theory, all singleton types are isomorphic and have no useful distinguishing characteristics (indeed, this is true of all types of the same cardinality – and even then, comparing cardinalities is always undecidable and thus irrelevant at runtime). So your Nine type doesn’t really make sense, because you may as well just write Unit. In general, there is no amount of introspection into the “internal structure” of a type offered; even though parametricity does not hold in general (unless you postulate anticlassical axioms), all your functions that can run at runtime are required to be parametric.
Being isomorphic is not the same as being identical, or substitutable for one another. Type theory generally distinguishes between isomorphism and definitional equality, and only the latter allows literal substitution. So a Nine type with a single constructor is indeed isomorphic to Unit but it's not the same type, it carries different syntactic and semantic meaning and the type system preserves that.
Another false claim is that type theory does not distinguish types of the same cardinality. Type theory is usually intensional not extensional so two types with the same number of inhabitants can have wildly different structures and this structure can be used in pattern matching and type inference. Cardinality is a set-theoretic notion but most type theories are constructive and syntactic, not purely set-theoretic.
Also, parametricity is a property of polymorphic functions, not of all functions in general. It's true that polymorphic code can't depend on the specific structure of its type argument, but most type theories don't enforce full parametricity at runtime. Many practical type systems (like Haskell with type classes) break it with ad-hoc polymorphism or runtime types.
This comment contains a lot of false information. I’m first going to point out that there is a model of Lean’s type theory called the cardinality model, in which all types of equal cardinality are modelled as the same set. This is why I say the types have no useful distinguishing characteristics: it is consistent to add the axiom `Nine = Unit` to the type theory. For the same reason, it is consistent to add `ℕ = ℤ` as an axiom.
> So a Nine type with a single constructor is indeed isomorphic to Unit but it's not the same type, it carries different syntactic and semantic meaning and the type system preserves that.
It carries different syntax but the semantics are the exact same.
> Type theory is usually intensional not extensional so two types with the same number of inhabitants can have wildly different structures
It is true that type theory is usually intensional. It is also true that two types equal in cardinality can be constructed in multiple different ways, but this has nothing to do with intensionality verses extensionality – I wouldn’t even know how to explain why because it is just a category error – and furthermore just because they are constructed differently does not mean the types are actually different (because of the cardinality model).
> Cardinality is a set-theoretic notion but most type theories are constructive and syntactic, not purely set-theoretic.
I don’t know what you mean by this. Set theory can be constructive just as well as type theory can, and cardinality is a fully constructive notion. Set theory doesn’t have syntax per se but that’s just because the syntax is part of logic, which set theory is built on.
> most type theories don't enforce full parametricity at runtime
What is “most”? As far as I know Lean does, Coq does, and Agda does. So what else is there? Haskell isn’t a dependent type theory, so it’s irrelevant here.
---
Genuine question: Where are you sourcing your information about type theory? Is it coming from an LLM or similar? Because I have not seen this much confusion and word salad condensed into a single comment before.
I will let Maxatar respond if he wants to, but I will note that his response makes much more sense to me than yours, as someone who uses traditional programming languages and used to do a little math a couple of decades ago.
With yours, it seems that we could even equate string to int.
How can a subtype of the integer type, defined extensionally, be equal to Unit? That really doesn't make any sense to me.
> it is consistent to add `ℕ = ℤ` as an axiom
Do you have a link to this? I am unaware of this. Not saying you're wrong but I would like to explore this. Seems very strange to me.
As he explained, an isomorphism does not imply equality, as far as I know. cf. Cantor. How would anything typecheck otherwise? In denotational semantics, this is not how it works.
You could look into semantic subtyping with set theoretic types for instance.
type Nine int = {9} defines a subtype of int called Nine that refers to the value 9.
All variables declared of that type are initialized as containing int(9). They are equal to nine.
If you erased everything and replaced it with Unit {}, it would not work. That is a whole other type/value.
How would one be able to implement runtime reflection then?
I do understand that his response to you was a bit abrupt. Not sure he was factually wrong about the technical side though. Your knee-jerk response is understandable, even if it's too bad that it risks making the conversation less interesting.
Usually types are defined intensionally, e.g. by name, not by listing a set of members (an extensional encoding) as their set-theoretic semantics. So maybe you have not encountered such treatment in the languages you are used to?
edit: the only way I can understand your answer is if you are considering the Universe of types as standalone from the Universe of values. In that universe, we only deal with types, and types structured as composites, which is perhaps what you are familiar with?
Maybe then it is just that this Universe as formalized is insufficiently constrained/underspecified/over-abstracted?
It shouldn't be too difficult to define specific extensional types on top, for which singleton types would not have their extensional definitions erased.
> Many practical type systems (like Haskell with type classes) break it with ad-hoc polymorphism or runtime types.
Haskell does not break parametricity. Any presence of ad-hoc polymorphism (via type classes) or runtime types (via something like Typeable, itself a type class) is reflected in the type signature and thus completely preserves parametricity.
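For instance (a standard illustration, nothing beyond vanilla Haskell): the moment a function wants runtime type information, a Typeable constraint appears in its signature, so the unconstrained signatures stay fully parametric.

    import Data.Typeable (Typeable, typeOf)

    -- Fully parametric: cannot inspect x at all; the only total
    -- implementation is the identity.
    opaque :: a -> a
    opaque x = x

    -- Runtime type inspection is available, but only because the
    -- signature says so via the Typeable constraint.
    describe :: Typeable a => a -> String
    describe x = "a value of type " ++ show (typeOf x)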
How does it become Unit if it is an integer of value 9?
Why would cardinalities be undecidable if they are encoded discretely in the type?
For instance, type Nine int = {9} would not be represented as 9. It would probably be a fat pointer.
It is not just an int; it would not even have the same operations (9 + 9 is 18, which is an int but is not of type Nine, but that's fine: a fat pointer does not need to share the same set of operations as the value it wraps).
It could be seen as a refinement of int?
I am not sure that it can truly be isomorphic? My suspicion was that it would only be somewhat isomorphic at compile time, for type checking, and only if there is a mechanism for auto-unwrapping the value?
There's only one possible value of type Nine; there's only one possible value of type Unit. They're isomorphic: there's a pair of functions to convert from Nine to Unit and from Unit to Nine whose compositions are identities. Both functions are just constants that discard their argument un-inspected. "nineToUnit _ = unit" and "unitToNine _ = {9}".
You've made up your language and syntax for "type Nine int = {9}" so the rules of how it works are up to you. You're sort of talking about it like it's a refinement type, which is a type plus a predicate over values of the type. Refinement types aren't quite dependent types: they're sort of like a dependent pair where the second term is of kind Proposition, not Type; your type in Liquid Haskell would look something like 'type Nine = Int<\n -> n == 9>'.
Your type Nine carries no information, so the most reasonable runtime representation is no representation at all. Any use of the Nine boils down to pattern matching on it, and a pattern match on a Nine only has one possible branch, so you can ignore the Nine term altogether.
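Spelled out as Haskell (my sketch of the parent's nineToUnit/unitToNine, with Nine as a hypothetical one-constructor type):

    -- One constructor each, so both types have exactly one value.
    data Nine = Nine deriving Show

    nineToUnit :: Nine -> ()
    nineToUnit _ = ()

    unitToNine :: () -> Nine
    unitToNine _ = Nine

    -- Both round trips are identities on one-element domains:
    --   unitToNine (nineToUnit Nine) == Nine
    --   nineToUnit (unitToNine ())   == ()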
It is an integer. It is in the definition.
And any value should be equal to nine.
By construction, Nine could have been given the same representation as int at first, except that this is not enough to express the refinement/proposition.
One could represent it as a fat pointer, with space allocated for the set of propositions/predicates against which to check the value.
That allows to check for equality for instance.
That information would not be discarded.
This is basically a subtype of int.
As such, it is both a dependent type and a refinement type, even though it is true that not all refinement types are dependent types, because of cardinality.
I think Maxatar's response in the same thread puts it in words that are possibly closer to the art.
The predicate gets tested every time we do type checking? It is part of the type identity.
And it is a dependent type. Just like an array type is a dependent type, the actual array type depending on the length value argument.
edit: I think I am starting to understand. In currently existing implementations, singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.
And only then we can deal with only types i.e. everything from the same Universe.
> The predicate gets tested every time we do type checking? It is part of the type identity.
When does type checking happen?
I think it happens at compile time, which means the predicate is not used for anything at all at run time.
> edit: I think I am starting to understand. In currently existing implementations, singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.
I am not sure what you mean by "to unabstract them so that the value is part of their identity", sorry. Could you please explain it for me?
> And only then we can deal with only types i.e. everything from the same Universe.
If you mean avoiding the hierarchy of universes that many dependently typed languages have, the reason they have those is that treating all types as being in the same universe leads to a paradox ("the type of all types" contains itself and that gets weird - not impossible, just weird).
> the predicate is not used for anything at all at run time.
It is used for its value when declaring a new variable of a given type at runtime too. It has to be treated as a special predicate.
The allocator needs to be aware of this. Runtime introspection needs to be aware of this.
Parametric type instantiation also needs to know of this since it is used to create dependent types.
The point is that, in the universe of types that dependent types seem to build, singleton types are just types decoupled from their set of values. So they become indistinguishable from each other, besides their name. Or so the explanation seems to go. It is abstracted from their value.
The proposal was to keep the set definition attached. What I call unabstract them.
The point is exactly to avoid mixing up universes, which can lead to paradoxes. Instead of dealing with types as some sort of values, with functions over types mixed up with functions over values (which are also types), and then functions of types and values to make some sort of dependent types, we keep the algebra of types about types.
We just bridge values into singleton types.
Also, it should allow an implementation that relies mostly on normal constrained parametricity and those singleton types.
The point is that mixing values and types (as values) would exactly lead to this type of all types issue.
But again, I am not familiar enough with the dependent type implementations to know exactly what treatment they have of the issue.
Hi aatd86! We had a different thread about existence a couple days ago, and I'm just curious -- is English your second language? What's your first language? I'm guessing French, or something from Central Europe.
Functions that return types are indeed at a higher level than those that don't.
Values can be seen as Singleton types. The key difference is the universe they live in. In the universe of types, the level one level below is perceived as a value. Similarly, in the universe of kinds, types appear to be values.
> Basically, 'value' would be a special constraint on a type parameter in normal parametric polymorphism implementations
Yes this is a level constraint.
> But I guess, the same issue of "which refinement types can be defined, while keeping everything decidable" remains as an issue.
If you're dealing with fully computable types, nothing is decidable.
> Also how to handle runtime values? That will require type assertions, just like union types? Or is it only a compile-time concept with no runtime instantiations? Only some kind of const generics?
A compiler for a dependently typed language is essentially producing programs that have the evaluator itself embedded alongside their input. There cannot be a hard distinction between runtime and compile time, because in general type checking requires you to be able to run the program. Compilation essentially becomes deciding which parts you want to evaluate now and which parts you want to defer until later.
> A typeof function could be an example of dependent type though? Even at runtime?
Typeof is just const.
Typeof : (T : Type) -> (x : T) -> Type
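In Lean 4 syntax, for instance (a tiny sketch of the const-like definition above, not Lean's built-in reflection):

    -- typeof ignores its second argument and returns the first.
    def typeof (T : Type) (_ : T) : Type := T

    -- It computes away: typeof Nat 9 is definitionally Nat.
    example : typeof Nat 9 = Nat := rfl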
Final note: I've written some compilers for toy dependently typed languages. By far, dependent typing makes both the language and the compiler easier, not harder. This is because Haskell and C++ and other languages with type systems and metaprogramming or generics actually contain several languages: the value language that we are familiar with, but also the type language.
In Haskell, this is the class/instance language, which is another logic programming language atop the lambda calculus language. This means that to write a Haskell compiler you have to write a compiler for Haskell and for the logic programming language (which is Turing complete, btw) that is the type language.
Similarly, in C++, you have to implement C++ AND the template system, which is a functional programming language with incredibly confusing semantics.
On the other hand, for a dependently typed language, you just write one language... The types are talked about in the same way as values. The only difficulty is type inference. Still, an explicitly typed dependent language is actually the easiest typed compiler to write, because it's literally just checking for term equality, which is very easy! And since type inference for a dependent language is never going to be perfect, you're a lot freer to make your own heuristic decisions and forgo HM-like perfection.
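A toy sketch of that "just term equality" core, assuming an untyped λ-calculus with de Bruijn indices (my own illustration; a real checker adds Pi types, contexts, and totality checking or fuel):

    data Term
      = Var Int          -- de Bruijn index
      | Lam Term
      | App Term Term
      deriving (Eq, Show)

    -- Shift free variables at or above cutoff c by d.
    shift :: Int -> Int -> Term -> Term
    shift d c (Var k)   = Var (if k >= c then k + d else k)
    shift d c (Lam t)   = Lam (shift d (c + 1) t)
    shift d c (App f a) = App (shift d c f) (shift d c a)

    -- Substitute s for variable j.
    subst :: Int -> Term -> Term -> Term
    subst j s (Var k)   = if k == j then s else Var k
    subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
    subst j s (App f a) = App (subst j s f) (subst j s a)

    -- Normal-order reduction to beta-normal form (may diverge on loops,
    -- which is why real checkers demand totality).
    normalize :: Term -> Term
    normalize (App f a) = case normalize f of
      Lam body -> normalize (shift (-1) 0 (subst 0 (shift 1 0 a) body))
      f'       -> App f' (normalize a)
    normalize (Lam t)   = Lam (normalize t)
    normalize t         = t

    -- The whole "type checker core": are two terms equal up to computation?
    convertible :: Term -> Term -> Bool
    convertible a b = normalize a == normalize b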
Ok, so it was tongue-in-cheek, if not obvious, but thanks, whoever, for the downvote.
Then, a bit more serious... There might be even better examples, but let's consider that someone is part of a community that can use what is considered a slur, depending on context, or a term of endearment, depending on context and who uses it, etc...
If someone else uses it but fails to disclose that they belong to said group.
When asked, they can refuse to disclose it.
Is it fair to get them banned from the community? Can we consider that they might be lying by omission? After all, they didn't answer, and they might pass themselves off as part of a community.
There are also colloquial considerations in online interactions that might be taken into account.
This is not really what I was veering toward initially, but simply a way to bring some more nuance, since humor doesn't work here apparently.
This is the sort of thing we see on twitter/X etc. You can't force people to speak differently, and you can't force people to disclose information they would not want to disclose, but you may want to have some sort of policy to rule on these kinds of issues.
It is a lie if they use it as if they were a full-fledged member of a community while not actually being a true member of said community.
If I disguise myself as a man, that does not mean that I can go to the men's restroom. If I am asked for proof that I am actually female for some reason, can I decline to show such proof?
And regarding arguing in bad faith, I was not arguing. Maybe you are not aware of the expression 'lying by omission'? But the smileys I used were supposed to make obvious that it was a joke/tongue-in-cheek. Even the initial question was tongue-in-cheek. Do you sincerely believe that I expect to receive some credit card info?!!!
I acknowledge that this example might not be the best, since the lie in the first place is the disguise.
But, not everything is ruled by law, especially online. Which is also the point of the question.
"It is a lie if they use it as if they were a full-fledged member of a community while not actually being a true member of said community."
That would be a lie, yes. (I found your example above not clearly written and still am not quite sure what you meant exactly)
"And regarding arguing in bad faith, I was not arguing. Maybe you are not aware of the expression 'lying by omission'? But the smileys I used were supposed to make obvious that it was a joke/tongue-in-cheek. Even the initial question was tongue-in-cheek. Do you sincerely believe that I expect to receive some credit card info?!!!"
Asking for information and someone declining that information has nothing to do with lying by omission. That you try to make a connection here is what makes me believe you are not debating (or talking, or whatever) in good faith.
"But, not everything is ruled by law, especially online. Which is also the point of the question."
But this is about a concrete community, where my point is, they can very much rule certain things by their law.
And to me, by default, lying is evil. And so is not banning those who lie (which was the starting point here).
So on an online forum, do you really think that you can force people to choose a profile picture that represents them accurately?
Or is it a lie and it is a bannable offense?
How do you enforce the truth?
> Asking for information and someone declining that information has nothing to do with lying by omission.
You may want to look up the definition of 'lying by omission'. Within the context of asking for profile information, it might well be. My point is that you need to be more measured.
Even lying can be for protection at times. Sometimes it is not. It is not as straightforward as you make it seem.
As someone who is currently writing their own JS framework: LLMs are able to generate code quite easily, so I am not worried about whether we will keep seeing new frameworks.
Now, about the incentives? Probably lower inference costs for LLMs, which probably means the frameworks are more legible for humans than the current state of the art as well.
Fewer API changes than, say, React also means the generated code has less branching, although LLMs can adapt anyway. Cheaper.
They will probably be closer to the platform too (vanilla JS).
For release but not for development.
That is sufficient for the build step to take a long time, and you start to notice the friction.
The web/browser should not rely on bundlers and compilation steps overall. This should remain optional.
Hot reloads in a modern bundler like Vite will typically be instantaneous. Normally in development, only dependencies are bundled, and the files you write are served as-is (potentially with a per-file compilation step for e.g. JSX or TypeScript). That means that when you save a file, the bundler will run the compiler over that single file, then notify the hot-reload component in the browser to re-fetch it. That would be quick even if it were done in JavaScript, but increasingly bundlers use parts written in Go or Rust to ensure that builds happen ever more quickly.
If you've got a huge project, even very quick bundlers will end up slowing down considerably (although hot reload should still be pretty quick because it still just affects individual files). But in general, bundlers are pretty damn quick these days, and getting even quicker. And of course, they're still fully optional, even for a framework like React.
Not really optional for React, since it relies so heavily on JSX...
You can write React without it, but then is it React? What about the libraries you may want to import, or code that an LLM will generate for you?
There should be something better.
JSX is not really needed. We have templates. Besides, it really is a DSL with a weird syntax.
I'm doubtful it will ever become an ES standard. And for good reasons.
That should be left to the different frameworks to handle.
If you use them raw, yes. But they are just the building block you can build upon.
And that's a really good building block. You can create your own parsers. I am doing exactly this for a framework that has yet to be released, full disclosure.
It makes HTML clearly HTML, and JavaScript fully JavaScript. No bastardization of either one into a chimera.
And the junction of the two is why the custom parser is required. But it is really light in terms of dev experience.
I would not be surprised if the universe were somewhat elastic: it expands and then contracts and then expands, ad infinitum.
After all, existence in itself is irrefutable and cannot not exist by definition.
If we subscribe to a theory of the multiverse, set theory, likelihood, and interaction-driven evolution based on gradient-like fundamental laws.
Locally changing. Obviously everything shares a fundamental quality that is part of existence itself. But obviously there are sets; there is differentiation. But it is not created: the infinity of unconstrained possibilities exists in the first place and reorganizes itself, a bit like people being attracted to people who share some commonalities or have something they need from each other, forming tribes. The same process kind of works for synapse connections, for molecule formation, for atoms... etc...
Everything is mostly interacting data.
We could say that the concept of distance is a concept of likelihood. The closer is also the most likely.
Just a little weird idea. I need to think a bit more about it. Somewhat metaphysical?
You have a material view of existence perhaps.
How would the notion of nothingness even exist if there was no existence in the first place?
And if we even accepted that nothing was possible, which in itself doesn't make any sense, how would something even start to exist?
Well the contradiction is already in the fact that there is a preexisting concept of nothing in the first place.
Existence is impredicative too.
It defines itself. That's a fact.
It is not because it is impredicative that it needs to be hard to understand I think. It's almost a tautology rather.
Oh, by the way, forgniz exists: you made it up to designate something. It doesn't have to refer to something material. It could be an idea. After all, inventions don't exist by being material in the first place.
But ideas have at least a material support (your brain signals) and the data acquired through your body. As far as we know.
> How would the notion of nothingness even exist if there was no existence in the first place?
It wouldn't, that's the point. There is no need for a "notion of nothingness" if nothing exists.
Why do you think nothingness doesn't make any sense? It's a simple concept: no space, no time, and therefore nothing else such as matter, etc.
> how would something even start to exist?
Perhaps it wouldn't. We weren't talking about the origin of the universe from nothing. If you want to say existence is irrefutable because we observe it, that's fine. But it's not irrefutable because of its definition, that's religious circular logic, like the ontological argument.
Not really; in mathematics or type theory it is a proof.
But that's besides the point.
If there was nothing, then we wouldn't be able to describe it.
We are only describing it because we think it might exist. So in itself it is illogical for it to exist since it can only exist if it does not exist.
Even if we had no data (the state before birth, let's say), we still exist as a probability that is about to come to fruition.
That is still besides the point.
If there was nothing, as you are trying to call it, there would not be existence.
But then we would not be here. reductio ad absurdum. Even if life is a dream, it is still something, an experience. It is still an existence at some level. You are not discussing with nothing, while being nothing :)
You’re being downvoted, but your point is true — something can exist “by definition”, and yet not exist in our real world. The thing that exists “by definition” is just a version that we have imagined to exist by definition. But imagining something with property X doesn’t imply anything can actually be found with property X.
Side-note: the ontological argument is an argument for the existence of God, which uses the same principle as the grandparent. “Imagine God. Imagine God is good. A good God should exist, because otherwise that god is not good. Therefore, the good God we imagined has the property of existence. Therefore God exists”. The issue is exactly the same — we can imagine something with property X, but that doesn’t mean we can find something with property X
Nope :)
It's not about that. It's not because I imagine that there is a banana in front of me that there will be one. It's not tied to material existence in that way.
It's perhaps another notion of existence which should be more mathematical.
You could think of it as "God" provably existing as an idea, one that might or might not be realized, probabilistically, in our material world.
The idea exists obviously. Same as "Zeus"... or "Batman" any other such notions.
"Existence" being different from "alive" as we colloquially understand it.
The point is that the absence of anything is still something.
The idea of nothing can only exist if there is existence first. How does it make sense? Then nothing can't exist.
Not as an absolute. It can only be a relative negative within a weirdly heterogeneous infinity.
Or you could see it as a predicate, sometimes false, sometimes true.
It forms a lower universe of types than existence, which is the set of all predicates. Predicates just... exist. They don't have to return true all the time.
Funny thing is to ask: 'Is blue, blue?'
Now with existence: 'Does existence, exist?'
And then a bit differently: 'Is nothing something?'
We see that these are different types of impredicativity.
Existence just needs itself to define itself.
Nothing cannot exist if nothing actually somehow is. It needs existence.
Blue is a word. It does not exhibit the characteristic it describes.
The set of all things blue does not contain the proposition 'blue'.
While the set of all things that exist contains itself?
Sweet baby Ouroboros ;D
There are two main claims that I think you may be touching on:
1. The question of whether concepts exist in the absence of a human mind to imagine them. This is still debated in philosophy. I'm not an expert, so I won't make a claim about this, but I will point out that if it were easy to resolve, it probably wouldn't be a field of active debate after 2000+ years.
2. The question of whether it is necessary that _something_ physically must exist. This I do make a strong claim on: it is not necessary that something physically exist. There is no law that forces objects to exist. We find ourselves in a universe where objects do exist. This is not required. It just happens to be the case.
Side-note: I find the response "Nope :)" to be kind of condescending. I realize English may be a second language for you, so maybe you don't feel the subtle jab in it -- no worries if so, I'm sure I make the same mistakes in other languages all the time. Smiley faces are definitely allowed online, but in general I'd say to use them when making a joke or when acknowledging your own mistake.
Edit: In case somebody is curious, "the question of whether concepts exist in the absence of a human mind to imagine them" has been debated at least since Plato's time. I believe these concepts-that-exist-without-humans are sometimes called Platonic Forms. They are good for a Wikipedia binge!
I thought the smiley would make the 'Nope' less argumentative. Sorry if you felt it was offensive.
This was in response to:
> Side-note: the ontological argument is an argument for the existence of God, which uses the same principle as the grandparent
which was not actually true. This is not the same principle. Maybe the way I expressed the idea wasn't too clear.
A close principle would be Descartes' cogito, perhaps...
The question of whether a concept exists even in the absence of the human mind is easy to answer. Without appeals to authority, it suffices to realize that every past event that predates a human being is a concept for that same human.
Every future event, even what one is likely to do the next day, is also a concept.
Besides, why human? This is too anthropocentric. It should be extended to animals at the very least.
Or let's take another example: you don't really perceive UV light, and let's say you've never been told that it exists and you live in a cave. You will never interact with it. That does not mean that it does not exist, whether as a physical concept or merely a pure concept, which is then a probability. Even if that probability is 0, or even negative (negative??? we are veering quantum :).
It's probabilistic; not all of these concepts are realized materialistically (for future events, that is).
An apple exists even in the absence of humans. So does its concept. Awareness of the existence of this concept is a different thing.
One must not forget that, as wise and introspective as some of the ancients were, they were also prone to a lot of cognitive biases such as anthropocentrism.
In essence, my original point is closer to that of the Greek philosopher Parmenides.
But this is again not about physical existence. Matter is just data with a set of properties and interaction rules, one of them being existence. A physicist would call matter a special kind of spatial perturbation, perhaps.
On a whole other note, I am curious: what made it appear as if English was a second language? :)
No worries, I did feel it was a bit dismissive, but no hard feelings at all.
I think your main claim is that concepts exist even if there is not a human to perceive/think of them. I have no horse in that race, sorry if that's disappointing haha.
However, sometimes your comments seem to be claiming that _something_ must exist. _This_ I disagree with. We can observe that something does indeed exist in our universe. However, there is nothing that forced that to be the case. It just happens to be.
Regarding English, your phrase choice is just a bit odd and somewhat poetic, haha. It reminds me a bit of my dad, whose first language was Farsi. Here are a couple of concrete examples from your writing:
- “Maybe the way I expressed the idea wasn’t too clear”. “Too” is close in meaning to “excessive” (among other uses). I think it would be more common to see “wasn’t so clear” or “wasn’t clear enough”
- “Besides, why human?” I think there are a few words that have been dropped here, which would not normally be dropped even in casual English. You mean something like “Besides, why do we need a human perspective?” From context your meaning is clear, but the phrase “why human?” just feels unusual. I think the phrase “why X?” is common when X is a verb, but not so much when X is a noun. Consider, “why drive?”, “why worry?”, “why wear that?” all sound normal to me. On the other hand “why apple?”, “why lamp?”, “why monkey?” all seem unusual, even somewhat humorous.
- “On a whole other note”: i think the common phrase here is “on another note”. I’ve never heard “whole other note” before.
And now I'm curious: Is English your second language? In either case, your writing is unique in a very interesting way, and not something you should be worried about. I like the style; it gives you much more personality than most comments I see.
Edit: I can't help myself, I want to guess where you're from, lol. My best guess is Central Europe. The use of “too” to mean “so” feels vaguely French to me, although that's probably just based on Hollywood portrayals, since I don't know any French. So I'll say French is your first language, but I'll claim victory if it's anything from Central Europe :P
Edit edit: after googling, I see France isn't usually counted as Central Europe. But I'm leaving my guess as France + Central Europe
Haha, before I answer you, could you do me a favor?
I'd like you to paste the message into an LLM of your choice and also tell me what it infers. I think that could be very interesting. ;p
Then I'll give you an answer.
Haha, I did the same and it is a bit more nuanced. But it is hinting at either a native speaker or French.
I do speak French. But it is still quite surprising to me. Although it did not pick up on everything, or it pointed at things that should indicate something other than French, regarding punctuation for instance.
I think they mean existence in general, not the existence of any specific thing. Meaning that if there were no “existence” then we wouldn’t be here to consider its nonexistence.
> I think they mean existence in general, not the existence of any specific thing.
Yes, but the definition of "existence" doesn't require that anything must actually exist.
In other words, it is not the case that existence "cannot not exist by definition."
> Meaning that if there were no “existence” then we wouldn’t be here to consider its nonexistence.
That's an anthropic principle argument, which is not an argument from the definition of existence. One of the premises of that argument is that we exist already.
How can nothingness exist, if it is supposed not to exist since it is nothingness?
Concepts are not really for humans, but humans can grasp them. Or would you say that the sun only exists because (some) humans see it?
It's not because a human is unaware of something that it does not exist. Its concept is still there somewhere, independent of its treatment by human cognition.
Eventually we will find that the heat death of the universe and the big bang are the same thing. Since the totality of the universe is always a oneness, from the universal perspective the infinitely small and the infinitely large are the same thing (one); they by nature bleed into (and define) each other, like yin and yang.