Untrue. Unsold inventory must be represented on the balance sheet as an asset. Marking down the value of thousands of Cybertrucks will drop the book value of the company. It will impact various financial ratios that are used to estimate value. Worse yet, a public admission that they can't sell these things can undermine the confidence that is the only thing propping up the stock's value right now.
That's not how valuations work. The investors with enough capital to affect stock prices are not ignorant of the inventory situation, nor are they fooled by assets held on the balance sheet above fair market value. Your comment is extremely naive and quite disconnected from reality.
The person you're replying to (and insulting) spelled out the normal operations of a mature capitalist economy based on publicly owned and traded companies. I'm not being a "conspiracy theorist" here; it's how things are, and how the economy insulates itself from shocks and swings.
To your point on "investors not being ignorant": have you followed the last 5 years of Tesla's stock valuation, and the constant signal that "this company's P/E is disconnected from reality"? Meaning, despite no sales growth, no consumer goodwill, etc., investors continued to pour money in. We're seeing the reverse effect now: a slow, massive drawdown in stock value.
Anyway.
In the real world, a board (and in particular the finance director) is more likely to completely write off the unsold fleet, as it would allow both tax insulation and a cleaner financial statement for auditors and investors.
Mazda recently did this with the entire MX-30 program to signal to investors "this was actually intended from the beginning and allowed us EV research."
This happens in every single industry every day. Were Tesla to just mark the trucks down, that would instantly signal to investors (note: not experts) that there is a cash problem at the firm. There is nothing worse for a public company to signal.
Thank you very much for a great reply, but in no way can I see how destroying unsold vehicles could ever be better than selling them for less.
If you say they can't lower the price because it signals to investors there's a problem, doesn't having a massive number of vehicles sit unsold and then be destroyed signal the same problem? How is one signal better than the other? Are they signaling that they've got so much money they just don't care? That also seems like a bad signal to send. Far better to say publicly: hey, we made a mistake, this vehicle is not selling, so we're going to sell what we made for less and stop making it.
None of this theorizing about signaling to investors makes any sense as long as we know that the vehicles aren't selling.
That's not how valuations work. P/E ratio is just one factor, and a minor one at that for growth stocks. It's possible that Tesla will underperform the market but you could say the same about any volatile stock.
Your claim about writing off the unsold inventory is just silly, and displays a stunning ignorance of the basics of corporate income tax law and accounting. The vehicles will eventually be sold, perhaps at a deeply discounted price. They won't be just tossed in a landfill or something.
Marking down assets has zero impact on cash or cash flow so your comment about that makes no sense at all. The main shareholders are sophisticated institutional investors who aren't fooled by simplistic financial engineering tricks.
> 1. (transitive, Internet slang) To apply several beauty filters to (a picture or video of someone), typically making the subject look more made-up, potentially more feminine, and often unrecognizable.
> 2. (figuratively, sometimes derogatory) To present (something) as fashionable and glamorous, often by removing or disguising aspects which are considered unappealing.
> Fast-moving consumer goods (FMCG), also known as consumer packaged goods (CPG)[1] or convenience goods, are products that are sold quickly and at a relatively low cost.
So I read it as consumer packaged goods which are presented as fashionable and glamorous using Internet marketing and real-world branding, for what is likely generic brand quality.
That sounds like an admission that it’s a bad thing and not an argument about why Musk’s department should be allowed to do more of the same with even less oversight.
It is a big mistake to think that most computability theory applies to AI, including Gödel's Theorem. People start off wrong by talking about AI "algorithms." The term applies more correctly to concepts like gradient descent. But the inference of the resulting neural nets is not an algorithm. It is not a defined sequence of operations that produces a defined result. It is better described as a heuristic: a procedure that approximates a correct result but provides no mathematical guarantees.
Another aspect of ANNs that shows Gödel doesn't apply is that they are not formal systems. A formal system is a collection of defined operations. The building blocks of ANNs could perhaps be built into a formal system; Petri nets have been demonstrated to be computationally equivalent to Turing machines. But this is really an indictment of the implementation. It's the same as using your PC, which implements a formal system (its instruction set), to run a heuristic computation. Formal systems can implement informal systems.
I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
AI is most definitely an algorithm. It runs on a computer, what else could it be? Humans didn't create the algorithm directly, but it certainly exists within the machine. The computer takes an input, does a series of computing operations on it, and spits out a result. That is an algorithm.
As for humans, there is no way you can look at the behavior of a human and know for certain it is not a Turing machine. With a large enough machine, you could simulate any behavior you want, even behavior that would look, on first observation, to not be coming from a Turing machine; this is a form of the halting problem. Any observation you make that makes you believe it is NOT coming from a Turing machine could be programmed to be the output of the Turing machine.
> With a large enough machine, you could simulate any behavior you want
This is not exactly true, depending on what you mean by behavior. There are mathematical functions we know for a fact are not computable by a Turing machine, no matter how large. So a system that "behaves" like those functions couldn't be simulated by a TM. However, it's unclear whether such a system actually could exist in physical reality - which gets right back to the discussion of whether thinking is beyond Turing completeness or not.
> But the inferences of the resulting neural nets is not an algorithm.
Incorrect.
The comment above confuses some concepts.
Perhaps this will help: consider a PRNG implemented in software. It is an algorithm. The question of the utility of a PRNG (or any algorithm) is a separate thing.
Heuristic or not, AI is still ultimately an algorithm (as another comment pointed out, heuristics are a subset of algorithms). AI cannot, to expand on your PRNG example, generate true random numbers; an example that, in my view, betrays the fundamental inability of an AI to "transcend" its underlying structure of pure algorithm.
1. If an outside-the-system observer cannot detect any flaws in what a RNG outputs, does the outsider have any basis for claiming a lack of randomness? Practically speaking, randomness is a matter of prediction based on what you know.
2. AI just means "non-human" intelligence. An AI system (of course) can incorporate various sources of entropy, including sensors. This is already commonly done.
On one level, yes you’re right. Computing weights and propagating values through an ANN is well defined and very algorithmic.
On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
I suspect that at that level you can think of it as an algorithm with unreliable outputs. I don't know what that idea gains over thinking of it as non-algorithmic, just a heuristic approximation.
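The "well defined and very algorithmic" level is easy to show. Here's a toy forward pass through a fully connected net (the weights and layer sizes are invented for illustration): a fixed, finite sequence of multiply-add and activation steps.

```python
import math

def forward(x, layers):
    """One inference pass through a tiny fully connected net.
    Each layer is (weight_rows, biases); activation is tanh."""
    for weights, biases in layers:
        x = [
            math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)
        ]
    return x

# A made-up 2 -> 2 -> 1 network.
layers = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
out = forward([1.0, 2.0], layers)  # deterministic for fixed weights
```

Nothing here is mysterious: same weights plus same input always gives the same output. The open question is at the level above, i.e. why trained weights represent the knowledge they do.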
"Heuristic" and "algorithmic" are not antipodes. A heuristic is a category of algorithm, specifically one that returns an approximate or probabilistic result. An example of a widely recognized algorithm that is also a heuristic is the Miller-Rabin primality test.
“Algorithm” just means something which follows a series of steps (like a recipe). It absolutely does not require understanding and doesn’t require determinism or reliable outputs. I am sympathetic to the distinction that (I think) you’re trying to make but ANNs and inference are most certainly algorithms.
> On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
It is hard to assess the comment above. Depending on what you mean, it is incorrect, inaccurate, and/or poorly framed.
The word “really” is a weasel word. It suggests there is some sort of threshold of understanding, but the threshold is not explained and is probably arbitrary. The problem with these kinds of statements is that they are very hard to pin down. They use a rhetorical technique that allows a person to move the goal posts repeatedly.
This line of discussion is well covered by critics of the word “emergence”.
> But the inferences of the resulting neural nets is not an algorithm
It is a self-delimiting program. It is an algorithm in the most basic sense of the definition of “partial recursive function” (total in this case) and thus all known results of computability theory and algorithmic information theory apply.
> Formal system is a collection of defined operations
Not at all.
> I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
We have zero evidence of this one way or another.
—
I’m looking for loopholes around Gödel’s theorems just as much as everyone else is, but this isn’t it.
Heuristics implemented within a formal system are still bound by the limitations of the system.
Physicists like to use mathematics for modeling the reality. If our current understanding of physics is fundamentally correct, everything that can possibly exist is functionally equivalent to a formal system. To escape that, you would need some really weird new physics. Which would also have to be really inconvenient new physics, because it could not be modeled with our current mathematics or simulated with our current computers.
To be fair, I muddled the concepts of formal/informal systems versus completeness and consistency. I think if you start from the assumption that an ANN is a formal system (not a given), you must conclude that it is necessarily inconsistent. The AI we have now hallucinates way too much to conclude any truth derived from its "reasoning."
Excuse me, what are you talking about? You think there is any part of computability theory that doesn't apply to AI? With all respect, and I don't intend this in a mean way, but I have to call this out as exactly nonsense. There is a fundamental misunderstanding here of computability theory, Turing machines, the Church-Turing thesis, etc.; any standard text on the subject should clear this up.
But surely any limits on formal systems apply to informal systems? By this, I am more or less suggesting that formal systems are the best we can do, the best possible representations of knowledge, computability, etc., and that informal systems cannot be "better" (a loaded term herein, for sure) than formal systems.
So if Gödel tells us that either formal systems will be consistent and make statements they cannot prove XOR be inconsistent and therefore unreliable, at least to some degree, then surely informal systems will, at best, be the same, and, at worst, be much worse?
I suspect that if formal systems were unequivocally "better" than informal systems, our brains would be formal systems.
The desirable property of formal systems is that the results they produce are proven in a way that can be independently verified. Many informal systems can produce correct results for problems without a known, efficient algorithmic solution. Lots of scheduling and packing problems are NP-complete, but that doesn't stop us from delivering heuristic-based solutions that work well enough.
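A concrete example of such a heuristic is first-fit decreasing for bin packing (the item sizes and capacity below are made up). It carries no proof of optimality, yet is good enough for lots of real scheduling work:

```python
def first_fit_decreasing(items, capacity):
    """Bin-packing heuristic: sort items largest-first, place each
    into the first bin it fits in, opening a new bin if none fits.
    Not guaranteed optimal, but bounded (roughly 11/9 * OPT + 1 bins)."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

packed = first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], capacity=10)
```

Finding the provably minimal number of bins is NP-hard; the heuristic trades that guarantee for a fast, verifiably feasible answer.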
Edit: I should probably add that I'm pretty rusty on this. Gödel's theorem tells us that if a formal system (of sufficient power) is consistent, it will be incomplete. That is, there will be true statements that cannot be proven in the system. If the system is complete, that is, all true/false statements can be proven, then the system will be inconsistent. That is, you can prove contradictory things in the system.
The AI we have now isn't really either of these. It's not working to derive truth and falsehood from axioms and a rule system. It's just approximating the most likely answers that match its training data.
All of this has almost no relation to the questions we’re interested in like how intelligent can AI be or can it attain consciousness. I don’t even know that we have definitions for these concepts suitable for beginning a scientific inquiry.
Yeah I don’t know why GP would think computability theory doesn’t apply to AI. Is there a single example of a problem that isn’t computable by a Turing machine that can be computed by AI?
It does apply to AI in the sense that the computers we run neural networks on may be equivalent to Turing machines, but the ANNs themselves are not. If you did reduce an ANN to a formal system, you would likely find, in terms of Gödel's theorem, that it is sufficiently powerful to prove a falsehood, thus failing the consistency property we would like in a system used to prove things.
I've replaced the battery and SSD on my 2015 15" MBP. SSD was very easy to replace. Battery replacement was very involved and took about two hours. It is doable if you're methodical.
You might read the article you linked to at wikipedia. It doesn't really support your point. Real (inflation adjusted) figures are more useful than nominal for measuring income growth.
"According to the CBO, between 1979 and 2011, gross median household income, adjusted for inflation, rose from $59,400 to $75,200, or 26.5%.[18] However, once adjusted for household size and looking at taxes from an after-tax perspective, real median household income grew 46%, representing significant growth."
Are you actually trying to use a time series starting before Millennials were even born to try to prove a point about financial situations today? On top of trying to apply median household numbers at the national level to a single cohort? Please stop. Look at actual data like [1]. Or at least look at the whole picture like [2] instead of cherry picking income data and acting like you can just stop there.
Also things like:
"U.S. economic growth is not translating into higher median family incomes. Real GDP per household has typically increased since the year 2000, while real median income per household was below 1999 levels until 2016, indicating a trend of greater income inequality"
And:
"Total compensation's share of GDP has declined by 4.5 percentage points from 1970 to 2016. This implies that the share attributed to capital increased in that period."
Also:
"Measured relative to GDP, total compensation and its component wages and salaries have been declining since 1970. This indicates a shift in income from labor (persons who derive income from hourly wages and salaries) to capital (persons who derive income via ownership of businesses, land and assets). This trend is common across the developed world, due in part to globalization.[16] Wages and salaries have fallen from approximately 51% GDP in 1970 to 43% GDP in 2013. Total compensation has fallen from approximately 58% GDP in 1970 to 53% GDP in 2013"
The whole picture here is a pretty mixed bag and not at all a case that the median wage for the average worker is phenomenal.