
…and I wouldn’t have to read this kind of drivel. Sounds like a blessing.

I’ve been working on a font-drawing app that only uses the keyboard (inspired by vim) for a couple of months. Finally got export to TTF working this weekend and got to see a bad-looking j I drew rendered in Figma.

Hey, you've probably seen these already, but Redblobgames has a great series of posts about creating good labels for maps: https://www.redblobgames.com/blog/2024-08-20-labels-on-maps/ https://www.redblobgames.com/blog/2024-10-09-sdf-curved-text... (There are a few others too)


Tried this, thought: "this seems kind of like an insubstantive thing to build, there's no attempt to actually teach you and it's all very generic. I wonder if this is just some ai stuff someone spit out in an hour and tossed on HN".

Sure enough, you go to their homepage and it's filled to the brim with genai stuff. Guess it's good I'm getting some intuition for these things.


Please don't cross into personal attack on HN generally, and especially please not in Show HN threads. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/showhn.html


Yeesh. Personal attacks aside - there's plenty on my blog that I've written that has nothing to do with AI.

The Physiological Constraints of Free Will https://mordenstar.com/essays/free-will

The Lost Art of Windows 95 Pranking https://mordenstar.com/blog/win9x-hacks

The Q-Basic Continuum https://mordenstar.com/blog/q-basic

The Infinite Wish Generator https://mordenstar.com/blog/on-genies

Bible Stories https://mordenstar.com/blog/bible-stories

They're all things I conceived of, thought of, and created. I do use GenAI for some of the related pictures - but it's in the augmentation of my original ideas.

With all due respect, I'll stack up anything I've ever created against anything of yours at any time.


> With all due respect, I'll stack up anything I've ever created against anything of yours at any time

I realize the GP comment was a provocation, but please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


> With all due respect, I'll stack up anything I've ever created against anything of yours at any time.

I love this response. For what it's worth, there is some thought here in this app.


Hacker News has become weirdly anti-hacker in the last 5 or so years, so please keep building stuff and keep posting it. This is literally what HN is supposed to be. The "AI slop" tirade is just bottom-of-the-barrel bandwagoning for upvotes because it's popular to hate AI today.


Thanks for the support. Honestly, I probably shouldn’t get so defensive either; it’s a bad habit and a pretty poor "evolutionary holdover" in the internet age of anonymity and social media.

I thought one way to help mitigate my emotional responses was to desensitize myself, but who really wants to expose themselves to the requisite threshold of personal attacks? That’s not exactly a fun callus to develop.


One of the counterintuitive aspects of the LLM boom is that agentic coding allows for more weird/unique projects that spark joy with less risk due to the increased efficiency. Nowadays, anything that's weird is considered AI slop and that's not even limited to software development.

No, "LLMs can only output what's in their training data" hasn't been true for awhile.


Pay me no heed, I'm sure you'd find many of the things I've made insubstantive.


I'm normally in the camp of "why flood HN with AI crap" and if you are not a musician then I can see why this seems unnecessary. But as a musician, this is a great learning tool. Every musician should be able to play by ear (and I had to ramp up the difficulty substantially to get a bit of a challenge). AI generated or not, this is useful.


Yeah, I've been playing 40 years and did a stint in music school. Other than fat-fingered note-entry errors, my ear nailed all the ones I did. IMO this seems to start from a pretty advanced level off the bat.


I've been playing piano for 40 years and tend to hate anything with AI buzzwords anywhere adjacent to it. But I generally think this particular one is a good thing.

Curious what you mean by no attempt to teach. We learn multiplication tables by rote. Are flash cards a genuine instrument of learning? The only way to learn intervals is to practice identifying them. This is how you do it. You can read about music theory (and should) but the only way to build your listening skills is to practice it starting with basic stuff.


This teaches intervals like Duolingo teaches language rules. You sort of pick them up because you need them to figure out the small melody it plays. But you don't get the concept of a 'fourth' or a 'fifth' and there's never a moment where the actual rules are explained.

That said, I think it's very useful for what it is and highlights that whatever your view on AI, there is a niche here that AI can fill that people otherwise would just not build either because they don't think it is interesting, or because no one would pay enough for it.


I addressed that. You should read a book to learn the definition of intervals. But in addition, there's no substitute for ear training. Grinding on interval identification is just as valid as this. Once you get to a level where you can identify intervals on the keyboard, the skills are pretty transferable. But there's just no way to learn what a fifth sounds like by reading a book. You need something like this. There is probably room to add a mode that says "this is a fifth" after you identify a fifth. Or to choose a named interval or chord quality based on hearing it. But I don't think any of that diminishes the utility of what's here.

FWIW I think it's probably more useful to play what you hear than it is to be able to name it. Although they're both good.


Right, and I addressed all that as well. I doubt we are in serious disagreement here, and calls for me to “read a book” are frankly rude. I think you need to be more generous in your interpretation of others’ words, because I actually disagree with the original poster for the most part; you obviously just have a different definition of “teach” than he does. Flash cards don’t teach. They assist memorization or practice. Memorizing times tables doesn’t teach multiplication, except trivially for the numbers you’ve memorized. It does assist in learning multiplication. Likewise, this ear training can trivialize learning and identifying intervals later, but is not itself “teaching intervals”.


I'm not asking you to read a book. Sorry for being unclear. The reading a book stuff all started from this in my original comment:

> You can read about music theory (and should) but the only way to [...]

My point is just that "you" (an abstract you) can learn music abstractly and in practice. Some things require book reading. Some things require practice and listening. Nothing intended about the cgriswald "you".

I know how to do long-hand multiplication and have memorized the 12x12 multiplication table. I'm not sure which one is more valuable, but I think they complement each other.

I'm not sure if we actually disagree about anything, except maybe the relative value of knowing what an interval sounds like vs what it's called.


Ah, apologies for my misunderstanding. Maybe I should be more generous in interpreting others’ words. I don’t think we disagree about that either. To me it isn’t about “what it’s called” but about the concept itself. Intervals are “hidden” in this ear training. You get them for free, but you don’t necessarily learn that the pattern is there at all. I can agree that the doing ability is more important than the concept, but it’s not just about the name. That’s just what we have to use to talk about it.


It's ear training, not theory training, right?


Yes, which is why it doesn’t teach intervals, but is still useful.


The key thing is that you teach multiplication tables in a structured, incremental manner. Yes, it's just rote memorization, but the structure makes it way easier. You don't just dump all tables on the student at once and start quizzing them until they get it.

IMO, not being able to select a subset of intervals to train on heavily limits how useful this is.


There are plenty of musicians here saying this is useful for them or would be useful while learning.

As meta commentary, those not in a subgroup sometimes fail to see the utility of a thing built for that subgroup. It's easy to feel a sense of superiority ("oh, how dumb and trivial this thing is"), but it may be better to first have curiosity and see how the intended audience responds. Often it's not dumb or trivial; you're missing the context and experience to see the value.


I've played the piano for years. Your immediate conclusion that my dislike must stem from inexperience instead of a more nuanced place strikes me as the exact kind of thing you're lamenting in your comment.


As the other poster said, your comment didn't really leave any room for nuance; it was "ai bad". And it's also clear you're too egoistic/defensive to reflect on it.

Other commentary: you're not owed a courtesy you yourself didn't give.


> a more nuanced place

Your original comment implies "it's GenAI so it must be bad."


It’s the typical “engineer thinking they’re smarter than everyone else” trope. From my experience, engineers fall squarely in the middle of the bell curve. The AI hate is just used as justification, so I don’t even take it that seriously. And FWIW, as someone who played piano when I was younger, this is 100% a useful tool. In fact, during quarantine I was learning to play guitar and used tools like this to learn which string is which by ear.


I think this is much better as a relative pitch training tool for people with a very basic background in piano and music in general. I would have loved something like this back in high school to use for practicing over and over.

I think "teach" is a high bar, but I do think it's a good practice tool.

My one and only complaint is that sometimes the melodies it generates are tough to play back because they don't really sound like a real melody and I have to fight my brain telling me to play back the one that would actually sound good. Sort of like having to memorize a random string of words vs memorizing a normal sentence.


As someone who likes to play piano and guitar, I could see this immediately as something to improve my skills and playing by ear.

For me, it prompted a thought I almost never think and had long forgotten: "where is that old home button in my browser so I can set this as my homepage, or maybe I have to solve 2-3 of these before I can log in to my computer" xD

"Insubstantive" is a nice word. Software that is modifiable by the user at run time - I guess like scripting ("it's just throwaway"), or a bit of Emacs elisp and a keyboard macro you write and move on from - "insubstantive".

Embrace the insubstantive! Otherwise, enjoy it when you have a "problem" and end up having to sit down, abstract more, find the general case, and solve for "N", because the time investment was high and the tools did not allow for this sort of sketching, insubstantive, throwaway type of thing.


This doesn't really seem 'generic' at all to me?


Click the "?" on the game to see the AI slop popup dialogs blocking other popup dialogs.

The good news is that this means you can quickly make the same app yourself at home, and improve it to suit your needs.


When I go to their homepage, I get a Cloudflare SSL handshake failed error -- feels like a classic vibecode bug.


There are many people who dislike both of those things. Please think before you write.


Dumb question about contracts: I was reading the docs (https://c3-lang.org/language-common/contracts/) and this jumped out:

"Contracts are optional pre- and post-condition checks that the compiler may use for static analysis, runtime checks and optimization. Note that conforming C3 compilers are not obliged to use pre- and post-conditions at all.

However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated.

In safe mode, pre- and post-conditions are checked using runtime asserts."

So I'm probably missing something, but it reads to me like you're adding checks to your code, except there's no guarantee that they will run at all, or whether that happens at compile time or at runtime. And sometimes, instead of catching a mistake, these checks will silently introduce undefined behaviour into your program. Isn't that kinda bad? How are you supposed to use this stuff reliably?

(otherwise C3 seems really cool!)


Contracts are a way to express invariants, "This shall always be true".

There are three main things you could do with these invariants. The exact details of how to do them - and whether people should be allowed to specify which of these things to do, and if so whether they can pick only for a whole program, per file, per function, or whatever - are a separate matter.

1. Ignore the invariants. You wrote them down, a human can read them, but the machine doesn't care. You might just as well use comments or annotate the documentation, and indeed some people do.

2. Check the invariants. If the invariant wasn't true then something went wrong and we might tell somebody about that.

3. Assume these invariants are always true. Therefore the optimiser may use them to emit machine code which is smaller or faster but only works if these invariants were correct.

So, for example, maybe a language only lets you say that the whole program is checked, or that the whole program can be assumed true. Or maybe the language lets you pick: function A's contract about pointer validity we're going to check at runtime, but function B's contract that you must pass an odd number we will use as an assumption - we did tell you about that odd-number requirement, so have the optimiser emit the slightly faster machine code which doesn't work for N=0; because zero isn't an odd number, the assumption means it's now fine to use that code.
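
To make that concrete, here's a rough plain-C sketch (illustrative only - not C3 syntax, and the CONTRACTS_* build flags are made up) of what those three options look like for the odd-number example:

  #include <assert.h>

  int halve_odd(int n) {
  #if defined(CONTRACTS_CHECK)
      assert(n % 2 != 0);              /* 2. check: abort loudly on violation */
  #elif defined(CONTRACTS_ASSUME)
      if (n % 2 == 0)                  /* 3. assume: GCC/Clang hint; the optimiser   */
          __builtin_unreachable();     /*    may drop any code path where n is even  */
  #endif
      /* 1. ignore: with neither flag defined, the contract is documentation only */
      return (n - 1) / 2;              /* caller promised n is odd */
  }

Same source, three quite different behaviours, depending entirely on how the build was configured.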


I guess the reason I found it surprising is that I would only use 3 (i.e. risk introducing UB) for invariants that I was very certain were true, whereas I would mostly use 2 for invariants that I had reason to believe might not always be true. It struck me as odd that you'd use the same tool for scenarios that feel like opposites, especially when you can just switch between these modes with a compiler flag.


It feels pretty clear to me that these Contracts should only be used for the “very certain” case. Writing code for a specific compiler flag seems very sketchy, so the programmer should assume the harshest interpretation.

The runtime check thing just sounds like a debugging feature.


In other words, in production mode it makes your code faster and less safe; in debug mode it makes your code slower and more safe.

That's a valid trade-off to make. But it's unexpected for a language that bills itself as "The Ergonomic, Safe and Familiar Evolution of C".

Those pre/post-conditions are written by humans (or an LLM). Occasionally they're going to be wrong, and occasionally they're not going to be caught in testing.

It's also unexpected for a feature that naive programmers would expect to make a program more safe.

To be clear, this sounds like a good feature; it's more about expectations management. A good example of that done well is Rust's unsafe keyword.


> That's a valid trade-off to make. But it's unexpected for a language that bills itself as "The Ergonomic, Safe and Familiar Evolution of C".

No, I think this is a very ergonomic feature. It fits nicely because it allows better compilers to use the constraints to optimize more confidently than equivalently-smart C compilers.


I'll give you "more ergonomic" if you'll give me "less safe".


I'd argue it's no less safe than the status quo, just easier to use. The standard "assert" can be switched off. There's "__builtin_unreachable". My personal utility library has "assume" which switches between the two based on NDEBUG.
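
It's roughly this shape (a sketch, not my actual library code; GCC/Clang intrinsic shown):

  #include <assert.h>

  #ifdef NDEBUG
  #  define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
  #else
  #  define ASSUME(cond) assert(cond)
  #endif

  /* e.g. promising the length is a multiple of 16 can let the compiler
     vectorise the loop without emitting a scalar remainder loop */
  void scale(float *v, int n, float k) {
      ASSUME(n % 16 == 0);
      for (int i = 0; i < n; i++)
          v[i] *= k;
  }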

C is a knife. Knives are sharp. If that's a problem then C is the wrong language.


But people are looking at C3, Odin & Zig because they've determined that C is the wrong language for them; many have determined that it's too sharp. C3 has "safe" in its title, so they're expecting fewer sharp edges.

I'm not asking for useful optimizations like constraints to go away, I'm asking for them to be properly communicated as being sharp. If you use "unsafe" incorrectly in your rust code, you invite UB. But because of the keyword they chose, it's hardly surprising.


Maybe also worth mentioning: some static analysis is done using these contracts as well, with more coming.


Is C3 using a different terminology than standard design by contract?

Design by contract (as implemented by Eiffel, Ada, etc.) divides the set of conditions into three: preconditions, postconditions, and invariants. Pre- and postconditions are not invariants but predicate checks on input and output parameters.

Invariants are conditions expressed on types, which must be checked on construction and modification. E.g. for a "time range" struct with start/end dates, the invariant should be that the start must precede the end.


So the compiler could have debug mode where it checks the invariants and release mode where it assumes they are true and optimizes around that without checking?


Yes, and that same pattern already does exist in C and C++. Asserts that are checked in debug builds but presumed true for optimization in release builds.


Not unless you write your own assert macro using C23 unreachable(), GNU C __builtin_unreachable(), MSVC __assume(0), or the like. The standard one is defined[1] to either explicitly check or completely ignore its argument.

[1] https://port70.net/~nsz/c/c11/n1570.html#7.2


Yeah, I meant it's common for projects to make their own 'assume' macros.

In Rust you can wrap core::hint::assert_unchecked similarly.


You've described three different features with three different sets of semantics. Which set of semantics is honored? Unknown!

This is not software engineering. This is an appeal to faith. Software engineering requires precise semantics, not whatever the compiler feels like doing. You can't even declare that this feature has no semantics, because it actually introduces a vector for UB. This is the sort of "feature" that should not be in any language selling itself as an improved C. It would be far better to reduce the scope to the point where the feature can have precise semantics.


> Which set of semantics is honored?

Typically it's configurable. For example, C++26 seems to intend that you'll pick a compiler flag to say whether you want its do-nothing semantics, its "tell me about the problem and press on" semantics, or to just exit immediately and report that. They're not intending (in the standard at least) to have the assume semantic because that is, as you'd expect, controversial. Likewise, more fine-grained configuration they're hoping will be taken up as a vendor extension.

My understanding is that C3 will likely offer the coarse configuration as part of their ordinary fast versus safe settings. Do I think that's a good idea? No, but that's definitely not "Unknown".


Any idea how the situation is handled where third party code was written to expect a certain semantic? Is this just one more rough edge to watch out for when integrating something?


Not enforced for any given implementation is hardly "unknown"; presumably the tin comes with a label saying what's inside.


- "Note that conforming C3 compilers are not obliged to use pre- and post-conditions at all." means a compiler doesn't have to use the conditions to select how the code will be compiled, or if there's a compile-time error.

- "However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated." basically, it just states the obvious. the compler assumes a true condition is what the code is meant to address. it won't guess how to compile the code when the condition is false.

- "In safe mode, pre- and post-conditions are checked using runtime asserts." it means that there's a 'mode' to activate the conditions during run-time analysis, which implies there's a mode to turn it off. this allows the conditions to stay in the source code without affecting runtime performance when compiled for production/release.


It’s giving you an expression capability so that you can state your intent, in a standardized way, that other tooling can build off. But it’s recognizing that the degree of enforcement depends on applied context. A big company team might want to enforce them rigidly, but a widely used tool like Visual Studio would not want to prevent code from running, so that folks who are introducing themselves to the paradigm can start to see how it would work, through warnings, while still being able to run code.


This is not just expressing intent. The documentation clearly states that it's UB to violate them, so you need to be extra careful when using them.


Perhaps another helpful paradigm is traffic/construction cones with ‘do not cross’ messages. Sometimes nothing happens, other times you run into wet concrete, other times you get a ticket. They’re just plastic objects, easy to move, but you are not meant to cross them in your vehicle. While concrete bollards are a thing, they are only preferable in some situations.


I don't think this analogy fully respects the situation here. These pre/post conditions are not just adding a warning not to do something; they also add a potentially bigger danger if they are broken. It's as if you also added a trap behind the construction cone which can do more damage than stepping on the wet concrete!


> documentation clearly states that it's UB to violate them

Only in "fast" mode. The developer has the choice:

> Compilation has two modes: “safe” and “fast”. Safe mode will insert checks for out-of-bounds access, null-pointer deref, shifting by negative numbers, division by zero, violation of contracts and asserts.


> The developer has the choice

The developer has the choice between fast or safe. They don't have a choice for checking pre/post conditions, or at least avoiding UB when they are broken, while getting the other benefits of the "fast" mode.

And all in all the biggest issue is that these can be misinterpreted as a safety feature, while they actually add more possibilities for UB!


Well, the C3 developer could add more fine-grained control if people need it...

I don't really see what your problem is. It's not so much different than disabling asserts in production. Some people don't do that, because they'd rather crash than walk into an invalid program state - and that's fine too. It largely depends on the project in question.


> It's not so much different than disabling asserts in production.

Disabling asserts would be equivalent to not having them at all, while this feature introduces _new_ UB. In "fast" mode it's equivalent to using Clang's `__builtin_assume` or Rust's `std::hint::assert_unchecked`, except it's marketed with a name that makes it appear to be a safety/correctness feature.


It seems to me like a way to standardize what happens all the time anyway. Compilers are always looking for ways to optimize, and that generally means making assumptions. Specifying those assumptions in the code, instead of in flags to the compiler, seems like a win.


I think they are there to help the compiler so the optimizer might (but doesn't have to) assume they are true. It's sometimes very useful to be able to do so. For example if you know that two numbers are always different or that some value is always less than x. In standard C it's impossible to do but major compilers have a way to express it as extensions. GCC for example has:

  // GCC/Clang hint: tells the optimizer this branch can never be taken,
  // i.e. it may assume x is always false here
  if (x)
    __builtin_unreachable();

C3 makes it a language construct. If you want runtime checks for safety you can use assert. The compiler turns those into asserts in safe/debug mode because that helps catch bugs in non-performance-critical builds.


In the current C standard that's unreachable() from <stddef.h>


Thank you, I've just recently read the list of new features and missed this one!


The way I reason about it is that the contracts are more soft conditions that you expect to not really reach. If something always has to be true, even in non-safe mode, you use "actual" code inside the function/macro to check that condition and fail in the desired way.


> The way I reason about it is that the contracts are more soft conditions that you expect to not really reach

What's the difference from an assert then?


The difference from an assert is that for "require" they are compiled into the caller frame, so things like stack traces (which are available in safe mode) will point exactly to where the violation happened.

Because they are inlined at the call site, static analysis will already pick up some obvious violations.

Finally, these contracts may be used to check otherwise untyped arguments to macros at compile time.


“However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated”

This implies that a compiler would be permitted to remove precisely that actual code that checks the condition in non-safe mode.

Seems like a deliberately introduced footgun.


My understanding of this was that the UB starts only after the value is passed/returned. So if foo() has a contract to only return positive integers, the code within foo can check and ensure this, but if the calling code does it, the compiler might optimize it away.
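
In plain C terms it's something like this (a sketch - C3's actual mechanics may differ, and __builtin_unreachable() here just stands in for the optimiser trusting the contract):

  int foo(void);   /* hypothetical function with a "returns > 0" contract */

  int caller(void) {
      int r = foo();
      if (r <= 0)                   /* the contract, assumed true...           */
          __builtin_unreachable();
      if (r <= 0)                   /* ...so this defensive check is dead code */
          return -1;                /* and may be optimised away entirely      */
      return r;
  }

Inside foo() itself the same check still works normally; it's on the caller's side of the contract boundary that it can silently disappear.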



Assuming that is correct, it's still exactly the same footgun. Checks like that are introduced to guard against bugs: you are strictly safer to not declare such a constraint.


Design by contract is good. I've used it in some projects.

https://en.wikipedia.org/wiki/Design_by_contract

I first came across it when I was reading Bertrand Meyer's book, Object-oriented Software Construction.

https://en.wikipedia.org/wiki/Object-Oriented_Software_Const...

From the start of the article:

[ Object-Oriented Software Construction, also called OOSC, is a book by Bertrand Meyer, widely considered a foundational text of object-oriented programming.[citation needed] The first edition was published in 1988; the second edition, extensively revised and expanded (more than 1300 pages), in 1997. Many translations are available including Dutch (first edition only), French (1+2), German (1), Italian (1), Japanese (1+2), Persian (1), Polish (2), Romanian (1), Russian (2), Serbian (2), and Spanish (2).[1] The book has been cited thousands of times. As of 15 December 2011, The Association for Computing Machinery's (ACM) Guide to Computing Literature counts 2,233 citations,[2] for the second edition alone in computer science journals and technical books; Google Scholar lists 7,305 citations. As of September 2006, the book is number 35 in the list of all-time most cited works (books, articles, etc.) in computer science literature, with 1,260 citations.[3] The book won a Jolt award in 1994.[4] The second edition is available online free.[5] ]

https://en.wikipedia.org/wiki/Bertrand_Meyer


I get that this example is simplified, but doesn’t the maths here change drastically when the 5% changes by even a few percentage points? The error bars on OpenAI’s chance of success are obviously huge, so why would this be attractive to accountants?


That's why you have armies of accountants rating stuff like this all day long. I'm sure they could show you a highly detailed risk analysis. You also don't count on any specific deal working, you count on the overall statistics being in your favour. That's literally how venture capital works.


(I think) I get how venture capital works; my point is that the bullish story for OpenAI has them literally restructuring the global economy. It seems strange to me that people are making bets with relatively slim profit margins (an average of 500m on a 10b investment in your example) on such volatile and unpredictable events.


I think you’re right that the critical assumption in that example is the 5% rather than the tax treatment.


What if your 10B investment encourages others to invest 50B and much of that makes it back to you indirectly via selling more of your core business?

I may be way off, but to me it seems like the AI bubble is largely a way to siphon money from institutional investors to the tech industry (and try to get away with it by proxying the investments) based on a volatile and unpredictable promise?


AI has a lot lower bar to clear to upend the tech industry compared to the global economy. Not being in on AI is an existential risk for these companies.


False.

The existential risk is in companies smoking the AI crackpipe that sama (begging your pardon) handed them, thinking it feels great and then projecting[1] that every investment will hit like the first, and continuing to buy the <EXPLETIVE> crack that they can't afford, and their investors can't afford, and their clients can't afford, their vendors can't afford, the grid can't afford, the planet can't afford, the American people can't afford, and sama[2] can't afford, _because it's <EXPLETIVE> crack_!

The wise will shut up and take the win on the slop com bubble.

[1]: https://en.wikipedia.org/wiki/Chasing_the_dragon

[2]: For those following along at home, sama is Sam Altman, he was a part of the Y Combinator community a while back: https://news.ycombinator.com/threads?id=sama


This reminds me of the scene in Margin Call [1] when the analyst discovers that their assumptions for the risk of highly leveraged positions are inaccurate.

[1] https://www.youtube.com/watch?v=QAWtcYOVbWw


I'm pretty sure the armies of accountants would have rated it higher if the cashflow was positive rather than negative. Negative can't be good, even while accounting for taxes.


Confused a bit by the article: it mentions human trials began in September 2024, but also that the trials that might prove it works are yet to start?


I think it's just poorly written. If you go to the source[1], the trial period was planned from September 2024 to August 2025, and the submission says people are "undergoing" a trial. Perhaps it got delayed, or, more likely IMHO, the trial period is over and they're studying the data, so they haven't reached a conclusion yet.

[1]: https://www.kitano-hp.or.jp/info/20240503


It’s a phase 1 clinical trial designed only to assess safety and determine the appropriate dosage. Future trials will focus on efficacy.


It’s interesting that half the comments here are talking about the extinction line when, now that we’re nearly entering 2026, I feel the 2027 predictions have been shown to be pretty wrong so far.


> I feel the 2027 predictions have been shown to be pretty wrong so far

Does your clairvoyance go any further than 2027?


I don't know that it's "clairvoyance". We're two weeks from 2026. We might expect to see somewhat more than we do now if this was going to turn into AGI by 2027.

If you assume that we're only one breakthrough away (or zero breakthroughs - just need to train harder), then the step could happen any time. If we're more than one away, though, then where are they? Are they all going to happen in the next two years?

But everybody's guessing. We don't know right now whether AGI is possible at current hardware levels. If it is N breakthroughs away, we all have our own guesses of approximately what N is.

My guess is that we are more than one breakthrough away. Therefore, one can look at the current state of affairs and say that we are unlikely to get to AGI by 2027.


> Does your clairvoyance go any further than 2027?

why are you so sensitive?


Sorta disconcerting (to me), the stuff that’s getting to the front page of HN lately.


I found this piece somewhat refreshing.

It presents a thought I had not considered before. Whether the hypothesis that you are dating an ecosystem has, as some other commenters suggest, always been true is a different question.


This article is pieced together to tug at emotional heartstrings.

Of course people are complex systems. When have you ever felt the thoughts:

"I am the same person I was last year, therefore people should treat me as such and not consider my growth, changes, or nuance." "My partner is the exact same person they where when I married them, therefore I do not need to pay attention to their growth, changes, or nuance."

You realized these things before you read the piece, but like me, found solace in seeing this "author" rationalize it as not our fault, but instead the fault of the new society/the other.

Which... is certainly not wise for the sake of self-growth.

