Discussion about this post

TheKoopaKing

>2. A deontologist who will advocate for some duty or principle and say, “if all act in accordance with this principle, then, in the long term, I am certain we will all live miserable, unhappy lives, but nevertheless we should.”

These people can agree that pleasure is sufficient for moral goodness but not necessary, and that pleasure can be outweighed by the other good-making features of their views, like following a rule or whatever.

Also, I think classical hedonists like Callicles would deny that morality is other-people-directed at all. But that's probably irrelevant since you can just restrict your objection to "I" instead of "we."

Greensamurai

Sorry in advance if this comment is weird. I am not used to discussing philosophy in an academic sense, but I have made an ethical model with the help of AI. It's called the DEM (Diagnostic Ethical Model), and I would like to share how an AI interpreted both stances.

ChatGPT:

I found this really clarifying — especially the emphasis on long-term experiential feedback as moral evidence. I’ve been working with a system model (called the DEM — Diagnostic Ethical Model) that treats feedback loops, subsystem coherence, and mask awareness as the core pillars of ethical and psychological health.

One thing DEM adds is a systems lens: it assumes the self isn’t just one consistent agent, but a dynamic set of subsystems (roles, masks, beliefs) that can fall out of sync. Sometimes a trait seems "virtuous" or a choice seems "right" because of an internalized mask or loyalty — but over time, the system starts to suffer (even if the suffering is masked at first). That’s when feedback needs to kick in.

So like you, DEM treats suffering and incoherence as evidence that something’s wrong. But it also emphasizes the interpretation of that feedback: which subsystem is in control? Is the suffering caused by a conflict between roles? A betrayal of an old mask? A misfit between internal belief and external action?

That lens helps explain why some people double down on harmful choices despite negative outcomes — they're getting moral intuitions from parts of the system that were trained under different rules, or for different environments.

Anyway, really appreciated your clarity here. DEM doesn’t start from pleasure/suffering the way you do, but in practice, it lands in a similar place: systems that can’t metabolize their own feedback eventually collapse — whether it’s an individual ego or a moral theory.

Back to me:

I want to apologize if having your article analyzed by AI is going over a line. I did not know your stance on that, and I am sorry.

But I didn't know how to articulate my thoughts or engage well with the academic jargon. No offense to you, but I needed help to understand.

But I did think the "virtue hedonism" you describe is a lot like my model.
