Saturday, August 2, 2008

Disappointment

Overcomingbias is one of the blogs I keep up with, and I've been following Eliezer's series that was meant to culminate in a definition of morality. After slogging through all those posts, I feel a bit let down.
I had been moderately pleased with his explanation for Free Will, given that it was fairly close to what I've come up with over the years. Since I don't have any good grounding for morality, I thought maybe he had hit upon something worthwhile. Sadly not.

Approaching the moral "should" by generalizing from other uses of the word is a fairly obvious tactic:

"What should I do to become a doctor?"
"Finish a good premed program, then study hard in med school."

"Should I have the chicken or the beef?"
"Whichever you have a taste for"

"What should I wear today?"
"Well that depends on what you are doing, how you want to present yourself, etc, etc ,etc."

Yes, obviously the non-moral use of the word "should" is a question of what actions will best (or better) achieve my goal, whatever it is. But what happens when you generalize to morality? What is the goal? At this point Eliezer just declares an end to the journey, making reference to that set of goals (moralities) he would have in some ideal end state.

Argh! The whole point of the question is what those morals are! He's gone and substituted "What do I output when asked for 2+3?" (anything you want) for "What is 2+3?" (five).

For a guy who seems very quick to say "I'm right and you are a fool," that's a terrible answer, since all it leaves us with is his (or my, or your) fuzzy intuition of what is right and wrong. If I thought he had much of a chance of creating an AI capable of rapidly taking over the world, I'd find it a remarkably terrifying answer.

Not that I can give a better answer. I've never been given to pronouncing from on high. What we need is a method for judging morals (goals) and moving towards those that are somehow better. That insight and several bucks will get you a good coffee. My best guess so far has been to construct consistent goal sets (generally from minimal axioms), tease out their consequences, and then judge them by base intuition. Which is still fairly worthless, but it at least attempts to get us away from comparing moralities at a single point and discarding one without looking at the other's flaws elsewhere.

It's an important question, because we are capable of choosing our own goals, and that leaves some of us grasping for a purpose, since we can't find any value system, or any system for evaluating them, that points us in a useful direction.