
# How do I classify this value using a decision tree?

Basically my decision tree can't classify a value using the normal algorithm.

I get to a node, and there are two options (say, sunny and windy), but at this node my value is different (for example, rainy).

Are there any methods to deal with this, e.g. change the tree or just estimate based on other data?

I was thinking of assigning the most common value at that node but this is just a guess.
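The "most common value at that node" fallback you describe can be implemented directly, provided each internal node remembers the majority class of the training instances that reached it. The node structure and field names below are hypothetical, just a minimal sketch of the idea:

```python
# Sketch of majority-class fallback for unseen attribute values.
# The tree is a nested dict; "majority_class" is the most common label
# among training instances that reached that node. All names are
# illustrative, not from any particular library.

def classify(node, sample):
    # Leaf nodes carry a final label.
    if node["children"] is None:
        return node["label"]
    value = sample[node["attribute"]]
    child = node["children"].get(value)
    if child is None:
        # Unseen value (e.g. "rainy"): fall back to the most common
        # class among the training instances that reached this node.
        return node["majority_class"]
    return classify(child, sample)

tree = {
    "attribute": "weather",
    "majority_class": "play",
    "children": {
        "sunny": {"children": None, "label": "play"},
        "windy": {"children": None, "label": "stay home"},
    },
}

print(classify(tree, {"weather": "rainy"}))  # -> "play" (fallback)
print(classify(tree, {"weather": "windy"}))  # -> "stay home"
```

It is a guess, as you say, but it is the same guess the tree would make if it had no usable split at all, so it is a common and defensible default.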

Have you considered fuzzy logic for the rich/poor continuum? As for things that can't be expressed as a continuum, I can't think of a way it can be done. Rainy weather, for example, is so fundamentally different from sunny and windy weather in how we experience and react to it that I'm not sure how you expect a computer (or whatever it is you're writing your decision tree for) to figure out what to do. (Aside from simply having an "I don't know what to do" output state, but I'm assuming you wanted something more meaningful than that.)
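To make the fuzzy-logic suggestion concrete, here is a toy sketch: instead of a hard rich/poor split, each income gets a degree of membership in both sets. The thresholds and the linear ramp are made-up illustrations, not part of any standard definition:

```python
# Toy fuzzy membership for a rich/poor continuum.
# The thresholds (20k and 100k) and the linear ramp between them
# are purely illustrative assumptions.

def membership(income, low=20_000, high=100_000):
    """Return (poor_degree, rich_degree), each in [0, 1]."""
    if income <= low:
        return 1.0, 0.0
    if income >= high:
        return 0.0, 1.0
    rich = (income - low) / (high - low)
    return 1.0 - rich, rich

print(membership(60_000))  # -> (0.5, 0.5): halfway between poor and rich
```

A downstream rule can then weight both branches by these degrees instead of committing to a single one. This works for ordered quantities like income, but not for nominal values like "rainy", which is exactly the limitation the answer points out.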

The whole point of decision trees is that the options are complete and (hopefully) mutually exclusive.

If they are not, you'll get into trouble. Redefine poor and rich so that they cover everything (all incomes, all states of mind, ...).

But honestly, interpret such weather examples as what they are: just examples for a concept, not the holy grail of meteorology.

The issue here is that you've learned a decision tree from different data than you are using for classification. More specifically, your decision tree knows only two values (i.e., sunny and windy) for the attribute Weather, but your classification data also allows the value rainy. Since your decision tree has no observation where the weather was rainy, this value is useless to it. In other words, you have to eliminate this value from your classification.

The only solution is to clean the data before using the decision tree as a classifier. You have the following options:

1. Remove all observations/instances with Weather="rainy" from your data set, because you can't classify them. The disadvantage is that those instances remain unclassified.
2. For all observations/instances with Weather="rainy", remove the value, i.e., set it to unknown/null. If your decision tree can handle null values, it can then classify your whole data set. If not, you still have a problem, in which case you should go for option 3.
3. Relearn your decision tree with Weather={sunny, windy, rainy}.
4. Replace "rainy" with either "sunny" or "windy"; there are different heuristics for that. In your case, however, this is not an option.
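The first two cleaning options can be sketched in a few lines. The data set, the attribute names, and the set of known values below are illustrative assumptions:

```python
# Sketch of cleaning options 1 and 2 before classification.
# The data and the set of trained values are made-up examples.

data = [
    {"weather": "sunny", "temp": 30},
    {"weather": "rainy", "temp": 18},
    {"weather": "windy", "temp": 22},
]
known_values = {"sunny", "windy"}  # values the tree was trained on

# Option 1: drop the instances the tree cannot classify.
option1 = [row for row in data if row["weather"] in known_values]

# Option 2: keep the instances but null out the unseen value,
# so a tree that handles missing values can still classify them.
option2 = [
    {**row, "weather": row["weather"] if row["weather"] in known_values else None}
    for row in data
]

print(len(option1))              # -> 2 (rainy instance dropped)
print(option2[1]["weather"])     # -> None (rainy instance nulled)
```

Option 3 (relearning with the full value set) needs no cleaning at all, which is why it is usually the best choice when retraining is feasible.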

You are talking about the "normal algorithm", which is a rather vague statement. I assume you are using a strictly binary rooted decision tree, where each internal node makes a binary split of the data. The condition evaluated at each internal node thus outputs a Boolean, which sends the data to the left child (true) or the right child (false). In your case, the categorical variable `weather` has two possible values in the training data, which allows only two possible tests: `weather==sunny` or `weather==windy`. Hence, the `rainy` samples will always end up in the right node, as they are neither `sunny` nor `windy`.
In such a tree, the `rainy` samples will be classified as "not sunny, not windy".
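Written out as code, the two binary tests look like nested conditionals; a `rainy` sample fails both and lands on the rightmost branch by elimination. The leaf labels are illustrative:

```python
# Sketch of a strictly binary tree over a categorical `weather` variable.
# Each test sends true to the left branch, false to the right branch.
# Leaf labels are made-up placeholders.

def classify(sample):
    if sample["weather"] == "sunny":
        return "leaf: sunny"          # left branch of the root test
    # right branch: not sunny
    if sample["weather"] == "windy":
        return "leaf: windy"          # left branch of the second test
    # right branch again: not sunny and not windy
    return "leaf: not sunny, not windy"

print(classify({"weather": "rainy"}))  # -> "leaf: not sunny, not windy"
```

So the tree does produce an answer for `rainy`, just an implicit one: "everything else". Whether that default is meaningful depends on whether the training data's "everything else" resembles rainy weather at all.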