Categorical_crossentropy vs sparse_categorical_crossentropy - Which is better?
Which is better for accuracy, or are they the same? I know that with categorical_crossentropy you use one-hot encoded labels, and with sparse_categorical_crossentropy you encode the labels as plain integers. When is one better than the other?
Use sparse_categorical_crossentropy when your classes are mutually exclusive (i.e. each sample belongs to exactly one class) and your labels are integers. Use categorical_crossentropy when one sample can have multiple classes, or when the labels are soft probabilities (like [0.5, 0.3, 0.2]).
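To make the label formats concrete, here is a minimal NumPy sketch (not the actual Keras implementation) of the same 3-class targets written both ways, with the loss computed by each method. The predictions `p` are made-up softmax outputs for illustration:

```python
import numpy as np

# Integer labels, as used with sparse_categorical_crossentropy:
y_sparse = np.array([0, 2, 1])

# The same targets one-hot encoded, as used with categorical_crossentropy:
y_onehot = np.array([[1., 0., 0.],
                     [0., 0., 1.],
                     [0., 1., 0.]])

# Soft probability labels only work with the non-sparse loss:
y_soft = np.array([[0.5, 0.3, 0.2],
                   [0.1, 0.1, 0.8],
                   [0.2, 0.6, 0.2]])

# Hypothetical model predictions (already softmaxed):
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6],
              [0.2, 0.5, 0.3]])

def categorical_ce(y_true, y_pred):
    # Mean over samples of -sum_c y_true[c] * log(y_pred[c])
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

def sparse_categorical_ce(y_true, y_pred):
    # Pick out -log p(true class) directly from the integer labels
    return -np.mean(np.log(y_pred[np.arange(len(y_true)), y_true]))

# For mutually exclusive classes, both give the same loss value:
print(np.isclose(categorical_ce(y_onehot, p),
                 sparse_categorical_ce(y_sparse, p)))
```

Note that `categorical_ce(y_soft, p)` is also well-defined, while the sparse variant has no way to express soft targets at all.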
The formula for categorical cross entropy (N - number of samples, S - set of samples, C - set of classes, s∈c - sample s belongs to class c) is:

$$-\frac{1}{N}\sum_{s \in S}\sum_{c \in C} \mathbb{1}_{s \in c}\, \log p(s \in c)$$
When the classes are mutually exclusive, you don't need to sum over all of them: for each sample the only non-zero term is $-\log p(s \in c)$ for the true class $c$. This saves time and memory. Consider the case of 10,000 mutually exclusive classes: just one log term per sample instead of summing 10,000 of them, and just one integer label instead of 10,000 floats. The formula is the same in both cases, so there should be no impact on accuracy.
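The memory side of that claim is easy to check. Below is a small sketch (the 1,000-sample count is an arbitrary choice for illustration) comparing the storage cost of integer labels against one-hot labels for 10,000 classes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 1_000, 10_000  # mutually exclusive classes

# Sparse encoding: one integer per sample.
y_int = rng.integers(0, n_classes, size=n_samples)      # shape (1000,)

# One-hot encoding: one float per class per sample.
y_onehot = np.zeros((n_samples, n_classes), dtype=np.float32)
y_onehot[np.arange(n_samples), y_int] = 1.0             # shape (1000, 10000)

print(y_int.nbytes)     # a few KB of integers
print(y_onehot.nbytes)  # 40,000,000 bytes of mostly zeros
```

The one-hot array costs n_classes times more memory than the integer labels, yet for exclusive classes it carries exactly the same information.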