When to choose one-hot encoding vs. word embeddings?
One-hot encoding is a method that vectorizes categorical features: each value is represented as a binary vector with a 1 in the position of its category and 0 everywhere else. It is simple and fast, but because every distinct category adds a new dimension, it runs into the curse of dimensionality when the number of categories is large.
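As a minimal sketch in plain Python (the vocabulary `colors` and the helper `one_hot` are hypothetical names chosen for the example), note that the vector length equals the number of categories, which is exactly what drives the dimensionality problem:

```python
# Minimal one-hot encoding sketch for a small categorical vocabulary.
colors = ["red", "green", "blue"]
index = {c: i for i, c in enumerate(colors)}  # one dimension per category

def one_hot(value):
    vec = [0] * len(colors)   # vector length grows with the number of categories
    vec[index[value]] = 1     # single 1 at the category's position
    return vec

print(one_hot("green"))  # [0, 1, 0]
```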
Word embeddings can handle large amounts of text data and represent each word as a dense vector with a fixed number of dimensions chosen in advance, independent of the vocabulary size.
One-hot encoding says nothing about the semantics of the items, whereas embeddings place co-occurring items close together in the representation space. So for text data where semantic similarity matters, word embeddings are generally the better choice and give better results than one-hot encoding.
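As a rough sketch of the embedding idea (assuming PyTorch is available; the vocabulary size, embedding dimension, and token ids below are arbitrary values for illustration), an embedding layer maps each word id to a learnable dense vector whose length stays fixed no matter how large the vocabulary grows:

```python
import torch
import torch.nn as nn

vocab_size, embedding_dim = 10_000, 64  # arbitrary sizes for the sketch

# Each word id maps to a learnable dense vector of length embedding_dim.
embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dim)

word_ids = torch.tensor([3, 42, 997])   # hypothetical token ids
vectors = embedding(word_ids)           # lookup returns dense vectors
print(vectors.shape)                    # torch.Size([3, 64])
```

During training, these vectors are updated so that words appearing in similar contexts end up close together, which is the semantic grouping one-hot encoding cannot provide.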