How does the neural network visualization work?
When writing a paper or making a presentation about neural networks, one usually visualizes the network's architecture. What are good / simple ways to visualize common architectures automatically?
Tensorflow, Keras, MXNet, PyTorch
If the neural network is given as a TensorFlow graph, then you can visualize this graph with TensorBoard.
Here is what the MNIST CNN looks like:
You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself.
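As a minimal sketch of how such names end up in the graph, here is a small MNIST-style CNN built with `tf.keras` (assuming TensorFlow 2.x; the layer names and the `logs` directory are just illustrative choices):

```python
import tensorflow as tf

# A small MNIST-style CNN; the layer names ("conv1", "fc1", "softmax", ...)
# become the node labels shown in the TensorBoard graph.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", name="conv1"),
    tf.keras.layers.Conv2D(64, 3, activation="relu", name="conv2"),
    tf.keras.layers.Flatten(name="flatten"),
    tf.keras.layers.Dense(128, activation="relu", name="fc1"),
    tf.keras.layers.Dropout(0.5, name="dropout"),
    tf.keras.layers.Dense(10, activation="softmax", name="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# write_graph=True makes the callback dump the model graph to ./logs
# when training starts; inspect it with:  tensorboard --logdir logs
callback = tf.keras.callbacks.TensorBoard(log_dir="logs", write_graph=True)
```

Passing `callback` to `model.fit(..., callbacks=[callback])` writes the graph, which you can then open in TensorBoard's "Graphs" tab.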
Interpretation

The following is only about the left graph; I ignore the four small graphs on the right half. Each box is a layer with parameters that can be learned. For inference, information flows from bottom to top. Ellipses are layers which do not contain learned parameters. The colour of the boxes has no meaning.
I'm not sure about the meaning of the small dashed boxes ("gradients", "Adam", "save").