Radiogenomic studies have suggested that the biological heterogeneity of tumors is reflected radiographically in visible features on magnetic resonance (MR) images. We apply deep learning techniques to map between tumor gene expression profiles and tumor morphology in pre-operative MR studies of glioblastoma patients. A deep autoencoder was trained on 528 patients, each with 12,042 gene expression values. The autoencoder’s weights were then used to initialize a supervised deep neural network, which was trained on a subset of 109 patients with both gene expression and MR data. For each patient, 20 morphological image features were extracted from the contrast-enhancing and peritumoral edema regions. We found that the autoencoder-pretrained neural network predicted tumor morphology with significantly lower error (p < 0.001) than regularized linear regression, by an average margin of 16.98% in mean absolute percent error and 0.0114 in mean absolute error. These results indicate that neural networks, which can capture nonlinear, hierarchical relationships among gene expressions, have the representational power to find potentially more predictive radiogenomic associations than pairwise or linear methods.
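The two-stage scheme described above (unsupervised autoencoder pre-training on the larger gene-expression cohort, followed by supervised fine-tuning on the smaller paired cohort) can be sketched in miniature. This is a hedged illustration only: the dimensions, learning rate, and single-hidden-layer architecture below are placeholders, far smaller than the paper's 12,042 genes, 20 image features, and 528/109 patient cohorts, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in sizes (not the paper's actual dimensions).
n_unlabeled, n_labeled = 200, 40   # cf. 528 expression-only / 109 paired patients
n_genes, n_feats, n_hidden = 50, 5, 16

X_unlabeled = rng.normal(size=(n_unlabeled, n_genes))
X_labeled = rng.normal(size=(n_labeled, n_genes))
# Synthetic "morphology features" with a noisy linear dependence on expression.
true_W = rng.normal(size=(n_genes, n_feats))
Y = X_labeled @ true_W + 0.1 * rng.normal(size=(n_labeled, n_feats))

def relu(z):
    return np.maximum(z, 0.0)

lr, steps = 1e-2, 200

# --- Stage 1: unsupervised autoencoder pre-training on expression data ---
W_enc = rng.normal(scale=0.1, size=(n_genes, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_genes))
recon_loss_start = np.mean((relu(X_unlabeled @ W_enc) @ W_dec - X_unlabeled) ** 2)
for _ in range(steps):
    H = relu(X_unlabeled @ W_enc)          # encode
    X_hat = H @ W_dec                      # decode (reconstruction)
    err = X_hat - X_unlabeled
    # Gradient descent on mean squared reconstruction error
    gW_dec = H.T @ err / n_unlabeled
    gH = (err @ W_dec.T) * (H > 0)
    gW_enc = X_unlabeled.T @ gH / n_unlabeled
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc
recon_loss_end = np.mean((relu(X_unlabeled @ W_enc) @ W_dec - X_unlabeled) ** 2)

# --- Stage 2: supervised fine-tuning on the paired (gene + MR) subset ---
# The pre-trained encoder weights initialize the first layer; a fresh
# output head maps hidden codes to the morphology features.
W1 = W_enc.copy()
W_out = rng.normal(scale=0.1, size=(n_hidden, n_feats))
mae_start = np.mean(np.abs(relu(X_labeled @ W1) @ W_out - Y))
for _ in range(steps):
    H = relu(X_labeled @ W1)
    Y_hat = H @ W_out
    err = Y_hat - Y
    gW_out = H.T @ err / n_labeled
    gH = (err @ W_out.T) * (H > 0)
    gW1 = X_labeled.T @ gH / n_labeled
    W_out -= lr * gW_out
    W1 -= lr * gW1
mae_end = np.mean(np.abs(relu(X_labeled @ W1) @ W_out - Y))
```

Both stages reduce their respective losses on this toy problem; the point of the sketch is only the weight transfer from the unsupervised encoder into the supervised network, not the specific optimizer or architecture used in the study.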