The vision transformer (ViT) is an emerging neural-network architecture for image processing that has outperformed traditional convolutional neural networks in classification accuracy. In this paper, we present an approximate computation reuse method that reduces the computation cost of ViT inference. Specifically, we profile the frequent computation operands of a ViT and pre-store these computation patterns in an associative memory. During inference, the associative memory returns the pre-stored computation results instead of invoking energy-intensive functional units. Experimental results show that the proposed method reduces the energy consumption of ViT inference.
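
To illustrate the reuse mechanism, below is a minimal Python sketch, assuming a plain dictionary as a software stand-in for the associative memory and operand quantization to match similar inputs. The names `ReuseTable` and `QUANT_STEP` are hypothetical, introduced here for illustration and not taken from the paper.

```python
import random

QUANT_STEP = 0.25  # assumed step for quantizing operands into lookup keys


class ReuseTable:
    """Emulates an associative memory of frequently seen operand pairs."""

    def __init__(self):
        self.table = {}
        self.hits = 0
        self.misses = 0

    def _key(self, a, b):
        # Quantize operands so nearby values share one entry, trading a
        # small approximation error for more reuse opportunities.
        return (round(a / QUANT_STEP), round(b / QUANT_STEP))

    def multiply(self, a, b):
        key = self._key(a, b)
        if key in self.table:
            self.hits += 1       # hit: return stored result, skip the functional unit
        else:
            self.misses += 1     # miss: compute exactly, then store for reuse
            self.table[key] = a * b
        return self.table[key]


# Skewed operand distribution, standing in for the frequent computation
# patterns that would be identified by offline profiling.
random.seed(0)
reuse = ReuseTable()
for _ in range(1000):
    reuse.multiply(random.choice([0.5, 1.0, 1.5]), random.choice([0.5, 1.0]))
print(f"hit rate: {reuse.hits / 1000:.1%}")
```

In hardware, the dictionary lookup would be performed by the associative memory, so a hit avoids activating an arithmetic unit entirely; the quantization step controls the trade-off between reuse rate and approximation error.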