In-memory computing architectures have emerged as a promising solution to the memory-wall bottleneck, enabling efficient vectorized and parallel arithmetic operations. This paper proposes a hardware architecture and algorithms for ternary-valued associative processors (TAPs), a critical step toward broadening their computational versatility. To facilitate the in-memory implementation of complex arithmetic and logic functions, we introduce an efficient methodology for the automatic generation of the corresponding look-up tables (LUTs). A SPICE-MATLAB co-simulator is developed to evaluate the proposed TAP-based in-memory ternary adder in terms of energy consumption, computational delay, and area overhead. Comparative analysis demonstrates that the ternary AP adder achieves energy and area reductions of 11\% and 6.2\%, respectively, over its binary counterpart. Furthermore, compared with a state-of-the-art ternary carry-lookahead adder, the proposed solution exhibits approximately 100x and 9.5x reductions in energy and delay, respectively. This work highlights the potential of in-memory and parallel computing processors for high-efficiency arithmetic operations, offering scalable and energy-efficient solutions for emerging computational workloads.
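To make the LUT-based approach concrete, the following is a minimal illustrative sketch (not the paper's actual generator) of how a look-up table for a digit-wise ternary adder could be produced automatically. It assumes an unbalanced base-3 digit encoding (digits 0, 1, 2) and a little-endian digit order; the function names are hypothetical.

```python
from itertools import product

def build_ternary_adder_lut():
    # Enumerate all (a, b, carry_in) digit combinations in base 3 and
    # record the resulting (sum_digit, carry_out) pair: 3^3 = 27 entries.
    lut = {}
    for a, b, cin in product(range(3), repeat=3):
        total = a + b + cin
        lut[(a, b, cin)] = (total % 3, total // 3)
    return lut

def ternary_add(x, y, lut):
    # x, y: lists of base-3 digits, least-significant digit first.
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    out, carry = [], 0
    for a, b in zip(x, y):
        s, carry = lut[(a, b, carry)]
        out.append(s)
    if carry:
        out.append(carry)
    return out
```

In an associative processor, each LUT entry would correspond to a compare-and-write pass applied in parallel across all words; the dictionary here simply stands in for that content-addressable match table.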