In this paper, we propose a distributed convex optimization algorithm termed distributed coordinate dual averaging (DCDA). DCDA addresses the scenario of a large-scale distributed optimization problem with limited communication among the nodes of the network. Currently known distributed subgradient methods, such as distributed dual averaging or the distributed alternating direction method of multipliers, assume that nodes can exchange messages of large cardinality; this assumption on the network communication capabilities does not hold in many scenarios of practical relevance. DCDA addresses this setting by restricting the communication between nodes in each round to a fixed number of dimensions. We bound the convergence rate of DCDA under different communication protocols and network architectures. We also consider extensions to the cases in which gradient knowledge is imperfect and in which transmitted messages are corrupted by additive noise or quantized. Finally, we provide numerical simulations demonstrating the performance of DCDA in these settings.