In the history of artificial intelligence (AI), agent research has focused primarily on external environments, outside incentives, and behavioral responses. Internal operating mechanisms (i.e., attending to the self in the way human self-awareness does) have rarely been a concern in agent design; the idea that an agent might learn by paying attention to itself has largely been ignored by AI researchers. In this work, we integrate a self-awareness mechanism into an agent's learning architecture, so that an agent's thinking and behavior come closer to the way people operate and agent-based social simulations become more akin to the real world. Our research objectives are: a) to propose a self-aware agent model that combines an external learning mechanism with an internal cognitive capacity, incorporating super-ego and ego personalities; and b) to apply an iterated Prisoner's Dilemma game to represent the conflict between the public good and private interest in an agent society, and to analyze the effects of an agent's self-awareness capacity on its individual performance and on social cooperation behavior. Our goal is to show that a cognitive learning model can improve intelligent agent performance and support collaborative agent behavior. We believe additional simulations and analyses will reveal enriched social benefits even when only a few agents achieve limited self-awareness capabilities.
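The self-aware agent model itself is not specified in this passage, but the iterated Prisoner's Dilemma setting it builds on can be sketched minimally. The payoff values (T=5, R=3, P=1, S=0) and the two baseline strategies below are illustrative assumptions, not the agents proposed in the paper:

```python
# Minimal sketch of an iterated Prisoner's Dilemma.
# Payoff values and strategies are illustrative assumptions only.

PAYOFF = {  # (my move, opponent's move) -> my payoff; 'C' cooperate, 'D' defect
    ('C', 'C'): 3,  # R: reward for mutual cooperation (the public good)
    ('C', 'D'): 0,  # S: sucker's payoff
    ('D', 'C'): 5,  # T: temptation to defect (private interest)
    ('D', 'D'): 1,  # P: punishment for mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Pursue private interest unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return cumulative payoffs of the two strategies over `rounds` iterations."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # -> (9, 14)
print(play(tit_for_tat, tit_for_tat))    # -> (30, 30)
```

The contrast between the two runs illustrates the conflict the paper studies: mutual defection caps both scores, while mutual cooperation yields the higher joint outcome that a self-awareness capacity might help agents reach.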