Compositional specifications expressed in temporal logic have been shown to aid reinforcement learning algorithms in achieving complex tasks. However, when a compositional task is under-specified, learning agents may fail to learn useful sub-policies. In this work, we explore improving coarse-grained specifications via a CEGAR-inspired strategy that refines the abstract graph induced over an environment. The proposed framework constructs a candidate refinement by sampling trajectories along an edge of the abstract graph generated from the specification and then training a classifier or estimating a convex hull over the reached states, yielding a new abstract graph to guide the agent’s policy. Initial experiments show promising improvements in specification satisfiability after applying the proposed refinements to coarse-grained specifications.
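As a rough illustration of the refinement step sketched above, the snippet below estimates a convex hull over the states reached while attempting one edge sub-task and exposes a membership test that could define a candidate refined region. This is a minimal sketch under assumptions not stated in the abstract: the trajectory sampler, the state dimensionality, and the collected `reached_states` are hypothetical placeholders, not the framework's actual interface.

```python
# Minimal sketch: convex-hull-based candidate refinement for one abstract-graph edge.
# `reached_states` is a hypothetical stand-in for states collected from sampled trajectories.
import numpy as np
from scipy.spatial import ConvexHull


def estimate_refined_region(reached_states: np.ndarray, tol: float = 1e-9):
    """Fit a convex hull over states reached during the edge sub-task and
    return a membership test for the resulting candidate region."""
    hull = ConvexHull(reached_states)                 # (n_states, state_dim) array
    A, b = hull.equations[:, :-1], hull.equations[:, -1]

    def contains(state: np.ndarray) -> bool:
        # A point lies inside the hull if it satisfies every facet inequality A·x + b <= 0.
        return bool(np.all(A @ state + b <= tol))

    return contains


# Hypothetical usage with placeholder 2-D states sampled for a single edge.
reached_states = np.random.rand(200, 2)
in_region = estimate_refined_region(reached_states)
print(in_region(np.array([0.5, 0.5])))
```

A classifier-based variant would replace the hull with, e.g., a binary classifier trained on success/failure labels of the sampled trajectories; the hull version is shown only because it requires no labels beyond the reached states.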