Reference-Broadcast Synchronization (RBS) is a technique that allows a set of receivers in a broadcast network to accurately estimate each other's clock values. RBS provides a relative time frame for conversion between the local clocks of different nodes, and can be used to synchronize nodes to an external time source such as GPS. However, RBS by itself does not output a logical clock at every node, and so it does not solve internal clock synchronization. In this work we study the theoretical properties of RBS in the worst-case model, in which the performance of a clock synchronization algorithm is measured by the worst-case skew it can incur. We suggest a method by which RBS can be incorporated in standard internal clock synchronization algorithms. This is achieved by separating the task of estimating the clock values of other nodes in the network from the task of using these estimates to output a logical clock value. The separation is modelled using a virtual estimate graph, overlaid on the real network graph, which represents the information various nodes can obtain about each other. RBS estimates are represented in the estimate graph as edges between nodes at distance 2 from each other in the original network graph. A clock synchronization algorithm then operates on the estimate graph as though it were the original network. To illustrate the merits of this approach, we modify a recent optimal gradient clock synchronization algorithm to work in this setting. The modified algorithm transparently takes advantage of RBS estimates. Its quality of synchronization depends on the diameter of the estimate graph, which is typically much smaller than the diameter of the original network graph.
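The estimate-graph construction described above can be sketched in a few lines. This is a minimal illustrative sketch, not code from the paper: the helper names (`estimate_graph`, `diameter`, `adj`) are our own, and graphs are represented as plain adjacency dictionaries. The sketch augments a network graph with an edge between every pair of nodes at distance 2 (two receivers of a common neighbour's broadcast, which is where RBS estimates arise), and then compares the diameters of the two graphs.

```python
from collections import deque
from itertools import combinations

def estimate_graph(adj):
    """Virtual estimate graph overlay (illustrative): keep every original
    edge, and add an edge between any two nodes at distance exactly 2,
    i.e., two distinct receivers of a common neighbour's broadcast."""
    est = {u: set(nbrs) for u, nbrs in adj.items()}
    for v, nbrs in adj.items():
        # All receivers of v's broadcast can estimate each other via RBS.
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:  # only add edges for distance exactly 2
                est[a].add(b)
                est[b].add(a)
    return est

def diameter(adj):
    """Diameter of a connected graph via BFS from every node."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

# Toy example: a path on 5 nodes. The overlay roughly halves the diameter,
# matching the intuition that the estimate graph's diameter is smaller.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
est = estimate_graph(path)
print(diameter(path), diameter(est))  # 4 2
```

On a path, every interior node's two neighbours get a direct estimate edge, so hop distances shrink by about a factor of 2; the worst-case skew of the modified gradient algorithm scales with this smaller diameter rather than the original one.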