Our ultimate aim was to achieve commodity telepresence systems capable of communicating both what someone looks like and what, within the technologically joined space, they are looking at. Towards this we have implemented a previously distributed approach to reconstructing form from multiple video streams, so that it runs on a single computer. Importantly, the way in which the problem is parallelised has been optimised to reflect the various stages of the process rather than the need to minimise data communication across a network. The Exact Polyhedral Visual Hull (EPVH) algorithm had previously been distributed across machines to achieve real-time frame rates. EPVH has five sequential steps, of which four were previously parallelised as two pairs. The metric for parallelising each pair was thus a best fit across both sequential steps within it, and the outcome of the first step of a pair could not inform the parallelisation of the second. We instead parallelised all five stages according to both distinct per-stage metrics and the data produced by the preceding stage, thereby providing a better fit of parallelisation to both process and data. The study proposes a method of parallelisation theoretically better tailored to execution on a single machine, gives a detailed description of the implementation along with a number of optimisations that further improve performance, and provides indicative results for example multicore CPU and GPU platforms that may be of interest to researchers and practitioners wishing to implement a real-time 3D reconstruction system.
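The core idea of per-stage parallelisation can be illustrated with a minimal sketch: each stage repartitions its work using its own load metric and consumes the output of the preceding stage. The stage functions, metrics, and greedy partitioner below are illustrative assumptions, not the paper's actual EPVH stages or scheduling policy.

```python
# Hypothetical sketch of stage-wise parallelisation on one machine:
# every stage is balanced by its OWN cost metric, computed over the
# data emitted by the previous stage (names/metrics are illustrative).
from concurrent.futures import ThreadPoolExecutor


def partition(items, weight, n_workers):
    """Greedy balanced partition of items into n_workers buckets by weight."""
    buckets = [[] for _ in range(n_workers)]
    loads = [0.0] * n_workers
    for item in sorted(items, key=weight, reverse=True):
        i = loads.index(min(loads))  # assign to the least-loaded worker
        buckets[i].append(item)
        loads[i] += weight(item)
    return buckets


def run_stage(stage_fn, items, weight, pool, n_workers):
    """Partition by this stage's metric, process buckets in parallel, flatten."""
    buckets = partition(items, weight, n_workers)
    results = pool.map(lambda b: [stage_fn(x) for x in b], buckets)
    return [y for bucket in results for y in bucket]


n = 4
with ThreadPoolExecutor(max_workers=n) as pool:
    data = list(range(20))  # stand-in for per-camera input data
    # Stage A: uniform cost per item
    a = run_stage(lambda x: x * 2, data, lambda x: 1.0, pool, n)
    # Stage B: cost metric derived from stage A's output values
    b = run_stage(lambda x: x + 1, a, lambda x: float(x), pool, n)
    print(sorted(b))  # order within buckets varies, so compare sorted
```

The point of the sketch is that stage B's partitioning cannot be fixed ahead of time: it depends on what stage A produced, which is the flexibility the per-stage scheme provides over pairing two steps under one shared metric.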