The devices used for vocal rehabilitation after laryngectomy produce distinctly robotic-sounding speech. This study aims to introduce human-like qualities into such synthetically generated voices. A simplified source-filter model, linear predictive coding (LPC) coefficients, and line spectral frequencies (LSFs) were used to characterize the vocal tract and manipulate the acoustic properties of speech. Two mapping functions were employed to convert between synthetic-voice and human-speech features: a Gaussian mixture model (GMM) and a linear regression (LR) model. The models were trained on 100 pairs of human and synthetically generated utterances, aligned by dynamic time warping. Objective tests showed that both mapping functions produced significant changes in the re-synthesized speech and that the resulting spectra were close to those of the human voice, with the LR mapping performing slightly better than the GMM mapping. Listening tests confirmed this result; they also indicated, however, that voices re-synthesized from the transformed model coefficients, while improved over the original synthetic voice, still lacked human quality. This may imply that the vocal tract model carries only partial information about the perceived artificiality of speech. Future work will investigate a more elaborate model that includes the excitation and radiation signals of speech production and alters their features to better convert a synthetically generated voice into a human-sounding one.
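The feature pipeline the abstract outlines — LPC analysis of a speech frame, conversion of the LPC polynomial to LSFs, and a least-squares linear mapping between paired feature sets — can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the GMM branch, the dynamic-time-warping alignment, and the resynthesis stage are omitted, and all function names are illustrative.

```python
import numpy as np

def lpc(frame, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin)."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error.
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err
        a[1:i] = a[1:i] + k * a[1:i][::-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a

def lpc_to_lsf(a):
    """Line spectral frequencies from an LPC polynomial A(z).

    Uses the standard sum/difference polynomials
    P(z) = A(z) + z^-(p+1) A(1/z),  Q(z) = A(z) - z^-(p+1) A(1/z);
    their roots lie on the unit circle and interleave.
    """
    order = len(a) - 1
    p = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    ang = np.angle(np.concatenate([np.roots(p), np.roots(q)]))
    eps = 1e-6  # drop the trivial roots at z = +1 and z = -1
    lsf = np.sort(ang[(ang > eps) & (ang < np.pi - eps)])
    return lsf[:order]

def train_lr_mapping(X, Y):
    """Least-squares linear map (with bias) from source to target features."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
    return W

def apply_lr_mapping(W, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ W
```

A typical use would be to extract per-frame LSF vectors from the synthetic and human utterances, stack them into matrices `X` and `Y`, train `W = train_lr_mapping(X, Y)`, and then map unseen synthetic-voice LSFs with `apply_lr_mapping` before converting back to filter coefficients for resynthesis.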