Background: spatial sound localisation in cochlear implant (CI) users has been only partially characterised with classic approaches, which did not test the role of head movements. Objective: to measure sound localisation abilities throughout 3D space in CI users, under active and passive listening conditions. Method: a novel approach to 3D sound localisation was developed by co-registering tracked auditory space with perceived visual space (virtual reality headset), allowing full control of multisensory stimulation in 3D and capturing the behaviour of eyes, head and hand. We tested a group of unilateral CI users (uCI, N=19), bilateral CI users (bCI, N=17) and normal-hearing controls (NH, N=20) in a hand-pointing task to sounds delivered around the participant. Participants were completely unaware of target locations and were tested in a static condition (no head movement allowed during sound delivery) or an active condition (head movements allowed). Pointing responses were permitted only at the end of sound delivery. Results: (1) sound localisation in azimuth was worse in CI users than in NH controls, but better in bCI than in uCI users; (2) the azimuthal improvement in bCI users was accompanied by substantial uncertainty about the actual elevation and depth of sound sources; (3) in CI users, depth perception typically collapsed onto a single distance; (4) active listening conferred an advantage over passive listening only for CI users who moved their head (‘Head movers’). Conclusion: this novel methodology, based on virtual reality and 3D motion capture, highlights the complexity of sound localisation difficulties in CI users and enables new rehabilitation strategies based on multisensory (visual and auditory) stimulation.