People who are congenitally blind continuously miss a great deal of information in their interactions with the external environment. How can we limit the amount of information they miss, ultimately enriching their perception of the world by providing them with inputs that sighted people generally receive through vision? Sensory substitution devices (SSDs) were originally developed specifically with this aim. SSDs are devices that transform the information conveyed by one missing sensory modality (e.g., vision) into another, intact sensory modality (e.g., audition) via a predetermined algorithm that users can learn. SSD algorithms preserve information about the shape, color, and spatial position of objects in a scene, creating what we call auditory soundscapes. SSD users manage to perform all sorts of "visual" tasks, raising the question of how this information is processed by the brain, given that, classically, sensory brain specializations were considered to be determined by evolution and intrinsically linked to specific sensory modalities. In this talk I will review evidence from our lab showing that most of the known specialized regions in higher-order 'visual' cortices maintain their anatomically consistent category-selective properties (e.g., for objects or body shapes) in the absence of visual experience when category-specific SSD input is provided. I will then propose an integrated framework regarding the principles underlying sensory brain organization, ultimately supporting the notion of the brain as a task machine rather than as a sensory machine, as classically conceived. Finally, I will discuss the implications of our findings for both basic research and clinical rehabilitation settings.
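
To make the idea of a visual-to-auditory mapping concrete, here is a minimal sketch of an image-to-soundscape conversion in the spirit of vOICe-style SSDs: the image is scanned column by column over time, vertical position maps to pitch, and pixel brightness maps to loudness. This is only an illustrative assumption for exposition, not the specific algorithm used in the work described in the talk (which also encodes, e.g., color).

```python
# Illustrative image-to-soundscape sketch (vOICe-style mapping assumed):
# left-to-right column scan = time, row height = pitch, brightness = loudness.
# NOT the actual SSD algorithm discussed in the talk.
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_min=200.0, f_max=4000.0):
    """Convert a 2D grayscale image (values in [0, 1]) into a mono waveform."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    # Top rows of the image are assigned higher frequencies.
    freqs = np.geomspace(f_max, f_min, n_rows)
    t = np.arange(samples_per_col) / sample_rate
    chunks = []
    for col in range(n_cols):                           # scan columns over time
        column = image[:, col]                          # brightness per row
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine per row
        chunks.append((column[:, None] * tones).sum(axis=0))
    wave = np.concatenate(chunks)
    return wave / (np.abs(wave).max() + 1e-9)           # normalize to [-1, 1]

# Example: a bright diagonal line (top-left to bottom-right) becomes a
# descending pitch sweep over the one-second soundscape.
wave = image_to_soundscape(np.eye(64))
```

A trained user can learn such a mapping well enough to recover shape and spatial layout from the resulting soundscapes, which is what makes the "visual" tasks described above possible through audition.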