Robotic-Assisted Minimally Invasive Surgery allows for easy recording of kinematic data, and presents excellent opportunities for data-intensive approaches to the assessment of surgical skill, system design, and automation of procedures. However, typical surgical cases result in long data streams, and therefore automated segmentation into gestures is important. The public release of the JIGSAWS dataset allowed for developing, training, and benchmarking data-intensive segmentation algorithms. However, this dataset suffers from several limitations: it is small, and its gestures are similar in structure and direction. This may limit the generalization of the algorithms to real surgical data, which are characterized by movements in arbitrary directions. In this research, we use a recurrent neural network to segment a suturing task, and demonstrate one such problem: limited generalization to rotation. We propose a simple augmentation that can solve this problem without collecting new data, and demonstrate its benefit using: (1) the JIGSAWS dataset, and (2) a new dataset that we recorded with a da Vinci Research Kit. The performance of the network that was trained without augmentation deteriorated when we tested it with rotated versions of the test data, and dropped to chance level when we tested it with the new dataset. In contrast, the performance of the network that was trained with rotation augmentation was mostly steady, and it achieved good results on the new data. Our study highlights the importance and promise of data augmentation in the analysis of kinematic data in surgical data science.
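To make the idea of rotation augmentation concrete, the following is a minimal sketch of how a Cartesian tool-tip trajectory could be rotated before training. The function names, the choice of a rotation about the vertical axis, and rotating about the trajectory's mean position are all illustrative assumptions, not the paper's exact implementation, which may also transform orientations and velocities.

```python
import numpy as np

def random_rotation_z(rng):
    # Assumption: a rotation about the vertical axis by a uniform random angle.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def augment_trajectory(positions, rng):
    # positions: (T, 3) array of Cartesian tool-tip samples.
    # Rotate the trajectory about its mean position so the workspace
    # location is preserved while the movement direction changes.
    R = random_rotation_z(rng)
    center = positions.mean(axis=0)
    return (positions - center) @ R.T + center
```

Each training epoch could then draw a fresh rotation per trajectory, exposing the network to movements in many directions without recording new data.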