We propose an audiovisual source separation algorithm for speech signals. The algorithm first extracts time segments with low mouth-region activity from synchronous video recordings. An optimal classifier, selected automatically, then detects silent intervals within these instants of low visual mouth activity. Finally, the source separation problem is formulated and solved over the entire signal duration. The approach was tested on two challenging speech corpora, each with two speakers and two microphones: in the first corpus, separate source signals were mixed in a simulated room, while the second corpus contains recorded conversations. The results are promising on both corpora: with the visual silence detector, the signal-to-interference ratio achieved by the source separation algorithm increases.
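As a hypothetical illustration of the first stage only (not the paper's actual detector), low mouth-region activity can be flagged by thresholding a simple per-frame motion score, here the mean absolute difference between consecutive mouth-region crops; the function name, threshold value, and scoring rule are illustrative assumptions:

```python
import numpy as np

def low_activity_segments(mouth_frames, threshold=2.0):
    """Return a boolean mask marking frames with low mouth-region motion.

    mouth_frames: array of shape (T, H, W), grayscale mouth-region crops.
    threshold: illustrative motion threshold (assumption, not from the paper).
    """
    frames = np.asarray(mouth_frames, dtype=float)
    # Motion score per frame: mean absolute difference to the previous frame.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    # The first frame has no predecessor; reuse the first available score.
    motion = np.concatenate([diffs[:1], diffs])
    return motion < threshold

# Example: 10 static (silent-looking) frames followed by 10 moving frames.
rng = np.random.default_rng(0)
still = np.zeros((10, 8, 8))
moving = rng.uniform(0, 255, size=(10, 8, 8))
mask = low_activity_segments(np.concatenate([still, moving]), threshold=2.0)
```

The resulting mask marks the static frames as low-activity candidates, which a silence classifier would then examine on the audio side.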