Some experts think it could. Many will remember "Tay", Microsoft's chatbot that did just this during its social encounters on Twitter, where it was deployed "to experiment with and conduct research on conversational understanding". It was promptly shut down.

So how could this happen? In part because AI programs can inherit the biases of their programmers, each of whom has their own individual opinions. AI is also built to actively engage with and challenge viewpoints in order to "learn", much as humans do.
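Tay's actual system is not public, so the following is only a toy sketch of the general mechanism: a naive word-sentiment model, built here from scratch in plain Python, that faithfully learns whatever skew exists in its training data. The data, labels, and group names are all hypothetical.

```python
# Toy sketch (NOT Tay's or any real system's code): a naive
# word-sentiment model that absorbs the bias baked into its training set.
from collections import Counter, defaultdict

# Hypothetical training data with a built-in skew: every sentence
# mentioning "group_a" happens to carry a negative label.
training_data = [
    ("group_a ruined the event", "neg"),
    ("group_a caused problems again", "neg"),
    ("group_b helped everyone", "pos"),
    ("group_b saved the day", "pos"),
]

# Count how often each word co-occurs with each label.
word_label_counts = defaultdict(Counter)
for sentence, label in training_data:
    for word in sentence.split():
        word_label_counts[word][label] += 1

def predicted_sentiment(word):
    """Return the label most strongly associated with a word."""
    counts = word_label_counts[word]
    return counts.most_common(1)[0][0] if counts else "unknown"

# The model has "learned" a prejudice purely from skewed data:
print(predicted_sentiment("group_a"))  # neg
print(predicted_sentiment("group_b"))  # pos
```

Nothing in the code itself is prejudiced; the bias comes entirely from the examples it was given, which is the point.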

A study has also found that even when prejudice is deliberately kept out of the code, artificial intelligence applications can still develop their own set of biases.
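How could bias appear when none was programmed in? One proposed mechanism is simple social learning: agents that merely copy the traits of higher-scoring peers can drift toward favouring their own group. The sketch below is a loose toy simulation of that idea, not the study's actual model; the population size, payoffs, and group labels are all assumptions.

```python
# Toy simulation (assumptions only, not the study's code): agents play a
# donation game and copy the trait of any higher-scoring peer they meet.
# No agent starts biased, and no bias is coded in, yet willingness to
# donate to the out-group tends to erode over time.
import random

random.seed(0)

POP = 50      # number of agents
ROUNDS = 200  # generations of play and imitation

# Each agent's probability of donating to an out-group member; start
# fully unbiased (always donate). Agents belong to one of two groups.
rates = [1.0] * POP
groups = [i % 2 for i in range(POP)]

for _ in range(ROUNDS):
    scores = [0.0] * POP
    for i in range(POP):
        for j in random.sample(range(POP), 5):
            if i == j:
                continue
            # In-group members are always helped; out-group members are
            # helped with probability rates[i]. Donating costs the donor.
            if groups[i] == groups[j] or random.random() < rates[i]:
                scores[i] -= 1.0   # cost to the donor
                scores[j] += 1.5   # benefit to the receiver
    # Social learning: copy a random peer's trait if they scored higher,
    # then apply a little random mutation, clamped to [0, 1].
    new_rates = rates[:]
    for i in range(POP):
        j = random.randrange(POP)
        if scores[j] > scores[i]:
            new_rates[i] = rates[j]
        new_rates[i] = min(1.0, max(0.0, new_rates[i] + random.uniform(-0.05, 0.05)))
    rates = new_rates

avg_outgroup_donation = sum(rates) / POP
print(round(avg_outgroup_donation, 2))
```

Because refusing out-group donations saves cost, stingier agents score higher and get copied, so the average out-group donation rate falls below its unbiased starting value of 1.0 without any prejudice being written into the rules.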

However, this is based on a single study published in the journal Scientific Reports, so it is by no means proven at scale. In reality, then, the chances of this transpiring may be rather slim...what do you think?