Chatbot shows 'wild west' of Internet racism
Twitter has become a 'wild west' platform where anyone can voice their opinion, bigoted or not. The Internet overall has been a fantastic force for good, unleashing information and communication platforms as never before. But it has also been a breeding ground for the spread of hatred and discrimination, and Microsoft's recent experiment with artificial intelligence proves that racism is still alive and well in the public sphere.
On 23 March Microsoft launched their new chatbot Tay on Twitter, an artificial intelligence technology designed to learn through its interaction with millennial users. But after less than 24 hours the bot had posted racist, anti-Semitic, sexist, and violent tweets through communicating with other Twitter users.
Before being taken down, Tay managed to tweet:
"Jews did 9/11", "#KKK", and "feminism is cancer",
along with a slew of other offensive tweets.
The bot also quoted Trump's promise to build a wall and have Mexico pay for it, praised Adolf Hitler, and incited racial hatred across the Internet. Microsoft claims that trolls on Twitter provoked the discriminatory tweets, but the bot merely copied what much of Twitter was already writing and what it could find in its Internet searches. The hate already existed; Microsoft's mess simply highlighted the problem everyone knew was there.
The company deactivated the bot the next day and has since apologized for the now-deleted tweets. Peter Lee, Microsoft's vice president of research, wrote:
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay."
But there is still a lesson to be learned from this incident: although progress has been made toward equality, there will always be a need for more campaigning, more activism, and more challenges to racist ideas throughout all societies.
Mary Schlichte